From ncoghlan at gmail.com  Sun Feb  1 02:01:16 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 1 Feb 2015 11:01:16 +1000
Subject: [Python-ideas] Adding a subprocess.CompletedProcess class
In-Reply-To: <CAOvn4qiH_NScSjg0W+FXSO21jufu3upHLFcaTeqxiR85-wakxQ@mail.gmail.com>
References: <CAOvn4qgTKVTbcY+BN_Z7-v8bMRprn=2FSqcnMuw9j+NT-cegPA@mail.gmail.com>
 <CAGE7PNKQ0QpZ-3hab4xJppknhCOOCJXuRWadsKGeGFz-BbJg_A@mail.gmail.com>
 <CADiSq7dAYUudHbDzRuVxmL6SsT4i8_50jvGNPf0G2Jet7MS8Dg@mail.gmail.com>
 <CAOvn4qhH2Et49NgMMED3DQ-9GuX1k8cELGEXyegh64ezMHoqfw@mail.gmail.com>
 <20150128171318.22a829eb@anarchist.wooz.org>
 <CADiSq7etZdNe9mgjpfXO8b=r5981-vUxP3t=Kn+62n-SwW+b1w@mail.gmail.com>
 <CAOvn4qiH_NScSjg0W+FXSO21jufu3upHLFcaTeqxiR85-wakxQ@mail.gmail.com>
Message-ID: <CADiSq7cBgZzwrRZovs_Ti50WCnVtu_nwY38yt4A2jdYFTX=HMA@mail.gmail.com>

On 30 Jan 2015 07:11, "Thomas Kluyver" <thomas at kluyver.me.uk> wrote:
>
> On 29 January 2015 at 03:47, Nick Coghlan <ncoghlan at gmail.com> wrote:
>>
>> I still suspect we should be offering a simpler way to decouple the
>> creation of the pipes from the subprocess call, but I have no idea
>> what that API should look like,
>
>
> Presumably that would need some kind of object representing a
not-yet-started process. Technically, that could be Popen, but for
backwards compatibility the Popen constructor needs to start the process,
and p = Popen(..., start=False) seems inelegant.
>
> Let's imagine it's a new class called Command. Then you could start
coming up with interfaces like:
>
> c = subprocess.Command(...)
> c.stdout = fileobj
> c.stderr = fileobj2
> # Or
> c.capture('combined')  # sets stdout=PIPE and stderr=STDOUT
> # Maybe get into operator overloading?
> pipeline = c | c2
> # Could this work? Would probably require threading
> c.stdout = BytesIO()
> c.stderr_decoded = StringIO()
> # When you've finished setting things up
> c.run()  # returns a Popen instance
>
>
> N.B. This is 'thinking aloud', not any kind of proposal - I'm not
convinced by any of that API myself.
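As a rough illustration only (the `Command` class and its method names are hypothetical, taken from the thinking-aloud sketch above rather than any existing API), a deferred-start wrapper might look like:

```python
import subprocess

class Command:
    """Hypothetical deferred-start wrapper: configure first, start on .run()."""

    def __init__(self, args, **kwargs):
        self.args = args
        self.kwargs = kwargs  # stdout/stderr etc. may be set later

    def capture(self, mode='combined'):
        # Mirrors the c.capture('combined') idea: stdout=PIPE, stderr=STDOUT.
        self.kwargs['stdout'] = subprocess.PIPE
        if mode == 'combined':
            self.kwargs['stderr'] = subprocess.STDOUT
        return self

    def run(self):
        # Unlike the Popen constructor, the process only starts here.
        return subprocess.Popen(self.args, **self.kwargs)

c = Command(['echo', 'hello']).capture()
p = c.run()            # returns a Popen instance, as in the sketch
out, _ = p.communicate()
```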

I'm personally waiting for someone to get annoyed enough to try porting
Julia's Cmd objects to Python and publish the results on PyPI :)

Cheers,
Nick.

>
> Thomas
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150201/06ea6aef/attachment.html>

From ncoghlan at gmail.com  Sun Feb  1 02:15:18 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 1 Feb 2015 11:15:18 +1000
Subject: [Python-ideas] "atexit" equivalent for functions in contextlib
In-Reply-To: <877fw5roh9.fsf@vostro.rath.org>
References: <87bnlirvar.fsf@vostro.rath.org>
 <CADiSq7eysQvKV=zirToUJt2oVWYuz21xiE+duRt+ZDv_9a8bJA@mail.gmail.com>
 <877fw5roh9.fsf@vostro.rath.org>
Message-ID: <CADiSq7c01k-jLnNr+0+jzeyDanxEqqC+5tA+53cVDUgwePW5Bw@mail.gmail.com>

On 30 Jan 2015 12:29, "Nikolaus Rath" <Nikolaus at rath.org> wrote:
>
> Nick Coghlan <ncoghlan-Re5JQEeQqe8AvxtiuMwx3w at public.gmane.org> writes:
> > On 29 January 2015 at 15:49, Nikolaus Rath <
Nikolaus-BTH8mxji4b0 at public.gmane.org> wrote:
> >> It's not a lot of code, but my feeling is that not anyone who might be
> >> interested in an "on_return" functionality would be able to come up
with
> >> this right away.
> >
> > What does it gain you other than saving a level of indentation over
> > the form with an explicit context manager?
>
> Nothing. But if that's not a good enough reason, why do we have
> contextlib.ContextDecorator?

I'm occasionally inclined to consider accepting that proposal a mistake on
my part. It's certainly borderline.

That said, the main advantage that applies to ContextDecorator but not to
this proposal is lightweight addition of "before & after" behaviour like
serialisation, tracing or logging to a function - you just decorate it
appropriately without touching any other part of the function. That's a
useful thing to encourage, and it's handy that generator based context
managers can be used that way by default.

This proposal is different:

* it's inherently coupled to the function body (you need to change the
function signature, and allocate resources inside the function for later
cleanup)
* it's adding information about an implementation detail (how the function
manages resource deallocation) to a header that's ideally focused on
declaring the function's interface and externally visible behaviour

Cheers,
Nick.

>
> Best,
> -Nikolaus
>
> --
> GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
> Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F
>
>              “Time flies like an arrow, fruit flies like a Banana.”
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/

From wes.turner at gmail.com  Sun Feb  1 04:28:11 2015
From: wes.turner at gmail.com (Wes Turner)
Date: Sat, 31 Jan 2015 21:28:11 -0600
Subject: [Python-ideas] Adding a subprocess.CompletedProcess class
In-Reply-To: <CAOvn4qiH_NScSjg0W+FXSO21jufu3upHLFcaTeqxiR85-wakxQ@mail.gmail.com>
References: <CAOvn4qgTKVTbcY+BN_Z7-v8bMRprn=2FSqcnMuw9j+NT-cegPA@mail.gmail.com>
 <CAGE7PNKQ0QpZ-3hab4xJppknhCOOCJXuRWadsKGeGFz-BbJg_A@mail.gmail.com>
 <CADiSq7dAYUudHbDzRuVxmL6SsT4i8_50jvGNPf0G2Jet7MS8Dg@mail.gmail.com>
 <CAOvn4qhH2Et49NgMMED3DQ-9GuX1k8cELGEXyegh64ezMHoqfw@mail.gmail.com>
 <20150128171318.22a829eb@anarchist.wooz.org>
 <CADiSq7etZdNe9mgjpfXO8b=r5981-vUxP3t=Kn+62n-SwW+b1w@mail.gmail.com>
 <CAOvn4qiH_NScSjg0W+FXSO21jufu3upHLFcaTeqxiR85-wakxQ@mail.gmail.com>
Message-ID: <CACfEFw8-bmx9MbeK5k3cb5RwpZYkT6QU2Gj2_DR-rUWfS--W3A@mail.gmail.com>

On Jan 29, 2015 3:11 PM, "Thomas Kluyver" <thomas at kluyver.me.uk> wrote:
>
> On 29 January 2015 at 03:47, Nick Coghlan <ncoghlan at gmail.com> wrote:
>>
>> I still suspect we should be offering a simpler way to decouple the
>> creation of the pipes from the subprocess call, but I have no idea
>> what that API should look like,
>
>
> Presumably that would need some kind of object representing a
not-yet-started process. Technically, that could be Popen, but for
backwards compatibility the Popen constructor needs to start the process,
and p = Popen(..., start=False) seems inelegant.
>
> Let's imagine it's a new class called Command. Then you could start
coming up with interfaces like:
>
> c = subprocess.Command(...)
> c.stdout = fileobj
> c.stderr = fileobj2
> # Or
> c.capture('combined')  # sets stdout=PIPE and stderr=STDOUT
> # Maybe get into operator overloading?
> pipeline = c | c2
> # Could this work? Would probably require threading
> c.stdout = BytesIO()
> c.stderr_decoded = StringIO()
> # When you've finished setting things up
> c.run()  # returns a Popen instance
>
>
> N.B. This is 'thinking aloud', not any kind of proposal - I'm not
convinced by any of that API myself.

Here's a start at adding an 'expect_returncode' to sarge.

I really like the sarge .run() and Capture APIs:

* http://sarge.readthedocs.org/en/latest/overview.html#why-not-just-use-subprocess
* http://sarge.readthedocs.org/en/latest/tutorial.html#use-as-context-managers

From wes.turner at gmail.com  Sun Feb  1 04:30:24 2015
From: wes.turner at gmail.com (Wes Turner)
Date: Sat, 31 Jan 2015 21:30:24 -0600
Subject: [Python-ideas] Adding a subprocess.CompletedProcess class
In-Reply-To: <CACfEFw8-bmx9MbeK5k3cb5RwpZYkT6QU2Gj2_DR-rUWfS--W3A@mail.gmail.com>
References: <CAOvn4qgTKVTbcY+BN_Z7-v8bMRprn=2FSqcnMuw9j+NT-cegPA@mail.gmail.com>
 <CAGE7PNKQ0QpZ-3hab4xJppknhCOOCJXuRWadsKGeGFz-BbJg_A@mail.gmail.com>
 <CADiSq7dAYUudHbDzRuVxmL6SsT4i8_50jvGNPf0G2Jet7MS8Dg@mail.gmail.com>
 <CAOvn4qhH2Et49NgMMED3DQ-9GuX1k8cELGEXyegh64ezMHoqfw@mail.gmail.com>
 <20150128171318.22a829eb@anarchist.wooz.org>
 <CADiSq7etZdNe9mgjpfXO8b=r5981-vUxP3t=Kn+62n-SwW+b1w@mail.gmail.com>
 <CAOvn4qiH_NScSjg0W+FXSO21jufu3upHLFcaTeqxiR85-wakxQ@mail.gmail.com>
 <CACfEFw8-bmx9MbeK5k3cb5RwpZYkT6QU2Gj2_DR-rUWfS--W3A@mail.gmail.com>
Message-ID: <CACfEFw96bnc_3PHJDiDkxgokp88d9=XC=ahwMVJawtbnsxyNAA@mail.gmail.com>

On Jan 31, 2015 9:28 PM, "Wes Turner" <wes.turner at gmail.com> wrote:
>
> Here's a start at adding an 'expect_returncode' to sarge.

https://bitbucket.org/westurner/sarge/commits/6cc7780f00ccba2a231c2b7c51cb2a29d287e660

https://bitbucket.org/vinay.sajip/sarge/issue/26/enh-equivalent-to-subprocesscheck_output

From p.f.moore at gmail.com  Sun Feb  1 14:48:50 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Sun, 1 Feb 2015 13:48:50 +0000
Subject: [Python-ideas] Adding a subprocess.CompletedProcess class
In-Reply-To: <CAOvn4qiH_NScSjg0W+FXSO21jufu3upHLFcaTeqxiR85-wakxQ@mail.gmail.com>
References: <CAOvn4qgTKVTbcY+BN_Z7-v8bMRprn=2FSqcnMuw9j+NT-cegPA@mail.gmail.com>
 <CAGE7PNKQ0QpZ-3hab4xJppknhCOOCJXuRWadsKGeGFz-BbJg_A@mail.gmail.com>
 <CADiSq7dAYUudHbDzRuVxmL6SsT4i8_50jvGNPf0G2Jet7MS8Dg@mail.gmail.com>
 <CAOvn4qhH2Et49NgMMED3DQ-9GuX1k8cELGEXyegh64ezMHoqfw@mail.gmail.com>
 <20150128171318.22a829eb@anarchist.wooz.org>
 <CADiSq7etZdNe9mgjpfXO8b=r5981-vUxP3t=Kn+62n-SwW+b1w@mail.gmail.com>
 <CAOvn4qiH_NScSjg0W+FXSO21jufu3upHLFcaTeqxiR85-wakxQ@mail.gmail.com>
Message-ID: <CACac1F913pukSDKrWTP46M2XEcfU2j90+N_pBoagHxGS3kHSRA@mail.gmail.com>

On 29 January 2015 at 21:09, Thomas Kluyver <thomas at kluyver.me.uk> wrote:
> On 29 January 2015 at 03:47, Nick Coghlan <ncoghlan at gmail.com> wrote:
>>
>> I still suspect we should be offering a simpler way to decouple the
>> creation of the pipes from the subprocess call, but I have no idea
>> what that API should look like,
>
>
> Presumably that would need some kind of object representing a
> not-yet-started process. Technically, that could be Popen, but for backwards
> compatibility the Popen constructor needs to start the process, and p =
> Popen(..., start=False) seems inelegant.

The thing I've ended up needing to do once or twice, which is
unreasonably hard with subprocess, is to run the command and capture
the output, but *still* write it to stdout/stderr. So the user gets to
see the command's progress, but you can introspect the results
afterwards.

The hard bit is to do this while still displaying the output as it
arrives. You basically need to manage the stdout and stderr pipes
yourself, do nasty multi-stream interleaving, and deal with the
encoding/universal_newlines stuff on the captured data. In a
cross-platform way :-(
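A minimal sketch of that pattern (assuming one reader thread per pipe, which
avoids the deadlock when both stdout and stderr fill; the function name is
made up for illustration):

```python
import subprocess
import sys
import threading

def run_and_tee(args):
    """Run a command, echoing output live while also capturing it."""
    captured = {}

    def reader(pipe, sink, key):
        chunks = []
        for line in iter(pipe.readline, b''):
            sink.write(line.decode(errors='replace'))  # live display
            chunks.append(line)
        captured[key] = b''.join(chunks)

    proc = subprocess.Popen(args, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    threads = [
        threading.Thread(target=reader, args=(proc.stdout, sys.stdout, 'out')),
        threading.Thread(target=reader, args=(proc.stderr, sys.stderr, 'err')),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    proc.wait()
    return proc.returncode, captured['out'], captured['err']

rc, out, err = run_and_tee(['echo', 'hello world'])
```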

Paul

From toddrjen at gmail.com  Sun Feb  1 16:13:32 2015
From: toddrjen at gmail.com (Todd)
Date: Sun, 1 Feb 2015 16:13:32 +0100
Subject: [Python-ideas] More useful slices
In-Reply-To: <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
Message-ID: <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>

Although slices and ranges are used for different things and implemented
differently, conceptually they are similar: they define an integer sequence
with a start, stop, and step size.  For this reason I think that slices
could provide useful syntactic sugar for defining ranges without
sacrificing clarity.

The syntax I propose is simply to wrap a slice in parentheses.  For
examples, the following pairs are equivalent:

range(4, 10, 2)
(4:10:2)

range(3, 7)
(3:7)

range(5)
(:5)

This syntax should not be easily confused with any existing syntax.  Slices
of sequences use square brackets, dictionaries use curly braces, and
function annotations only appear in function declarations.  It is also
considerably more compact than the current range syntax, while still being
similar to a syntax (slicing) all Python users should be familiar with.
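Since the proposed syntax cannot run today, the intended equivalence can be
sketched with a small helper that maps a slice object onto the range it would
denote (the helper name is invented for illustration; an open-ended slice is
rejected here, matching range's current requirement for a stop):

```python
def range_from_slice(s):
    """Build the range a slice like (4:10:2) would denote under the proposal."""
    if s.stop is None:
        raise ValueError("an open-ended slice has no range equivalent")
    start = 0 if s.start is None else s.start
    step = 1 if s.step is None else s.step
    return range(start, s.stop, step)

# The pairs from the proposal above:
assert list(range_from_slice(slice(4, 10, 2))) == list(range(4, 10, 2))
assert list(range_from_slice(slice(3, 7))) == list(range(3, 7))
assert list(range_from_slice(slice(None, 5))) == list(range(5))
```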

From wes.turner at gmail.com  Sun Feb  1 17:29:05 2015
From: wes.turner at gmail.com (Wes Turner)
Date: Sun, 1 Feb 2015 10:29:05 -0600
Subject: [Python-ideas] Adding a subprocess.CompletedProcess class
In-Reply-To: <CACac1F913pukSDKrWTP46M2XEcfU2j90+N_pBoagHxGS3kHSRA@mail.gmail.com>
References: <CAOvn4qgTKVTbcY+BN_Z7-v8bMRprn=2FSqcnMuw9j+NT-cegPA@mail.gmail.com>
 <CAGE7PNKQ0QpZ-3hab4xJppknhCOOCJXuRWadsKGeGFz-BbJg_A@mail.gmail.com>
 <CADiSq7dAYUudHbDzRuVxmL6SsT4i8_50jvGNPf0G2Jet7MS8Dg@mail.gmail.com>
 <CAOvn4qhH2Et49NgMMED3DQ-9GuX1k8cELGEXyegh64ezMHoqfw@mail.gmail.com>
 <20150128171318.22a829eb@anarchist.wooz.org>
 <CADiSq7etZdNe9mgjpfXO8b=r5981-vUxP3t=Kn+62n-SwW+b1w@mail.gmail.com>
 <CAOvn4qiH_NScSjg0W+FXSO21jufu3upHLFcaTeqxiR85-wakxQ@mail.gmail.com>
 <CACac1F913pukSDKrWTP46M2XEcfU2j90+N_pBoagHxGS3kHSRA@mail.gmail.com>
Message-ID: <CACfEFw-9CuNcfG+uvaUKk+qV6wt38B7MeQfW98o_aTYVzspg4Q@mail.gmail.com>

On Sun, Feb 1, 2015 at 7:48 AM, Paul Moore <p.f.moore at gmail.com> wrote:

> On 29 January 2015 at 21:09, Thomas Kluyver <thomas at kluyver.me.uk> wrote:
> > On 29 January 2015 at 03:47, Nick Coghlan <ncoghlan at gmail.com> wrote:
> >>
> >> I still suspect we should be offering a simpler way to decouple the
> >> creation of the pipes from the subprocess call, but I have no idea
> >> what that API should look like,
> >
> >
> > Presumably that would need some kind of object representing a
> > not-yet-started process.


http://sarge.readthedocs.org/en/latest/reference.html#Pipeline

p = sarge.run('sleep 60', async=True)
assert type(p) == sarge.Pipeline



> Technically, that could be Popen, but for backwards
> > compatibility the Popen constructor needs to start the process, and p =
> > Popen(..., start=False) seems inelegant.
>
> The thing I've ended up needing to do once or twice, which is
> unreasonably hard with subprocess, is to run the command and capture
> the output, but *still* write it to stdout/stderr. So the user gets to
> see the command's progress, but you can introspect the results
> afterwards.


> The hard bit is to do this while still displaying the output as it
> arrives. You basically need to manage the stdout and stderr pipes
> yourself, do nasty multi-stream interleaving, and deal with the
> encoding/universal_newlines stuff on the captured data. In a
> cross-platform way :-(
>


http://sarge.readthedocs.org/en/latest/tutorial.html#buffering-issues

 '| tee filename.log'

From eltoder at gmail.com  Sun Feb  1 17:37:24 2015
From: eltoder at gmail.com (Eugene Toder)
Date: Sun, 1 Feb 2015 11:37:24 -0500
Subject: [Python-ideas] More useful slices
In-Reply-To: <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
Message-ID: <CA+KNMzkNZiVYseD3zsF788NBtgo+T3dby3KmRf9ypb_4Jt8eNQ@mail.gmail.com>

On Sun, Feb 1, 2015 at 10:13 AM, Todd <toddrjen at gmail.com> wrote:

> Although slices and ranges are used for different things and implemented
> differently, conceptually they are similar: they define an integer sequence
> with a start, stop, and step size.  For this reason I think that slices
> could provide useful syntactic sugar for defining ranges without
> sacrificing clarity.
>
> The syntax I propose is simply to wrap a slice in parentheses.  For
> examples, the following pairs are equivalent:
>
> range(4, 10, 2)
> (4:10:2)
>
> range(3, 7)
> (3:7)
>
> range(5)
> (:5)
>
> This syntax should not be easily confused with any existing syntax.
> Slices of sequences use square brackets, dictionaries use curly braces, and
> function annotations only appear in function declarations.  It is also
> considerably more compact than the current range syntax, while still being
> similar to a syntax (slicing) all python users should be familiar with.
>
It may be too late for this, but I like it. We can also elide extra
parentheses inside calls, like with generator expressions:
sum(4:10:2)

Also, using list comprehension to generator expression analogy, we can have
[4:10:2]
which produces a list instead of a range object. Other than for
completeness this does not seem very useful, though.


Eugene

From steve at pearwood.info  Sun Feb  1 18:27:12 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Mon, 2 Feb 2015 04:27:12 +1100
Subject: [Python-ideas] More useful slices
In-Reply-To: <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
Message-ID: <20150201172712.GO2498@ando.pearwood.info>

On Sun, Feb 01, 2015 at 04:13:32PM +0100, Todd wrote:
> Although slices and ranges are used for different things and implemented
> differently, conceptually they are similar: they define an integer sequence
> with a start, stop, and step size.

I'm afraid that you are mistaken. Slices do not necessarily define 
integer sequences. They are completely arbitrary, and it is up to the 
object being sliced to interpret what is or isn't valid and what the 
slice means.

py> class K(object):
...     def __getitem__(self, i):
...             print(i)
...
py> K()["abc":(2,3,4):{}]
slice('abc', (2, 3, 4), {})


In addition, the "stop" argument to a slice may be unspecified until 
it is applied to a specific sequence, while range always requires 
stop to be defined.

py> s = slice(1, None, 2)
py> "abcde"[s]
'bd'
py> "abcdefghi"[s]
'bdfh'

It isn't meaningful to define a range with an unspecified stop, but it 
is meaningful to do so for slices.
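The resolution of an unspecified stop only happens once a concrete length is
supplied, which `slice.indices` makes explicit:

```python
s = slice(1, None, 2)

# The same slice resolves to different concrete bounds per sequence length:
assert s.indices(5) == (1, 5, 2)    # "abcde"[s]     -> 'bd'
assert s.indices(9) == (1, 9, 2)    # "abcdefghi"[s] -> 'bdfh'
assert "abcde"[s] == 'bd'
assert "abcdefghi"[s] == 'bdfh'
```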


In other words, the similarity between slices and ranges is not as close 
as you think.


-- 
Steven

From masklinn at masklinn.net  Sun Feb  1 18:45:57 2015
From: masklinn at masklinn.net (Masklinn)
Date: Sun, 1 Feb 2015 18:45:57 +0100
Subject: [Python-ideas] More useful slices
In-Reply-To: <20150201172712.GO2498@ando.pearwood.info>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <20150201172712.GO2498@ando.pearwood.info>
Message-ID: <B843AE5A-3695-495F-85E5-BCD966FB8944@masklinn.net>

On 2015-02-01, at 18:27 , Steven D'Aprano <steve at pearwood.info> wrote:
> It isn't meaningful to define a range with an unspecified stop

Wouldn't that just be an infinite sequence, equivalent to
`itertools.count`?

That's how it works in Haskell[0] and Clojure[1].

[0] using the range notation, or Enum's methods
[1] https://clojuredocs.org/clojure.core/range

From thomas at kluyver.me.uk  Sun Feb  1 19:13:25 2015
From: thomas at kluyver.me.uk (Thomas Kluyver)
Date: Sun, 1 Feb 2015 10:13:25 -0800
Subject: [Python-ideas] More useful slices
In-Reply-To: <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
Message-ID: <CAOvn4qhebhPvp_OOnEqwVCYKa3zL9C6xoa_g8mWtFbrwYZX93w@mail.gmail.com>

On 1 February 2015 at 07:13, Todd <toddrjen at gmail.com> wrote:

> For examples, the following pairs are equivalent:
>
> range(4, 10, 2)
> (4:10:2)
>

I think I remember a proposal somewhere to allow slice notation (defining
slice objects) outside of [] indexing. That would conflict with this idea
of having slice notation define range objects. However, if slice notation
defined slice objects and slice objects were iterable, wouldn't that have
the same benefits?

Iterating over a slice object would work like you were lazily taking that
slice from itertools.count() - i.e. iter(a:b:c) would be equivalent to
islice(count(), a, b, c). This would also mean that (3:) would have a
logical meaning without having to modify range objects to support an
optional upper bound. I don't see any logical way to iterate over a slice
defined with negative numbers (e.g. (-4:)), so presumably iter(-4:) would
raise an exception.
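That equivalence can be sketched directly with itertools (the `iter_slice`
name is invented here; it treats a hypothetical iterable slice (a:b:c) as
islice(count(), a, b, c), rejecting negative bounds as suggested above):

```python
from itertools import count, islice

def iter_slice(s):
    """Iterate a slice as if lazily taken from count(), per the idea above."""
    for v in (s.start, s.stop, s.step):
        if v is not None and v < 0:
            raise ValueError("cannot iterate a slice with negative bounds")
    return islice(count(), s.start, s.stop, s.step)

assert list(iter_slice(slice(2, 10, 3))) == [2, 5, 8]
# An open-ended slice like (3:) yields an endless stream; take a few items:
assert list(islice(iter_slice(slice(3, None)), 4)) == [3, 4, 5, 6]
```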

Thomas

From rosuav at gmail.com  Sun Feb  1 21:38:52 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Mon, 2 Feb 2015 07:38:52 +1100
Subject: [Python-ideas] More useful slices
In-Reply-To: <CAOvn4qhebhPvp_OOnEqwVCYKa3zL9C6xoa_g8mWtFbrwYZX93w@mail.gmail.com>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <CAOvn4qhebhPvp_OOnEqwVCYKa3zL9C6xoa_g8mWtFbrwYZX93w@mail.gmail.com>
Message-ID: <CAPTjJmp18TY8OErUSjFxnvnhawXtE7PbHaHeRi0_D5BFtFQ2WA@mail.gmail.com>

On Mon, Feb 2, 2015 at 5:13 AM, Thomas Kluyver <thomas at kluyver.me.uk> wrote:
> Iterating over a slice object would work like you were lazily taking that
> slice from itertools.count() - i.e. iter(a:b:c) would be equivalent to
> islice(count(), a, b, c). This would also mean that (3:) would have a
> logical meaning without having to modify range objects to support an
> optional upper bound. I don't see any logical way to iterate over a slice
> defined with negative numbers (e.g. (-4:)), so presumably iter(-4:) would
> raise an exception.

If you're going to start defining things in terms of
itertools.count(), I'd prefer to first permit a range object to have
an infinite upper bound, and then define everything in terms of a
sliced range - range objects already support clean slicing.

Is there any particular reason for range objects to disallow infinite
bounds, or is it just that nobody's needed it?
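The clean slicing being referred to is that slicing a range already yields
another range, lazily, without materialising any elements:

```python
r = range(100)
half_open = r[10:20:2]

# Slicing a range produces a range, and ranges compare by the
# sequence of values they denote:
assert half_open == range(10, 20, 2)
assert list(half_open) == [10, 12, 14, 16, 18]
```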

ChrisA

From solipsis at pitrou.net  Sun Feb  1 21:46:36 2015
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sun, 1 Feb 2015 21:46:36 +0100
Subject: [Python-ideas] More useful slices
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <CAOvn4qhebhPvp_OOnEqwVCYKa3zL9C6xoa_g8mWtFbrwYZX93w@mail.gmail.com>
 <CAPTjJmp18TY8OErUSjFxnvnhawXtE7PbHaHeRi0_D5BFtFQ2WA@mail.gmail.com>
Message-ID: <20150201214636.52462d3e@fsol>

On Mon, 2 Feb 2015 07:38:52 +1100
Chris Angelico <rosuav at gmail.com> wrote:
> On Mon, Feb 2, 2015 at 5:13 AM, Thomas Kluyver <thomas at kluyver.me.uk> wrote:
> > Iterating over a slice object would work like you were lazily taking that
> > slice from itertools.count() - i.e. iter(a:b:c) would be equivalent to
> > islice(count(), a, b, c). This would also mean that (3:) would have a
> > logical meaning without having to modify range objects to support an
> > optional upper bound. I don't see any logical way to iterate over a slice
> > defined with negative numbers (e.g. (-4:)), so presumably iter(-4:) would
> > raise an exception.
> 
> If you're going to start defining things in terms of
> itertools.count(), I'd prefer to first permit a range object to have
> an infinite upper bound, and then define everything in terms of a
> sliced range - range objects already support clean slicing.
> 
> Is there any particular reason for range objects to disallow infinite
> bounds, or is it just that nobody's needed it?

If the platform's Py_ssize_t type is large enough to represent
positive infinities, then you probably could support infinite range
objects.

Regards

Antoine.



From rosuav at gmail.com  Sun Feb  1 21:55:59 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Mon, 2 Feb 2015 07:55:59 +1100
Subject: [Python-ideas] More useful slices
In-Reply-To: <20150201214636.52462d3e@fsol>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <CAOvn4qhebhPvp_OOnEqwVCYKa3zL9C6xoa_g8mWtFbrwYZX93w@mail.gmail.com>
 <CAPTjJmp18TY8OErUSjFxnvnhawXtE7PbHaHeRi0_D5BFtFQ2WA@mail.gmail.com>
 <20150201214636.52462d3e@fsol>
Message-ID: <CAPTjJmpLpXuKbRL=aVsbF=qXLh-poOL7Z11mn4UTW4s6hgG8jg@mail.gmail.com>

On Mon, Feb 2, 2015 at 7:46 AM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> If the platform's Py_ssize_t type is large enough to represente
> positive infinites, then you probably could support infinite range
> objects.

*headscratch* You're asking if a native integer type can represent
infinity? What kind of computers do you think we have here? Actual
Turing machines?

ChrisA

From toddrjen at gmail.com  Sun Feb  1 22:12:58 2015
From: toddrjen at gmail.com (Todd)
Date: Sun, 1 Feb 2015 22:12:58 +0100
Subject: [Python-ideas] More useful slices
In-Reply-To: <CAOvn4qhebhPvp_OOnEqwVCYKa3zL9C6xoa_g8mWtFbrwYZX93w@mail.gmail.com>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <CAOvn4qhebhPvp_OOnEqwVCYKa3zL9C6xoa_g8mWtFbrwYZX93w@mail.gmail.com>
Message-ID: <CAFpSVpJ8QOYuxni7VPmt-P1ZMQeJERT4O3SD1dGQb2oEu-Jg+Q@mail.gmail.com>

On Feb 1, 2015 7:14 PM, "Thomas Kluyver" <thomas at kluyver.me.uk> wrote:
>
> On 1 February 2015 at 07:13, Todd <toddrjen at gmail.com> wrote:
>>
>> For examples, the following pairs are equivalent:
>>
>> range(4, 10, 2)
>> (4:10:2)
>
>
> I think I remember a proposal somewhere to allow slice notation (defining
slice objects) outside of [] indexing. That would conflict with this idea
of having slice notation define range objects. However, if slice notation
defined slice objects and slice objects were iterable, wouldn't that have
the same benefits?

I considered that.  I thought of three scenarios (in the following order):

Slice is iterable: this has the advantage that classes implementing slicing
could just iterate over any slice they get.  The problem here is that you
end up with two data structures with separate implementations but nearly
identical functionality. This would be unnecessary code duplication and
thus, I felt, unlikely to be accepted.

Slice is a subclass of range: this avoids the problem of the independent
implementations.  The problem is that it would be a subclass in name only.
I could not think of any additional methods or attributes it should have
compared to conventional ranges.  The only thing it would do that ranges
currently can't is not have an end, but since that would need to be
implemented somewhere it might as well be added to the range parent class.
So I felt that would be a deal-killer.

Use slices to make ranges: this would not be as useful in classes, but it
would really only save one line at best.  It has the advantage that it
would not require ranges to support anything they can't already.  Although
you could support infinite ranges, you could just as easily just not accept
open-ended slices.

Since the first two cases essentially reduce to the third, I felt the third
approach would be the most straightforward.

I also considered two other, related issues:

Using a slice object with this syntax.  However, something like (slice_obj)
seemed like it could break existing code, and (:slice_obj) just seemed
weird.

Passing a slice to a range.  So you could do range(slice_obj), but this
seems like something that would have very little benefit and only save a
small amount of code.

> Iterating over a slice object would work like you were lazily taking that
slice from itertools.count() - i.e. iter(a:b:c) would be equivalent to
islice(count(), a, b, c). This would also mean that (3:) would have a
logical meaning without having to modify range objects to support an
optional upper bound. I don't see any logical way to iterate over a slice
defined with negative numbers (e.g. (-4:)), so presumably iter(-4:) would
raise an exception.
>
> Thomas

I though
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150201/2b56bce3/attachment-0001.html>

From toddrjen at gmail.com  Sun Feb  1 22:18:33 2015
From: toddrjen at gmail.com (Todd)
Date: Sun, 1 Feb 2015 22:18:33 +0100
Subject: [Python-ideas] More useful slices
In-Reply-To: <20150201172712.GO2498@ando.pearwood.info>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <20150201172712.GO2498@ando.pearwood.info>
Message-ID: <CAFpSVpKAAa+EH_XCNOO2ViTJ4q+q6Hkpb4_H4f_VTNRdCvoMaw@mail.gmail.com>

On Feb 1, 2015 6:27 PM, "Steven D'Aprano" <steve at pearwood.info> wrote:
>
> On Sun, Feb 01, 2015 at 04:13:32PM +0100, Todd wrote:
> > Although slices and ranges are used for different things and implemented
> > differently, conceptually they are similar: they define an integer
sequence
> > with a start, stop, and step size.
>
> I'm afraid that you are mistaken. Slices do not necessarily define
> integer sequences. They are completely arbitrary, and it is up to the
> object being sliced to interpret what is or isn't valid and what the
> slice means.
>
> py> class K(object):
> ...     def __getitem__(self, i):
> ...             print(i)
> ...
> py> K()["abc":(2,3,4):{}]
> slice('abc', (2, 3, 4), {})
>
>
> In addition, the "stop" argument to a slice may be unspecified until
> it it applied to a specific sequence, while range always requires
> stop to be defined.
>
> py> s = slice(1, None, 2)
> py> "abcde"[s]
> 'bd'
> py> "abcdefghi"[s]
> 'bdfh'
>
> It isn't meaningful to define a range with an unspecified stop, but it
> is meaningful to do so for slices.
>
>
> In other words, the similarity between slices and ranges is not as close
> as you think.
>

Fair enough.  But it doesn't really affect my proposal.  It would just be
that this syntax only allows a subset of what you can do with slices.

From toddrjen at gmail.com  Sun Feb  1 22:21:14 2015
From: toddrjen at gmail.com (Todd)
Date: Sun, 1 Feb 2015 22:21:14 +0100
Subject: [Python-ideas] More useful slices
In-Reply-To: <20150201214636.52462d3e@fsol>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <CAOvn4qhebhPvp_OOnEqwVCYKa3zL9C6xoa_g8mWtFbrwYZX93w@mail.gmail.com>
 <CAPTjJmp18TY8OErUSjFxnvnhawXtE7PbHaHeRi0_D5BFtFQ2WA@mail.gmail.com>
 <20150201214636.52462d3e@fsol>
Message-ID: <CAFpSVp+vVrm-fLP6jWG_hazyzaiXv=Bsu2bbyehEj1V5DDBRyA@mail.gmail.com>

On Feb 1, 2015 9:48 PM, "Antoine Pitrou" <solipsis at pitrou.net> wrote:
>
> On Mon, 2 Feb 2015 07:38:52 +1100
> Chris Angelico <rosuav at gmail.com> wrote:
> > On Mon, Feb 2, 2015 at 5:13 AM, Thomas Kluyver <thomas at kluyver.me.uk>
wrote:
> > > Iterating over a slice object would work like you were lazily taking
that
> > > slice from itertools.count() - i.e. iter(a:b:c) would be equivalent to
> > > islice(count(), a, b, c). This would also mean that (3:) would have a
> > > logical meaning without having to modify range objects to support an
> > > optional upper bound. I don't see any logical way to iterate over a
slice
> > > defined with negative numbers (e.g. (-4:)), so presumably iter(-4:)
would
> > > raise an exception.
> >
> > If you're going to start defining things in terms of
> > itertools.count(), I'd prefer to first permit a range object to have
> > an infinite upper bound, and then define everything in terms of a
> > sliced range - range objects already support clean slicing.
> >
> > Is there any particular reason for range objects to disallow infinite
> > bounds, or is it just that nobody's needed it?
>
If the platform's Py_ssize_t type is large enough to represent
positive infinities, then you probably could support infinite range
> objects.
>

Or you could just make a special case if the upper bound is None.  The
question is what would happen if you tried to get the len() of such a
range.  An exception?  -1? None?

From toddrjen at gmail.com  Sun Feb  1 22:31:57 2015
From: toddrjen at gmail.com (Todd)
Date: Sun, 1 Feb 2015 22:31:57 +0100
Subject: [Python-ideas] More useful slices
In-Reply-To: <CAOvn4qhebhPvp_OOnEqwVCYKa3zL9C6xoa_g8mWtFbrwYZX93w@mail.gmail.com>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <CAOvn4qhebhPvp_OOnEqwVCYKa3zL9C6xoa_g8mWtFbrwYZX93w@mail.gmail.com>
Message-ID: <CAFpSVpJQ9CagMRfnQHFCSwkiASUC-MNKsid-YoBtfiMTHQz2gQ@mail.gmail.com>

On Feb 1, 2015 7:14 PM, "Thomas Kluyver" <thomas at kluyver.me.uk> wrote:
>
> On 1 February 2015 at 07:13, Todd <toddrjen at gmail.com> wrote:
>>
>> For examples, the following pairs are equivalent:
>>
>> range(4, 10, 2)
>> (4:10:2)
>
>
> I think I remember a proposal somewhere to allow slice notation (defining
slice objects) outside of [] indexing. That would conflict with this idea
of having slice notation define range objects. However, if slice notation
defined slice objects and slice objects were iterable, wouldn't that have
the same benefits?
>
>

Edit: apparently this email was cut off for at least some people, so I am
resending it.  If you already got the full email, you can disregard this one.

I considered the idea of making slices iterable.  I thought of three
scenarios (in the following order):

Slice is iterable: this has the advantage that classes implementing slicing
could just iterate over any slice they get.  The problem here is that you
end up with two data structures with separate implementations but nearly
identical functionality. This would be unnecessary code duplication and
thus, I felt, unlikely to be accepted.

Slice is a subclass of range: this avoids the problem of the independent
implementations.  The problem is that it would be a subclass in name only.
I could not think of any additional methods or attributes it should have
compared to conventional ranges.  The only thing it would do that ranges
currently can't is not have an end, but since that would need to be
implemented somewhere it might as well be added to the range parent class.
So I felt that would be a deal-killer.

Use slices to make ranges: this would not be as useful in classes, where it
would really only save one line at best.  It has the advantage that it
would not require ranges to support anything they can't already.  Although
you could support infinite ranges, you could just as easily not accept
open-ended slices.

Since the first two cases essentially reduce to the third, I felt the third
approach would be the most straightforward.

I also considered two other, related issues:

Using a slice object with this syntax: something like (slice_obj)
seemed like it could break existing code, and (:slice_obj) just seemed
weird.

Passing a slice to a range.  So you could do range(slice_obj), but this
seems like something that would have very little benefit and only save a
little amount of code.
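For concreteness, "passing a slice to a range" amounts to little more than a wrapper like this (range_from_slice is a hypothetical name), which is why the saving is so small:

```python
def range_from_slice(s):
    """Build a range from an integer slice; open-ended slices are rejected."""
    if s.stop is None:
        raise ValueError("open-ended slices are not accepted")
    start = 0 if s.start is None else s.start
    step = 1 if s.step is None else s.step
    return range(start, s.stop, step)

print(list(range_from_slice(slice(4, 10, 2))))  # [4, 6, 8]
print(list(range_from_slice(slice(5))))         # [0, 1, 2, 3, 4]
```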

From greg.ewing at canterbury.ac.nz  Sun Feb  1 22:51:23 2015
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Mon, 02 Feb 2015 10:51:23 +1300
Subject: [Python-ideas] More useful slices
In-Reply-To: <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
Message-ID: <54CE9FDB.2010606@canterbury.ac.nz>

Todd wrote:
> Although slices and ranges are used for different things and implemented 
> differently, conceptually they are similar: they define an integer 
> sequence with a start, stop, and step size.

They behave differently in some ways, though. Consider:

 >>> list(range(9,-1,-1))
[9, 8, 7, 6, 5, 4, 3, 2, 1, 0]

But:

 >>> x = list(range(10))
 >>> x
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
 >>> x[9:-1:-1]
[]

Would your proposed slice-ranges behave like slices or
ranges in this situation? And how will people remember
which?
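The asymmetry comes from slices resolving negative indices against the
sequence length, which slice.indices() makes visible:

```python
x = list(range(10))

# In a slice, a stop of -1 means len(x) - 1 == 9, so 9:-1:-1 runs
# "from 9 down to 9" and selects nothing.
print(slice(9, -1, -1).indices(len(x)))  # (9, 9, -1)

# range() does no such wrapping: -1 really is -1.
print(list(range(9, -1, -1)))  # [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]

# To reverse by slicing, omit the stop instead.
print(x[9::-1])  # [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
```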

-- 
Greg

From greg.ewing at canterbury.ac.nz  Sun Feb  1 22:51:41 2015
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Mon, 02 Feb 2015 10:51:41 +1300
Subject: [Python-ideas] More useful slices
In-Reply-To: <CA+KNMzkNZiVYseD3zsF788NBtgo+T3dby3KmRf9ypb_4Jt8eNQ@mail.gmail.com>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <CA+KNMzkNZiVYseD3zsF788NBtgo+T3dby3KmRf9ypb_4Jt8eNQ@mail.gmail.com>
Message-ID: <54CE9FED.6040208@canterbury.ac.nz>

Eugene Toder wrote:
> It may be too late for this, but I like it. We can also elide extra 
> parentheses inside calls, like with generator expressions:
> sum(4:10:2)
> 
> Also, using list comprehension to generator expression analogy, we can have
> [4:10:2]

But putting those two analogies together, (4:10:2) on
its own should be a generator expression rather than
a range object.

-- 
Greg

From p.f.moore at gmail.com  Sun Feb  1 23:06:53 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Sun, 1 Feb 2015 22:06:53 +0000
Subject: [Python-ideas] Adding a subprocess.CompletedProcess class
In-Reply-To: <CACfEFw-9CuNcfG+uvaUKk+qV6wt38B7MeQfW98o_aTYVzspg4Q@mail.gmail.com>
References: <CAOvn4qgTKVTbcY+BN_Z7-v8bMRprn=2FSqcnMuw9j+NT-cegPA@mail.gmail.com>
 <CAGE7PNKQ0QpZ-3hab4xJppknhCOOCJXuRWadsKGeGFz-BbJg_A@mail.gmail.com>
 <CADiSq7dAYUudHbDzRuVxmL6SsT4i8_50jvGNPf0G2Jet7MS8Dg@mail.gmail.com>
 <CAOvn4qhH2Et49NgMMED3DQ-9GuX1k8cELGEXyegh64ezMHoqfw@mail.gmail.com>
 <20150128171318.22a829eb@anarchist.wooz.org>
 <CADiSq7etZdNe9mgjpfXO8b=r5981-vUxP3t=Kn+62n-SwW+b1w@mail.gmail.com>
 <CAOvn4qiH_NScSjg0W+FXSO21jufu3upHLFcaTeqxiR85-wakxQ@mail.gmail.com>
 <CACac1F913pukSDKrWTP46M2XEcfU2j90+N_pBoagHxGS3kHSRA@mail.gmail.com>
 <CACfEFw-9CuNcfG+uvaUKk+qV6wt38B7MeQfW98o_aTYVzspg4Q@mail.gmail.com>
Message-ID: <CACac1F_ftnk+D7q3V3pdhHkZS2BLHrA=eYMZd6QP44QNyiSd0A@mail.gmail.com>

On 1 February 2015 at 16:29, Wes Turner <wes.turner at gmail.com> wrote:
>> The thing I've ended up needing to do once or twice, which is
>> unreasonably hard with subprocess, is to run the command and capture
>> the output, but *still* write it to stdout/stderr. So the user gets to
>> see the command's progress, but you can introspect the results
>> afterwards.
>>
>>
>> The hard bit is to do this while still displaying the output as it
>> arrives. You basically need to manage the stdout and stderr pipes
>> yourself, do nasty multi-stream interleaving, and deal with the
>> encoding/universal_newlines stuff on the captured data. In a
>> cross-platform way :-(
>
> http://sarge.readthedocs.org/en/latest/tutorial.html#buffering-issues

I'm not sure how that's relevant. Sure, if the child buffers output
there's a delay in seeing it, but that's no more an issue than it
would be for display on the console.

>  '| tee filename.log'

There are a number of problems with this:

1. Doesn't handle stderr
2. Uses a temporary file, which I'd then need to read in the parent
process, and delete afterwards (and tidy up if there's an exception)
3. tee may not be available (specifically on Windows)
4. Uses an extra process, which is a non-trivial overhead
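A rough sketch of the capture-while-displaying behaviour described above,
using one reader thread per pipe (tee_run is a hypothetical name; real code
would also need to deal with encodings and cleaner interleaving):

```python
import subprocess
import sys
import threading

def tee_run(cmd):
    """Run cmd, echoing stdout/stderr live while also capturing them."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    captured = {}

    def pump(pipe, echo, key):
        chunks = []
        for line in iter(pipe.readline, b""):
            echo.buffer.write(line)   # show it to the user as it arrives
            echo.flush()
            chunks.append(line)       # ...and keep a copy for later
        captured[key] = b"".join(chunks)

    threads = [
        threading.Thread(target=pump, args=(proc.stdout, sys.stdout, "out")),
        threading.Thread(target=pump, args=(proc.stderr, sys.stderr, "err")),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return proc.wait(), captured["out"], captured["err"]
```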

Paul

From steve at pearwood.info  Mon Feb  2 02:36:49 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Mon, 2 Feb 2015 12:36:49 +1100
Subject: [Python-ideas] More useful slices
In-Reply-To: <CAFpSVpKAAa+EH_XCNOO2ViTJ4q+q6Hkpb4_H4f_VTNRdCvoMaw@mail.gmail.com>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <20150201172712.GO2498@ando.pearwood.info>
 <CAFpSVpKAAa+EH_XCNOO2ViTJ4q+q6Hkpb4_H4f_VTNRdCvoMaw@mail.gmail.com>
Message-ID: <20150202013648.GQ2498@ando.pearwood.info>

On Sun, Feb 01, 2015 at 10:18:33PM +0100, Todd wrote:
> On Feb 1, 2015 6:27 PM, "Steven D'Aprano" <steve at pearwood.info> wrote:

> > In other words, the similarity between slices and ranges is not as close
> > as you think.
> 
> Fair enough.  But it doesn't really affect my proposal.  It would just be
> that this syntax only allows a subset of what you can do with slices.

It undercuts the justification for the proposal. Greg has also shown 
that even with purely integer values, slices and ranges behave differently.

Slice objects are not a special case of ranges, and ranges are not a 
special case of slice objects. They behave differently, have different 
restrictions on their parameters, and different purposes. As far as I 
can see, virtually the only similarity is the names of the three 
parameters: start, stop, step.

You might object that there is an important similarity that I've missed: 
slices (well, some slices) provide the same integer indexes as range 
does. That is, for at least *some* combinations of integers a, b and c, 
we have this invariant:

seq[a:b:c] == [seq[i] for i in range(a, b, c)]

But that's not a property of the slice object, that's a property of the 
sequence! Slice objects merely record the start, stop, step, which can be 
arbitrary values in arbitrary order, and the sequence decides how to 
interpret it.

The docs carefully don't describe the meaning of a slice object except 
as a bundle of start, stop, step attributes:

https://docs.python.org/3/library/functions.html#slice

That's because slice objects don't have any meaning except that which 
the sequence chooses to give it. That is not true for range objects.

(By the way, the docs are wrong when they say slice objects have no 
other explicit functionality: they also have an indices method.)

The above invariant is the conventional meaning, of course, and I'm not 
suggesting that there are a lot of classes that ignore that conventional 
meaning and do something else. But slice objects exist to be a simple 
bundle of three attributes with arbitrary values, and ranges exist to be 
a linear iterable of integers. The fact that sometimes they appear to 
kind of almost overlap in meaning is incidental.
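That indices() method is precisely the bridge between the two views: it
resolves a slice against a concrete length, and only then does range()
reproduce the conventional selection:

```python
s = slice(1, None, 2)

# indices() resolves start/stop/step against a concrete length; only
# then does the slice acquire a numeric meaning.
print(s.indices(5))  # (1, 5, 2)
print(s.indices(9))  # (1, 9, 2)

# The conventional invariant, made explicit:
seq = "abcdefghi"
a, b, c = s.indices(len(seq))
assert list(seq[s]) == [seq[i] for i in range(a, b, c)]
```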


-- 
Steven

From wes.turner at gmail.com  Mon Feb  2 02:50:13 2015
From: wes.turner at gmail.com (Wes Turner)
Date: Sun, 1 Feb 2015 19:50:13 -0600
Subject: [Python-ideas] Adding a subprocess.CompletedProcess class
In-Reply-To: <CACac1F_ftnk+D7q3V3pdhHkZS2BLHrA=eYMZd6QP44QNyiSd0A@mail.gmail.com>
References: <CAOvn4qgTKVTbcY+BN_Z7-v8bMRprn=2FSqcnMuw9j+NT-cegPA@mail.gmail.com>
 <CAGE7PNKQ0QpZ-3hab4xJppknhCOOCJXuRWadsKGeGFz-BbJg_A@mail.gmail.com>
 <CADiSq7dAYUudHbDzRuVxmL6SsT4i8_50jvGNPf0G2Jet7MS8Dg@mail.gmail.com>
 <CAOvn4qhH2Et49NgMMED3DQ-9GuX1k8cELGEXyegh64ezMHoqfw@mail.gmail.com>
 <20150128171318.22a829eb@anarchist.wooz.org>
 <CADiSq7etZdNe9mgjpfXO8b=r5981-vUxP3t=Kn+62n-SwW+b1w@mail.gmail.com>
 <CAOvn4qiH_NScSjg0W+FXSO21jufu3upHLFcaTeqxiR85-wakxQ@mail.gmail.com>
 <CACac1F913pukSDKrWTP46M2XEcfU2j90+N_pBoagHxGS3kHSRA@mail.gmail.com>
 <CACfEFw-9CuNcfG+uvaUKk+qV6wt38B7MeQfW98o_aTYVzspg4Q@mail.gmail.com>
 <CACac1F_ftnk+D7q3V3pdhHkZS2BLHrA=eYMZd6QP44QNyiSd0A@mail.gmail.com>
Message-ID: <CACfEFw--Wd79_16AVJPk6MVRHuSOy117duw_zXO=GXXVFxFPYg@mail.gmail.com>

On Feb 1, 2015 4:06 PM, "Paul Moore" <p.f.moore at gmail.com> wrote:
>
> On 1 February 2015 at 16:29, Wes Turner <wes.turner at gmail.com> wrote:
> >> The thing I've ended up needing to do once or twice, which is
> >> unreasonably hard with subprocess, is to run the command and capture
> >> the output, but *still* write it to stdout/stderr. So the user gets to
> >> see the command's progress, but you can introspect the results
> >> afterwards.
> >>
> >>
> >> The hard bit is to do this while still displaying the output as it
> >> arrives. You basically need to manage the stdout and stderr pipes
> >> yourself, do nasty multi-stream interleaving, and deal with the
> >> encoding/universal_newlines stuff on the captured data. In a
> >> cross-platform way :-(
> >
> > http://sarge.readthedocs.org/en/latest/tutorial.html#buffering-issues
>
> I'm not sure how that's relevant. Sure, if the child buffers output
> there's a delay in seeing it, but that's no more an issue than it
> would be for display on the console.

Ah. The link on that page that I was looking for was probably "Using
Redirection" #using-redirection and/or "Capturing stdout and stderr from
commands" #capturing-stdout-and-stderr-from-commands

From p.f.moore at gmail.com  Mon Feb  2 09:08:25 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Mon, 2 Feb 2015 08:08:25 +0000
Subject: [Python-ideas] Adding a subprocess.CompletedProcess class
In-Reply-To: <CACfEFw--Wd79_16AVJPk6MVRHuSOy117duw_zXO=GXXVFxFPYg@mail.gmail.com>
References: <CAOvn4qgTKVTbcY+BN_Z7-v8bMRprn=2FSqcnMuw9j+NT-cegPA@mail.gmail.com>
 <CAGE7PNKQ0QpZ-3hab4xJppknhCOOCJXuRWadsKGeGFz-BbJg_A@mail.gmail.com>
 <CADiSq7dAYUudHbDzRuVxmL6SsT4i8_50jvGNPf0G2Jet7MS8Dg@mail.gmail.com>
 <CAOvn4qhH2Et49NgMMED3DQ-9GuX1k8cELGEXyegh64ezMHoqfw@mail.gmail.com>
 <20150128171318.22a829eb@anarchist.wooz.org>
 <CADiSq7etZdNe9mgjpfXO8b=r5981-vUxP3t=Kn+62n-SwW+b1w@mail.gmail.com>
 <CAOvn4qiH_NScSjg0W+FXSO21jufu3upHLFcaTeqxiR85-wakxQ@mail.gmail.com>
 <CACac1F913pukSDKrWTP46M2XEcfU2j90+N_pBoagHxGS3kHSRA@mail.gmail.com>
 <CACfEFw-9CuNcfG+uvaUKk+qV6wt38B7MeQfW98o_aTYVzspg4Q@mail.gmail.com>
 <CACac1F_ftnk+D7q3V3pdhHkZS2BLHrA=eYMZd6QP44QNyiSd0A@mail.gmail.com>
 <CACfEFw--Wd79_16AVJPk6MVRHuSOy117duw_zXO=GXXVFxFPYg@mail.gmail.com>
Message-ID: <CACac1F9w-mXk6+sk9PensadBgtWefQajkqtrP2G54NyGEXD+=Q@mail.gmail.com>

On 2 February 2015 at 01:50, Wes Turner <wes.turner at gmail.com> wrote:
>> I'm not sure how that's relevant. Sure, if the child buffers output
>> there's a delay in seeing it, but that's no more an issue than it
>> would be for display on the console.
>
> Ah. The link on that page that I was looking for was probably "Using
> Redirection" #using-redirection and/or "Capturing stdout and stderr from
> commands" #capturing-stdout-and-stderr-from-commands

Yes, but again the issue here isn't capturing stdout/stderr, but
capturing them *while having them still displayed on the terminal*.
Anyway, this is starting to hijack the thread, so I'll go and have a
more detailed look at sarge to see if it does provide what I need
somehow.

Thanks,
Paul

From guettliml at thomas-guettler.de  Mon Feb  2 09:17:36 2015
From: guettliml at thomas-guettler.de (=?UTF-8?B?VGhvbWFzIEfDvHR0bGVy?=)
Date: Mon, 02 Feb 2015 09:17:36 +0100
Subject: [Python-ideas] Infinite in Python (not just in datetime)
Message-ID: <54CF32A0.50009@thomas-guettler.de>

Thank you very much; it is good to see that there are some developers who
support the idea of an infinite value in the datetime module.


Next idea: if Python supported an infinite value in the core, implementing
infinity in datetime might be much easier.

What do you think?



From ncoghlan at gmail.com  Mon Feb  2 11:03:23 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 2 Feb 2015 20:03:23 +1000
Subject: [Python-ideas] More useful slices
In-Reply-To: <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
Message-ID: <CADiSq7e37xUqrzJObk9O6gbzCdu+MnKcGn+mYuxmCARbGbuQUA@mail.gmail.com>

On 2 Feb 2015 01:14, "Todd" <toddrjen at gmail.com> wrote:
>
> Although slices and ranges are used for different things and implemented
differently, conceptually they are similar: they define an integer sequence
with a start, stop, and step size.  For this reason I think that slices
could provide useful syntactic sugar for defining ranges without
sacrificing clarity.

Why is this beneficial? We'd be replacing an easily looked up builtin name
with relatively cryptic syntax.

There's also the fact that "slice" itself is a builtin, so it's entirely
unclear that standalone usage of the slice syntax (if it was ever permitted
at all) should produce a range object instead of a slice object.

Regards,
Nick.

From toddrjen at gmail.com  Mon Feb  2 11:18:37 2015
From: toddrjen at gmail.com (Todd)
Date: Mon, 2 Feb 2015 11:18:37 +0100
Subject: [Python-ideas] More useful slices
In-Reply-To: <20150202013648.GQ2498@ando.pearwood.info>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <20150201172712.GO2498@ando.pearwood.info>
 <CAFpSVpKAAa+EH_XCNOO2ViTJ4q+q6Hkpb4_H4f_VTNRdCvoMaw@mail.gmail.com>
 <20150202013648.GQ2498@ando.pearwood.info>
Message-ID: <CAFpSVpJzyeBv1wk3s_ReAO9WLMaq=suDNvh3vSdZP2h--c0CgQ@mail.gmail.com>

On Mon, Feb 2, 2015 at 2:36 AM, Steven D'Aprano <steve at pearwood.info> wrote:

> On Sun, Feb 01, 2015 at 10:18:33PM +0100, Todd wrote:
> > On Feb 1, 2015 6:27 PM, "Steven D'Aprano" <steve at pearwood.info> wrote:
>
> > > In other words, the similarity between slices and ranges is not as
> close
> > > as you think.
> >
> > Fair enough.  But it doesn't really affect my proposal.  It would just be
> > that this syntax only allows a subset of what you can do with slices.
>
> It undercuts the justification for the proposal. Greg has also shown
> that even with purely integer values, slices and ranges behave differently.
>
> Slice objects are not a special case of ranges, and ranges are not a
> special case of slice objects. They behave differently, have different
> restrictions on their parameters, and different purposes. As far as I
> can see, virtually the only similarity is the names of the three
> parameters: start, stop, step.
>
> You might object that there is an important similarity that I've missed:
> slices (well, some slices) provide the same integer indexes as range
> does. That is, for at least *some* combinations of integers a, b and c,
> we have this invariant:
>
> seq[a:b:c] == [seq[i] for i in range(a, b, c)]
>
> But that's not a property of the slice object, that's a property of the
> sequence! Slice objects merely record the start, stop, step, which can be
> arbitrary values in arbitrary order, and the sequence decides how to
> interpret it.
>
> The docs carefully don't describe the meaning of a slice object except
> as a bundle of start, stop, step attributes:
>
> https://docs.python.org/3/library/functions.html#slice
>
> That's because slice objects don't have any meaning except that which
> the sequence chooses to give it. That is not true for range objects.
>
> (By the way, the docs are wrong when they say slice objects have no
> other explicit functionality: they also have an indices method.)
>
> The above invariant is the conventional meaning, of course, and I'm not
> suggesting that there are a lot of classes that ignore that conventional
> meaning and do something else. But slice objects exist to be a simple
> bundle of three attributes with arbitrary values, and ranges exist to be
> a linear iterable of integers. The fact that sometimes they appear to
> kind of almost overlap in meaning is incidental.
>
>
Then let's ignore the original justification, and say this is just a syntax
for making ranges that is similar to, but not identical to, slices.

From toddrjen at gmail.com  Mon Feb  2 11:19:30 2015
From: toddrjen at gmail.com (Todd)
Date: Mon, 2 Feb 2015 11:19:30 +0100
Subject: [Python-ideas] More useful slices
In-Reply-To: <54CE9FED.6040208@canterbury.ac.nz>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <CA+KNMzkNZiVYseD3zsF788NBtgo+T3dby3KmRf9ypb_4Jt8eNQ@mail.gmail.com>
 <54CE9FED.6040208@canterbury.ac.nz>
Message-ID: <CAFpSVpKjWjxnwaUBps8ua6wtXWD9QV4zwD=bAGWbE1DQMcnZdA@mail.gmail.com>

On Sun, Feb 1, 2015 at 10:51 PM, Greg Ewing <greg.ewing at canterbury.ac.nz>
wrote:

> Eugene Toder wrote:
>
>> It may be too late for this, but I like it. We can also elide extra
>> parentheses inside calls, like with generator expressions:
>> sum(4:10:2)
>>
>> Also, using list comprehension to generator expression analogy, we can
>> have
>> [4:10:2]
>>
>
> But putting those two analogies together, (4:10:2) on
> its own should be a generator expression rather than
> a range object.
>
>
Yes, I also considered that approach, but decided against proposing it for
that reason.

From toddrjen at gmail.com  Mon Feb  2 11:22:26 2015
From: toddrjen at gmail.com (Todd)
Date: Mon, 2 Feb 2015 11:22:26 +0100
Subject: [Python-ideas] More useful slices
In-Reply-To: <54CE9FDB.2010606@canterbury.ac.nz>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <54CE9FDB.2010606@canterbury.ac.nz>
Message-ID: <CAFpSVpK6mTh9rqNxCZajGnmQhTOHFZcb7GDm5XAsYRw+tgpp4w@mail.gmail.com>

On Sun, Feb 1, 2015 at 10:51 PM, Greg Ewing <greg.ewing at canterbury.ac.nz>
wrote:

> Todd wrote:
>
>> Although slices and ranges are used for different things and implemented
>> differently, conceptually they are similar: they define an integer sequence
>> with a start, stop, and step size.
>>
>
> They behave differently in some ways, though. Consider:
>
> >>> list(range(9,-1,-1))
> [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
>
> But:
>
> >>> x = list(range(10))
> >>> x
> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
> >>> x[9:-1:-1]
> []
>
> Would your proposed slice-ranges behave like slices or
> ranges in this situation? And how will people remember
> which?
>


Since this is really a literal for making ranges, I would say the range
behavior would be the correct one.

As I replied to Steven, let's forget the idea that slices and ranges are
the same.  I was mistaken about that.  Let's just call this a range
literal.  It has some features similar to slices, but not all.

From toddrjen at gmail.com  Mon Feb  2 11:26:02 2015
From: toddrjen at gmail.com (Todd)
Date: Mon, 2 Feb 2015 11:26:02 +0100
Subject: [Python-ideas] More useful slices
In-Reply-To: <CADiSq7e37xUqrzJObk9O6gbzCdu+MnKcGn+mYuxmCARbGbuQUA@mail.gmail.com>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <CADiSq7e37xUqrzJObk9O6gbzCdu+MnKcGn+mYuxmCARbGbuQUA@mail.gmail.com>
Message-ID: <CAFpSVpLXnX3AiKBgr0kfbMzKRuNqqgm043zdw1JnRJAtGStSFg@mail.gmail.com>

On Mon, Feb 2, 2015 at 11:03 AM, Nick Coghlan <ncoghlan at gmail.com> wrote:

>
> On 2 Feb 2015 01:14, "Todd" <toddrjen at gmail.com> wrote:
> >
> > Although slices and ranges are used for different things and implemented
> differently, conceptually they are similar: they define an integer sequence
> with a start, stop, and step size.  For this reason I think that slices
> could provide useful syntactic sugar for defining ranges without
> sacrificing clarity.
>
> Why is this beneficial? We'd be replacing an easily looked up builtin name
> with relatively cryptic syntax.
>
>

First, it wouldn't be a replacement.  The existing range syntax would still
exist.

But the reason it is beneficial is the same reason we have [a, b, c] for
lists, {a:1, b:2, c:3} for dicts, {a, b, c} for sets, and (a, b, c) for
tuples.  It is a more compact way to create a commonly-used data structure.

And I wouldn't consider it any more cryptic than any other literal we
have.

From steve at pearwood.info  Mon Feb  2 11:29:49 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Mon, 2 Feb 2015 21:29:49 +1100
Subject: [Python-ideas] Infinite in Python (not just in datetime)
In-Reply-To: <54CF32A0.50009@thomas-guettler.de>
References: <54CF32A0.50009@thomas-guettler.de>
Message-ID: <20150202102948.GR2498@ando.pearwood.info>

On Mon, Feb 02, 2015 at 09:17:36AM +0100, Thomas Güttler wrote:
> Thank you very much, that there are some developers who support the
> idea of infinite in the datetime module.
> 
> 
> Next idea: If Python would support infinite in the core, implementing
> infinite in datetime might be much easier.

I don't understand what that means. What does it mean for Python to 
support infinite in the core?

Infinite what? Infinite string? Infinite list? What would this infinite 
do, what methods would it have?

Python already supports float infinities and Decimal infinities. Does 
that help you?

-- 
Steve

From rosuav at gmail.com  Mon Feb  2 11:53:59 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Mon, 2 Feb 2015 21:53:59 +1100
Subject: [Python-ideas] More useful slices
In-Reply-To: <CAFpSVpLXnX3AiKBgr0kfbMzKRuNqqgm043zdw1JnRJAtGStSFg@mail.gmail.com>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <CADiSq7e37xUqrzJObk9O6gbzCdu+MnKcGn+mYuxmCARbGbuQUA@mail.gmail.com>
 <CAFpSVpLXnX3AiKBgr0kfbMzKRuNqqgm043zdw1JnRJAtGStSFg@mail.gmail.com>
Message-ID: <CAPTjJmqbKD5L=DfmyhDDnS4ttEoZLJ+Onr9fHG7wcpu6yUqdAQ@mail.gmail.com>

On Mon, Feb 2, 2015 at 9:26 PM, Todd <toddrjen at gmail.com> wrote:
> First, it wouldn't be a replacement.  The existing range syntax would still
> exist.
>
> But the reason it is beneficial is the same reason we have [a, b, c] for
> list, {a:1, b:2, c:3} for dicts, {a, b, c} for sets, and (a, b, c) for
> tuples.  It is more compact way to create a commonly-used data structure.
>
> And I wouldn't consider it any more cryptic than any other literal we have.

Considering the single most common use of ranges, let's see how a for
loop would look:

for i in 1:10:
    pass

Is that nastily cryptic, or beautifully clean? I'm inclined toward the
former, but could be persuaded.

ChrisA

From toddrjen at gmail.com  Mon Feb  2 12:19:17 2015
From: toddrjen at gmail.com (Todd)
Date: Mon, 2 Feb 2015 12:19:17 +0100
Subject: [Python-ideas] More useful slices
In-Reply-To: <CAPTjJmqbKD5L=DfmyhDDnS4ttEoZLJ+Onr9fHG7wcpu6yUqdAQ@mail.gmail.com>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <CADiSq7e37xUqrzJObk9O6gbzCdu+MnKcGn+mYuxmCARbGbuQUA@mail.gmail.com>
 <CAFpSVpLXnX3AiKBgr0kfbMzKRuNqqgm043zdw1JnRJAtGStSFg@mail.gmail.com>
 <CAPTjJmqbKD5L=DfmyhDDnS4ttEoZLJ+Onr9fHG7wcpu6yUqdAQ@mail.gmail.com>
Message-ID: <CAFpSVpJ-+T_3mFFXh1RLXkvrLH2w=cKU8sU3nhp7yHtor+rbTA@mail.gmail.com>

On Mon, Feb 2, 2015 at 11:53 AM, Chris Angelico <rosuav at gmail.com> wrote:

> On Mon, Feb 2, 2015 at 9:26 PM, Todd <toddrjen at gmail.com> wrote:
> > First, it wouldn't be a replacement.  The existing range syntax would
> still
> > exist.
> >
> > But the reason it is beneficial is the same reason we have [a, b, c] for
> > list, {a:1, b:2, c:3} for dicts, {a, b, c} for sets, and (a, b, c) for
> > tuples.  It is more compact way to create a commonly-used data structure.
> >
> > And I wouldn't consider it any more cryptic than any other literal we
> have.
>
> Considering the single most common use of ranges, let's see how a for
> loop would look:
>
> for i in 1:10:
>     pass
>
> Is that nastily cryptic, or beautifully clean? I'm inclined toward the
> former, but could be persuaded.
>
>
>
It would be

for i in (1:10):
    pass

Anyone who uses sequences should be familiar with this sort of notation:

for i in spam[1:10]:
    pass

So they should have at least some understanding that 1:10 can mean "1 to 10,
not including 10".

I don't see it being any more cryptic than {'a': 1, 'b': 2} meaning
dict(a=1, b=2).  On the contrary, I think the dict literal syntax is even
more cryptic, since it has no similarity to any other syntax.  The syntax I
propose here is at least similar (although not identical) to slicing.

From ncoghlan at gmail.com  Mon Feb  2 12:26:25 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 2 Feb 2015 21:26:25 +1000
Subject: [Python-ideas] More useful slices
In-Reply-To: <CAPTjJmqbKD5L=DfmyhDDnS4ttEoZLJ+Onr9fHG7wcpu6yUqdAQ@mail.gmail.com>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <CADiSq7e37xUqrzJObk9O6gbzCdu+MnKcGn+mYuxmCARbGbuQUA@mail.gmail.com>
 <CAFpSVpLXnX3AiKBgr0kfbMzKRuNqqgm043zdw1JnRJAtGStSFg@mail.gmail.com>
 <CAPTjJmqbKD5L=DfmyhDDnS4ttEoZLJ+Onr9fHG7wcpu6yUqdAQ@mail.gmail.com>
Message-ID: <CADiSq7fZ5HG4sUJ3=D5+uNS-K56=D6NSa7CbVk4x+Kao2QrC4w@mail.gmail.com>

On 2 Feb 2015 21:00, "Chris Angelico" <rosuav at gmail.com> wrote:
>
> On Mon, Feb 2, 2015 at 9:26 PM, Todd <toddrjen at gmail.com> wrote:
> > First, it wouldn't be a replacement.  The existing range syntax would
still
> > exist.
> >
> > But the reason it is beneficial is the same reason we have [a, b, c] for
> > list, {a:1, b:2, c:3} for dicts, {a, b, c} for sets, and (a, b, c) for
> > tuples.  It is more compact way to create a commonly-used data
structure.
> >
> > And I wouldn't consider it any more cryptic than any other literal we
have.
>
> Considering the single most common use of ranges, let's see how a for
> loop would look:
>
> for i in 1:10:
>     pass
>
> Is that nastily cryptic, or beautifully clean? I'm inclined toward the
> former, but could be persuaded.

Some additional historical context, found while looking for range literal
notations in other languages: PEP 204 is a previously rejected proposal for
range literals (https://www.python.org/dev/peps/pep-0204/)

If you'd like to propose a range literal, you may be better off steering
clear of slice notation and instead look to other languages like Ruby or
Haskell for inspiration (slice notation may still be useful for appending a
step to the range definition).

Such an idea will likely still be rejected in the absence of compelling use
cases though, especially as direct iteration over a container or using the
enumerate builtin is often going to be superior to creating a range object.
That makes it very difficult to justify the cost of invalidating all of the
existing documentation on how to create ranges in Python.

Regards,
Nick.

>
> ChrisA
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/

From rob.cliffe at btinternet.com  Mon Feb  2 13:10:32 2015
From: rob.cliffe at btinternet.com (Rob Cliffe)
Date: Mon, 02 Feb 2015 12:10:32 +0000
Subject: [Python-ideas] More useful slices
In-Reply-To: <CAFpSVpJ-+T_3mFFXh1RLXkvrLH2w=cKU8sU3nhp7yHtor+rbTA@mail.gmail.com>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <CADiSq7e37xUqrzJObk9O6gbzCdu+MnKcGn+mYuxmCARbGbuQUA@mail.gmail.com>
 <CAFpSVpLXnX3AiKBgr0kfbMzKRuNqqgm043zdw1JnRJAtGStSFg@mail.gmail.com>
 <CAPTjJmqbKD5L=DfmyhDDnS4ttEoZLJ+Onr9fHG7wcpu6yUqdAQ@mail.gmail.com>
 <CAFpSVpJ-+T_3mFFXh1RLXkvrLH2w=cKU8sU3nhp7yHtor+rbTA@mail.gmail.com>
Message-ID: <54CF6938.30006@btinternet.com>


On 02/02/2015 11:19, Todd wrote:
>
> On Mon, Feb 2, 2015 at 11:53 AM, Chris Angelico <rosuav at gmail.com 
> <mailto:rosuav at gmail.com>> wrote:
>
>     On Mon, Feb 2, 2015 at 9:26 PM, Todd <toddrjen at gmail.com
>     <mailto:toddrjen at gmail.com>> wrote:
>     > First, it wouldn't be a replacement.  The existing range syntax
>     would still
>     > exist.
>     >
>     > But the reason it is beneficial is the same reason we have [a,
>     b, c] for
>     > list, {a:1, b:2, c:3} for dicts, {a, b, c} for sets, and (a, b,
>     c) for
>     > tuples. 
>
Well, we have to have *some* syntax for literal lists, dicts etc.
But we already have range, so there is no compelling need to add new syntax.

Having said that, I would have a sneaking admiration for a really 
concise syntax.
Perhaps if we had "Python without colons", we could write
     for i in 1 : 10
     for i in 1 : 10 : 2

From steve at pearwood.info  Mon Feb  2 13:37:13 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Mon, 2 Feb 2015 23:37:13 +1100
Subject: [Python-ideas] More useful slices
In-Reply-To: <54CF6938.30006@btinternet.com>
References: <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <CADiSq7e37xUqrzJObk9O6gbzCdu+MnKcGn+mYuxmCARbGbuQUA@mail.gmail.com>
 <CAFpSVpLXnX3AiKBgr0kfbMzKRuNqqgm043zdw1JnRJAtGStSFg@mail.gmail.com>
 <CAPTjJmqbKD5L=DfmyhDDnS4ttEoZLJ+Onr9fHG7wcpu6yUqdAQ@mail.gmail.com>
 <CAFpSVpJ-+T_3mFFXh1RLXkvrLH2w=cKU8sU3nhp7yHtor+rbTA@mail.gmail.com>
 <54CF6938.30006@btinternet.com>
Message-ID: <20150202123713.GS2498@ando.pearwood.info>

On Mon, Feb 02, 2015 at 12:10:32PM +0000, Rob Cliffe wrote:

> Having said that, I would have a sneaking admiration for a really 
> concise syntax.

I'm partial to 

    for i in 1...10: pass

myself.


-- 
Steve

From toddrjen at gmail.com  Mon Feb  2 13:38:46 2015
From: toddrjen at gmail.com (Todd)
Date: Mon, 2 Feb 2015 13:38:46 +0100
Subject: [Python-ideas] More useful slices
In-Reply-To: <54CF6938.30006@btinternet.com>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <CADiSq7e37xUqrzJObk9O6gbzCdu+MnKcGn+mYuxmCARbGbuQUA@mail.gmail.com>
 <CAFpSVpLXnX3AiKBgr0kfbMzKRuNqqgm043zdw1JnRJAtGStSFg@mail.gmail.com>
 <CAPTjJmqbKD5L=DfmyhDDnS4ttEoZLJ+Onr9fHG7wcpu6yUqdAQ@mail.gmail.com>
 <CAFpSVpJ-+T_3mFFXh1RLXkvrLH2w=cKU8sU3nhp7yHtor+rbTA@mail.gmail.com>
 <54CF6938.30006@btinternet.com>
Message-ID: <CAFpSVpKN97_WjDJaQWvqS3WVj_6Cx_1tWFCQ56+AtP_C8RLiAQ@mail.gmail.com>

On Mon, Feb 2, 2015 at 1:10 PM, Rob Cliffe <rob.cliffe at btinternet.com>
wrote:

>
> On 02/02/2015 11:19, Todd wrote:
>
>
> On Mon, Feb 2, 2015 at 11:53 AM, Chris Angelico <rosuav at gmail.com> wrote:
>
>> On Mon, Feb 2, 2015 at 9:26 PM, Todd <toddrjen at gmail.com> wrote:
>> > First, it wouldn't be a replacement.  The existing range syntax would
>> still
>> > exist.
>> >
>> > But the reason it is beneficial is the same reason we have [a, b, c] for
>> > list, {a:1, b:2, c:3} for dicts, {a, b, c} for sets, and (a, b, c) for
>> > tuples.
>
>   Well, we have to have *some* syntax for literal lists, dicts etc.
> But we already have range, so there is no compelling need to add new
> syntax.
>

Why do we need literals at all?  They are just syntactic sugar.  Python
went a long time without a set literal.


>
> Having said that, I would have a sneaking admiration for a really concise
> syntax.
> Perhaps if we had "Python without colons", we could write
>     for i in 1 : 10
>     for i in 1 : 10 : 2
>
>
Part of my goal was to avoid any ambiguity with existing syntax.  Since for
loops already end with a colon, I felt that some sort of grouping (either
[], {}, or ()) was necessary to avoid ambiguity.

From rob.cliffe at btinternet.com  Mon Feb  2 13:43:54 2015
From: rob.cliffe at btinternet.com (Rob Cliffe)
Date: Mon, 02 Feb 2015 12:43:54 +0000
Subject: [Python-ideas] More useful slices
In-Reply-To: <CAFpSVpKN97_WjDJaQWvqS3WVj_6Cx_1tWFCQ56+AtP_C8RLiAQ@mail.gmail.com>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <CADiSq7e37xUqrzJObk9O6gbzCdu+MnKcGn+mYuxmCARbGbuQUA@mail.gmail.com>
 <CAFpSVpLXnX3AiKBgr0kfbMzKRuNqqgm043zdw1JnRJAtGStSFg@mail.gmail.com>
 <CAPTjJmqbKD5L=DfmyhDDnS4ttEoZLJ+Onr9fHG7wcpu6yUqdAQ@mail.gmail.com>
 <CAFpSVpJ-+T_3mFFXh1RLXkvrLH2w=cKU8sU3nhp7yHtor+rbTA@mail.gmail.com>
 <54CF6938.30006@btinternet.com>
 <CAFpSVpKN97_WjDJaQWvqS3WVj_6Cx_1tWFCQ56+AtP_C8RLiAQ@mail.gmail.com>
Message-ID: <54CF710A.6000601@btinternet.com>


On 02/02/2015 12:38, Todd wrote:
> On Mon, Feb 2, 2015 at 1:10 PM, Rob Cliffe <rob.cliffe at btinternet.com 
> <mailto:rob.cliffe at btinternet.com>> wrote:
>
>
>     On 02/02/2015 11:19, Todd wrote:
>>
>>     On Mon, Feb 2, 2015 at 11:53 AM, Chris Angelico <rosuav at gmail.com
>>     <mailto:rosuav at gmail.com>> wrote:
>>
>>         On Mon, Feb 2, 2015 at 9:26 PM, Todd <toddrjen at gmail.com
>>         <mailto:toddrjen at gmail.com>> wrote:
>>         > First, it wouldn't be a replacement. The existing range
>>         syntax would still
>>         > exist.
>>         >
>>         > But the reason it is beneficial is the same reason we have
>>         [a, b, c] for
>>         > list, {a:1, b:2, c:3} for dicts, {a, b, c} for sets, and
>>         (a, b, c) for
>>         > tuples. 
>>
>     Well, we have to have *some* syntax for literal lists, dicts etc.
>     But we already have range, so there is no compelling need to add
>     new syntax.
>
>
> Why do we need literals at all?  They are just syntactic sugar.  
> Python went a long time without a set literal.
Well, if you'd rather write
     L = list()
     L.append('foo')
     L.append('bar')
     L.append('baz')
than
     L = ['foo', 'bar', 'baz']
then good luck to you.



From jmcs at jsantos.eu  Mon Feb  2 13:48:33 2015
From: jmcs at jsantos.eu (=?UTF-8?B?Sm/Do28gU2FudG9z?=)
Date: Mon, 2 Feb 2015 13:48:33 +0100
Subject: [Python-ideas] More useful slices
In-Reply-To: <54CF710A.6000601@btinternet.com>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <CADiSq7e37xUqrzJObk9O6gbzCdu+MnKcGn+mYuxmCARbGbuQUA@mail.gmail.com>
 <CAFpSVpLXnX3AiKBgr0kfbMzKRuNqqgm043zdw1JnRJAtGStSFg@mail.gmail.com>
 <CAPTjJmqbKD5L=DfmyhDDnS4ttEoZLJ+Onr9fHG7wcpu6yUqdAQ@mail.gmail.com>
 <CAFpSVpJ-+T_3mFFXh1RLXkvrLH2w=cKU8sU3nhp7yHtor+rbTA@mail.gmail.com>
 <54CF6938.30006@btinternet.com>
 <CAFpSVpKN97_WjDJaQWvqS3WVj_6Cx_1tWFCQ56+AtP_C8RLiAQ@mail.gmail.com>
 <54CF710A.6000601@btinternet.com>
Message-ID: <CAH_XWH0d-eBf6TDdGSkgK=jU-v3mq_BiKGwY01At5k6ye0-VAA@mail.gmail.com>

L = ['foo', 'bar', 'baz'] is syntactic sugar; you can write L =
list('foo', 'bar', 'baz')

On 2 February 2015 at 13:43, Rob Cliffe <rob.cliffe at btinternet.com> wrote:

>
> On 02/02/2015 12:38, Todd wrote:
>
>  On Mon, Feb 2, 2015 at 1:10 PM, Rob Cliffe <rob.cliffe at btinternet.com>
> wrote:
>
>>
>> On 02/02/2015 11:19, Todd wrote:
>>
>>
>> On Mon, Feb 2, 2015 at 11:53 AM, Chris Angelico <rosuav at gmail.com> wrote:
>>
>>> On Mon, Feb 2, 2015 at 9:26 PM, Todd <toddrjen at gmail.com> wrote:
>>> > First, it wouldn't be a replacement.  The existing range syntax would
>>> still
>>> > exist.
>>> >
>>> > But the reason it is beneficial is the same reason we have [a, b, c]
>>> for
>>> > list, {a:1, b:2, c:3} for dicts, {a, b, c} for sets, and (a, b, c) for
>>> > tuples.
>>
>>    Well, we have to have *some* syntax for literal lists, dicts etc.
>> But we already have range, so there is no compelling need to add new
>> syntax.
>>
>
>  Why do we need literals at all?  They are just syntactic sugar.  Python
> went a long time without a set literal.
>
> Well, if you'd rather write
>     L = list()
>     L.append('foo')
>     L.append('bar')
>     L.append('baz')
> than
>     L = ['foo', 'bar', 'baz']
> then good luck to you.
>
>
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>

From p.f.moore at gmail.com  Mon Feb  2 13:48:51 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Mon, 2 Feb 2015 12:48:51 +0000
Subject: [Python-ideas] More useful slices
In-Reply-To: <CAFpSVpK6mTh9rqNxCZajGnmQhTOHFZcb7GDm5XAsYRw+tgpp4w@mail.gmail.com>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <54CE9FDB.2010606@canterbury.ac.nz>
 <CAFpSVpK6mTh9rqNxCZajGnmQhTOHFZcb7GDm5XAsYRw+tgpp4w@mail.gmail.com>
Message-ID: <CACac1F8DcE+8+wDd=x9mWkTX9j6Ox7t_pdh3p0Bb6va3gMBYqA@mail.gmail.com>

On 2 February 2015 at 10:22, Todd <toddrjen at gmail.com> wrote:
> Let's just call this a range literal.  It has some features similar to
> slices, but not all.

So what would d[1:10:2] mean? d[Slice(1,10,2)] or d[range(1,10,2)]?

At the moment 1:10:2 is a slice literal, and you're proposing to
repurpose it as a range literal? What have I missed? Because that's
never going to satisfy backward compatibility requirements.

Paul

From mertz at gnosis.cx  Mon Feb  2 13:57:24 2015
From: mertz at gnosis.cx (David Mertz)
Date: Mon, 2 Feb 2015 16:57:24 +0400
Subject: [Python-ideas] More useful slices
In-Reply-To: <CAFpSVpJ-+T_3mFFXh1RLXkvrLH2w=cKU8sU3nhp7yHtor+rbTA@mail.gmail.com>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <CADiSq7e37xUqrzJObk9O6gbzCdu+MnKcGn+mYuxmCARbGbuQUA@mail.gmail.com>
 <CAFpSVpLXnX3AiKBgr0kfbMzKRuNqqgm043zdw1JnRJAtGStSFg@mail.gmail.com>
 <CAPTjJmqbKD5L=DfmyhDDnS4ttEoZLJ+Onr9fHG7wcpu6yUqdAQ@mail.gmail.com>
 <CAFpSVpJ-+T_3mFFXh1RLXkvrLH2w=cKU8sU3nhp7yHtor+rbTA@mail.gmail.com>
Message-ID: <CAEbHw4buHLn=1Bi5nntpGgMnPgOs1n+2yyb51Sn3s5H-oDH5aw@mail.gmail.com>

> It would be
>
> for i in (1:10):
>    pass

If you can spare one character you can define a small helper, similar to
take, whose subscript operator accepts a slice object as argument (slice
notation is only valid inside square brackets). Then call it like:

for i in I[1:10]:
    # do stuff

No change in syntax needed, probably 6 lines in the definition.

I chose 'I' for Integers, and because it looks small. You could use a
different one-letter name.
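
A minimal working sketch of such a helper (the names `I` and `_Integers`
are illustrative, not an existing API; since slice notation is only legal
inside square brackets, the sketch subscripts rather than calls):

```python
class _Integers:
    """Turn subscript slice notation into an equivalent range object."""

    def __getitem__(self, s):
        # Fill in range's defaults for omitted slice fields.
        start = 0 if s.start is None else s.start
        step = 1 if s.step is None else s.step
        return range(start, s.stop, step)


I = _Integers()

assert list(I[1:10]) == [1, 2, 3, 4, 5, 6, 7, 8, 9]
assert list(I[1:10:2]) == [1, 3, 5, 7, 9]
```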


> Anyone who uses sequences should be familiar with this sort of notation:
>
> for i in spam[1:10]:
>   pass
>
> So should have at least some understanding that 1:10 can mean "1 to 10,
not including 10".
>
> I don't see it being any more cryptic than {'a': 1, 'b': 2} meaning
dict(a=1, b=2).  On the contrary, I think the dict literal syntax is even
more cryptic, since it has no similarity to any other syntax.  The syntax I
propose here is at least similar (although not identical) to slicing.
>
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/

From skip.montanaro at gmail.com  Mon Feb  2 14:45:51 2015
From: skip.montanaro at gmail.com (Skip Montanaro)
Date: Mon, 2 Feb 2015 07:45:51 -0600
Subject: [Python-ideas] More useful slices
In-Reply-To: <20150202123713.GS2498@ando.pearwood.info>
References: <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <CADiSq7e37xUqrzJObk9O6gbzCdu+MnKcGn+mYuxmCARbGbuQUA@mail.gmail.com>
 <CAFpSVpLXnX3AiKBgr0kfbMzKRuNqqgm043zdw1JnRJAtGStSFg@mail.gmail.com>
 <CAPTjJmqbKD5L=DfmyhDDnS4ttEoZLJ+Onr9fHG7wcpu6yUqdAQ@mail.gmail.com>
 <CAFpSVpJ-+T_3mFFXh1RLXkvrLH2w=cKU8sU3nhp7yHtor+rbTA@mail.gmail.com>
 <54CF6938.30006@btinternet.com>
 <20150202123713.GS2498@ando.pearwood.info>
Message-ID: <CANc-5Uwo3E6wreu9CP41e=dEeBicsGBQQ6h9iXEKhFJuKOKKTg@mail.gmail.com>

> I'm partial to
>
>     for i in 1...10: pass
>
> myself.

You have to handle step size != 1 in the loop.

I'm +0 on the whole idea of repurposing/creating syntax for range(). On the
one hand, I think the syntactic sugar can be kind of handy, and runtime
optimization of loop constructs is likely to be a bit better. OTOH, I doubt
the presence of a call to range() vs. a fixed chunk of syntax is a big
impediment to the PyPy folks and others interested in optimizing Python
performance.

Skip

From toddrjen at gmail.com  Mon Feb  2 15:00:39 2015
From: toddrjen at gmail.com (Todd)
Date: Mon, 2 Feb 2015 15:00:39 +0100
Subject: [Python-ideas] More useful slices
In-Reply-To: <CACac1F8DcE+8+wDd=x9mWkTX9j6Ox7t_pdh3p0Bb6va3gMBYqA@mail.gmail.com>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <54CE9FDB.2010606@canterbury.ac.nz>
 <CAFpSVpK6mTh9rqNxCZajGnmQhTOHFZcb7GDm5XAsYRw+tgpp4w@mail.gmail.com>
 <CACac1F8DcE+8+wDd=x9mWkTX9j6Ox7t_pdh3p0Bb6va3gMBYqA@mail.gmail.com>
Message-ID: <CAFpSVpLTbq9kweRyyUSvirvcVWUABWS5CN3vt3=4vejK3kvQeQ@mail.gmail.com>

On Mon, Feb 2, 2015 at 1:48 PM, Paul Moore <p.f.moore at gmail.com> wrote:

> On 2 February 2015 at 10:22, Todd <toddrjen at gmail.com> wrote:
> > Let's just call this a range literal.  It has some features similar to
> > slices, but not all.
>
> So what would d[1:10:2] mean? d[Slice(1,10,2)] or d[range(1,10,2)]?
>
> At the moment 1:10:2 is a slice literal, and you're proposing to
> repurpose it as a range literal? What have I missed? Because that's
> never going to satisfy backward compatibility requirements.
>
>
No, I am proposing that (x:y:z) is syntactic sugar for range(x, y, z).
Note the parentheses.  Currently wrapping slice notation in parentheses
is not valid syntax, so there are no backwards-compatibility issues.  Bare
slice notation, outside either parentheses or an index, will remain invalid
syntax.

I am absolutely *not* proposing that anything change with existing slices.
d[1:10:2] would not change at all.  d[(1:10:2)] would become valid syntax,
but it is not the use-case that prompted it and I doubt anyone would
actually want to do that.

I am more interested in something like:

for i in (1:10:2):
   pass

I am also *not* proposing the following syntax, since it would be ambiguous
in many cases:

for i in 1:10:2:
    pass
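
In today's Python the intended loop is spelled with an explicit range call
(the proposed form appears only in a comment, since it is not valid
syntax):

```python
# Proposed: for i in (1:10:2): ...
seen = []
for i in range(1, 10, 2):  # today's spelling of the proposed (1:10:2)
    seen.append(i)
assert seen == [1, 3, 5, 7, 9]
```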

From rob.cliffe at btinternet.com  Mon Feb  2 15:44:27 2015
From: rob.cliffe at btinternet.com (Rob Cliffe)
Date: Mon, 02 Feb 2015 14:44:27 +0000
Subject: [Python-ideas] More useful slices
In-Reply-To: <CAH_XWH0d-eBf6TDdGSkgK=jU-v3mq_BiKGwY01At5k6ye0-VAA@mail.gmail.com>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>	<CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>	<CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>	<CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>	<CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>	<CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>	<CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>	<CADiSq7e37xUqrzJObk9O6gbzCdu+MnKcGn+mYuxmCARbGbuQUA@mail.gmail.com>	<CAFpSVpLXnX3AiKBgr0kfbMzKRuNqqgm043zdw1JnRJAtGStSFg@mail.gmail.com>	<CAPTjJmqbKD5L=DfmyhDDnS4ttEoZLJ+Onr9fHG7wcpu6yUqdAQ@mail.gmail.com>	<CAFpSVpJ-+T_3mFFXh1RLXkvrLH2w=cKU8sU3nhp7yHtor+rbTA@mail.gmail.com>	<54CF6938.30006@btinternet.com>	<CAFpSVpKN97_WjDJaQWvqS3WVj_6Cx_1tWFCQ56+AtP_C8RLiAQ@mail.gmail.com>	<54CF710A.6000601@btinternet.com>
 <CAH_XWH0d-eBf6TDdGSkgK=jU-v3mq_BiKGwY01At5k6ye0-VAA@mail.gmail.com>
Message-ID: <54CF8D4B.3010800@btinternet.com>


On 02/02/2015 12:48, João Santos wrote:
> L = ['foo', 'bar', 'baz'] is syntactic sugar, you can write L = 
> list('foo', 'bar', 'baz')
No you can't (at least not in Python 2.7.3).  (A good advertisement for
testing your code before publishing it.)  It has to be L = list(('foo',
'bar', 'baz')), which uses a tuple literal.


From p.f.moore at gmail.com  Mon Feb  2 15:53:36 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Mon, 2 Feb 2015 14:53:36 +0000
Subject: [Python-ideas] Hooks into the IO system to intercept raw file
	reads/writes
Message-ID: <CACac1F8Nm9Sp+Hs1O+TgOJfsDDTRttomUB8DoZWHr7UvEb3E2g@mail.gmail.com>

There's a lot of flexibility in the new layered IO system, but one
thing it doesn't allow is any means of adding "hooks" to the data
flow, or manipulation of an already-created io object.

For example, when a subprocess.Popen object uses a pipe for the
child's stdout, the data is captured instead of writing it to the
console. Sometimes it would be nice to capture it, but still write to
the console. That would be easy to do if we could wrap the underlying
RawIOBase object and intercept read() calls[1]. A subclass of
RawIOBase can do this trivially, but there's no way of replacing the
class on an existing stream.

The obvious approach would be to reassign the "raw" attribute of the
BufferedIOBase object, but that's readonly. Would it be possible to
make it read/write? Or provide another way of replacing the raw IO
object underlying an io object?

I'm sure there are buffer integrity issues to work out, but are there
any more fundamental problems with this approach?

Paul

[1] Actually, it's *not* that easy, because subprocess.Popen objects
are insanely hard to subclass - there are no hooks into the pipe
creation process, and no way to intercept the object before the
subprocess gets run (that happens in the __init__ method). But that's
a separate issue, and also the subject of a different thread here.
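To make the idea concrete, here is a minimal sketch of the kind of wrapper I mean (the class name and setup are made up, not a proposal for the actual API) -- the missing piece is only a supported way to splice such a wrapper into an *existing* stream:

```python
import io

class TeeReader(io.RawIOBase):
    """Wrap a raw stream so every read is also echoed to a sink."""

    def __init__(self, raw, sink):
        self._raw = raw      # the underlying RawIOBase-like object
        self._sink = sink    # e.g. sys.stdout.buffer

    def readable(self):
        return True

    def readinto(self, b):
        # Delegate the read, then echo whatever was read to the sink.
        n = self._raw.readinto(b)
        if n:
            self._sink.write(bytes(b[:n]))
        return n

# Works fine when you control construction of the buffered layer:
src = io.BytesIO(b'hello world')
sink = io.BytesIO()
reader = io.BufferedReader(TeeReader(src, sink))
print(reader.read())       # b'hello world'
print(sink.getvalue())     # b'hello world'  (the captured copy)
```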

From mistersheik at gmail.com  Mon Feb  2 16:01:34 2015
From: mistersheik at gmail.com (Neil Girdhar)
Date: Mon, 2 Feb 2015 07:01:34 -0800 (PST)
Subject: [Python-ideas] More useful slices
In-Reply-To: <20150201172712.GO2498@ando.pearwood.info>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <20150201172712.GO2498@ando.pearwood.info>
Message-ID: <f828d7ea-3d27-4df8-b237-4fcc630d4f6c@googlegroups.com>

But it is meaningful to have a range with an unspecified stop.  That's 
itertools.count.
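That is:

```python
from itertools import count, islice

# count() is effectively a range with no stop:
c = count(1, 2)            # start=1, step=2, unbounded
print(list(islice(c, 5)))  # [1, 3, 5, 7, 9]
```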

On Sunday, February 1, 2015 at 12:27:53 PM UTC-5, Steven D'Aprano wrote:
>
> On Sun, Feb 01, 2015 at 04:13:32PM +0100, Todd wrote: 
> > Although slices and ranges are used for different things and implemented 
> > differently, conceptually they are similar: they define an integer 
> sequence 
> > with a start, stop, and step size. 
>
> I'm afraid that you are mistaken. Slices do not necessarily define 
> integer sequences. They are completely arbitrary, and it is up to the 
> object being sliced to interpret what is or isn't valid and what the 
> slice means. 
>
> py> class K(object): 
> ...     def __getitem__(self, i): 
> ...             print(i) 
> ... 
> py> K()["abc":(2,3,4):{}] 
> slice('abc', (2, 3, 4), {}) 
>
>
> In addition, the "stop" argument to a slice may be unspecified until 
> it is applied to a specific sequence, while range always requires 
> stop to be defined. 
>
> py> s = slice(1, None, 2) 
> py> "abcde"[s] 
> 'bd' 
> py> "abcdefghi"[s] 
> 'bdfh' 
>
> It isn't meaningful to define a range with an unspecified stop, but it 
> is meaningful to do so for slices. 
>
>
> In other words, the similarity between slices and ranges is not as close 
> as you think. 
>
>
> -- 
> Steven 
> _______________________________________________ 
> Python-ideas mailing list 
> Python... at python.org 
> https://mail.python.org/mailman/listinfo/python-ideas 
> Code of Conduct: http://python.org/psf/codeofconduct/ 
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150202/c3ff55ec/attachment-0001.html>

From guettliml at thomas-guettler.de  Mon Feb  2 16:27:19 2015
From: guettliml at thomas-guettler.de (=?windows-1252?Q?Thomas_G=FCttler?=)
Date: Mon, 02 Feb 2015 16:27:19 +0100
Subject: [Python-ideas] Infinite in Python (not just in datetime)
In-Reply-To: <20150202102948.GR2498@ando.pearwood.info>
References: <54CF32A0.50009@thomas-guettler.de>
 <20150202102948.GR2498@ando.pearwood.info>
Message-ID: <54CF9757.2010301@thomas-guettler.de>


Am 02.02.2015 um 11:29 schrieb Steven D'Aprano:
> On Mon, Feb 02, 2015 at 09:17:36AM +0100, Thomas Güttler wrote:
>> Thank you very much, that there are some developers who support the
>> idea of infinite in the datetime module.
>>
>>
>> Next idea: If Python would support infinite in the core, implementing
>> infinite in datetime might be much easier.
>
> I don't understand what that means. What does it mean for Python to
> support infinite in the core?
>
> Infinite what?

Integer

An infinite timedelta could be created like this:

datetime.timedelta(days=infinite)
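For what it's worth, today's timedelta rejects even an infinite *float* outright, so core support really would have to be added before this could work (a quick check, not a proposal):

```python
import datetime

# Current CPython refuses infinite values in timedelta arguments:
try:
    datetime.timedelta(days=float('inf'))
except OverflowError as e:
    print('rejected:', e)
```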

   Thomas G?ttler

From greg at krypto.org  Mon Feb  2 17:00:41 2015
From: greg at krypto.org (Gregory P. Smith)
Date: Mon, 02 Feb 2015 16:00:41 +0000
Subject: [Python-ideas] Adding a subprocess.CompletedProcess class
References: <CAOvn4qgTKVTbcY+BN_Z7-v8bMRprn=2FSqcnMuw9j+NT-cegPA@mail.gmail.com>
 <CAGE7PNKQ0QpZ-3hab4xJppknhCOOCJXuRWadsKGeGFz-BbJg_A@mail.gmail.com>
 <CADiSq7dAYUudHbDzRuVxmL6SsT4i8_50jvGNPf0G2Jet7MS8Dg@mail.gmail.com>
 <CAOvn4qhH2Et49NgMMED3DQ-9GuX1k8cELGEXyegh64ezMHoqfw@mail.gmail.com>
 <20150128171318.22a829eb@anarchist.wooz.org>
 <CADiSq7etZdNe9mgjpfXO8b=r5981-vUxP3t=Kn+62n-SwW+b1w@mail.gmail.com>
 <CAOvn4qiH_NScSjg0W+FXSO21jufu3upHLFcaTeqxiR85-wakxQ@mail.gmail.com>
 <CACac1F913pukSDKrWTP46M2XEcfU2j90+N_pBoagHxGS3kHSRA@mail.gmail.com>
 <CACfEFw-9CuNcfG+uvaUKk+qV6wt38B7MeQfW98o_aTYVzspg4Q@mail.gmail.com>
 <CACac1F_ftnk+D7q3V3pdhHkZS2BLHrA=eYMZd6QP44QNyiSd0A@mail.gmail.com>
 <CACfEFw--Wd79_16AVJPk6MVRHuSOy117duw_zXO=GXXVFxFPYg@mail.gmail.com>
 <CACac1F9w-mXk6+sk9PensadBgtWefQajkqtrP2G54NyGEXD+=Q@mail.gmail.com>
Message-ID: <CAGE7PNKLYstY2PvPpYKa1RTnNwjcs2v8uFYjX-72e8YxVQyGdQ@mail.gmail.com>

For the capture and forward of output you probably want to base a solution
on https://docs.python.org/3/library/asyncio-subprocess.html these days
rather than rolling your own poll + read + echo loop.
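A rough sketch of what that looks like (function name made up; written with the modern asyncio.run() spelling for brevity -- with 3.4 you would use loop.run_until_complete()):

```python
import asyncio
import sys

async def run_and_echo(*cmd):
    """Run cmd, echoing each stdout line while also capturing it."""
    proc = await asyncio.create_subprocess_exec(
        *cmd, stdout=asyncio.subprocess.PIPE)
    captured = []
    while True:
        line = await proc.stdout.readline()
        if not line:
            break
        sys.stdout.buffer.write(line)   # forward to the console...
        captured.append(line)           # ...and keep a copy
    await proc.wait()
    return b''.join(captured)

# e.g.:
# out = asyncio.run(run_and_echo(sys.executable, '-c', 'print("hi")'))
```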

On Mon, Feb 2, 2015, 12:08 AM Paul Moore <p.f.moore at gmail.com> wrote:

> On 2 February 2015 at 01:50, Wes Turner <wes.turner at gmail.com> wrote:
> >> I'm not sure how that's relevant. Sure, if the child buffers output
> >> there's a delay in seeing it, but that's no more an issue than it
> >> would be for display on the console.
> >
> > Ah. The link on that page that I was looking for was probably "Using
> > Redirection" #using-redirection and/or " Capturing stdout and stderr from
> > commands" #capturing-stdout-and-stderr-from-commands
>
> Yes, but again the issue here isn't capturing stdout/stderr, but
> capturing them *while having them still displayed on the terminal*.
> Anyway, this is starting to hijack the thread, so I'll go and have a
> more detailed look at sarge to see if it does provide what I need
> somehow.
>
> Thanks,
> Paul
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150202/dfb00045/attachment.html>

From p.f.moore at gmail.com  Mon Feb  2 17:05:41 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Mon, 2 Feb 2015 16:05:41 +0000
Subject: [Python-ideas] Adding a subprocess.CompletedProcess class
In-Reply-To: <CAGE7PNKLYstY2PvPpYKa1RTnNwjcs2v8uFYjX-72e8YxVQyGdQ@mail.gmail.com>
References: <CAOvn4qgTKVTbcY+BN_Z7-v8bMRprn=2FSqcnMuw9j+NT-cegPA@mail.gmail.com>
 <CAGE7PNKQ0QpZ-3hab4xJppknhCOOCJXuRWadsKGeGFz-BbJg_A@mail.gmail.com>
 <CADiSq7dAYUudHbDzRuVxmL6SsT4i8_50jvGNPf0G2Jet7MS8Dg@mail.gmail.com>
 <CAOvn4qhH2Et49NgMMED3DQ-9GuX1k8cELGEXyegh64ezMHoqfw@mail.gmail.com>
 <20150128171318.22a829eb@anarchist.wooz.org>
 <CADiSq7etZdNe9mgjpfXO8b=r5981-vUxP3t=Kn+62n-SwW+b1w@mail.gmail.com>
 <CAOvn4qiH_NScSjg0W+FXSO21jufu3upHLFcaTeqxiR85-wakxQ@mail.gmail.com>
 <CACac1F913pukSDKrWTP46M2XEcfU2j90+N_pBoagHxGS3kHSRA@mail.gmail.com>
 <CACfEFw-9CuNcfG+uvaUKk+qV6wt38B7MeQfW98o_aTYVzspg4Q@mail.gmail.com>
 <CACac1F_ftnk+D7q3V3pdhHkZS2BLHrA=eYMZd6QP44QNyiSd0A@mail.gmail.com>
 <CACfEFw--Wd79_16AVJPk6MVRHuSOy117duw_zXO=GXXVFxFPYg@mail.gmail.com>
 <CACac1F9w-mXk6+sk9PensadBgtWefQajkqtrP2G54NyGEXD+=Q@mail.gmail.com>
 <CAGE7PNKLYstY2PvPpYKa1RTnNwjcs2v8uFYjX-72e8YxVQyGdQ@mail.gmail.com>
Message-ID: <CACac1F_drs+MuqetBeB4Uf-wNap1KvqXNxQd2D702kN_zbu1hA@mail.gmail.com>

On 2 February 2015 at 16:00, Gregory P. Smith <greg at krypto.org> wrote:
> For the capture and forward of output you probably want to base a solution
> on https://docs.python.org/3/library/asyncio-subprocess.html these days
> rather than rolling your own poll + read + echo loop.

For integrating with existing code, can asyncio interoperate with
subprocess.Popen objects? I couldn't see how from my reading of the
docs. It feels to me from the docs as if asyncio is an architecture
choice - if you're not using asyncio throughout your program, it's of
no help to you. I may well have misunderstood something, though, as I
thought one of the aims of asyncio was that it could work in
conjunction with more traditional code.

Paul

PS Are there any good asyncio tutorials? Particularly ones *not* about
implementing network protocols. It feels like you need a degree to
understand the docs :-(

From chris.barker at noaa.gov  Mon Feb  2 17:11:26 2015
From: chris.barker at noaa.gov (Chris Barker - NOAA Federal)
Date: Mon, 2 Feb 2015 08:11:26 -0800
Subject: [Python-ideas] More useful slices
In-Reply-To: <f828d7ea-3d27-4df8-b237-4fcc630d4f6c@googlegroups.com>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <20150201172712.GO2498@ando.pearwood.info>
 <f828d7ea-3d27-4df8-b237-4fcc630d4f6c@googlegroups.com>
Message-ID: <303232672200862548@unknownmsgid>

Please go read PEP 204, as Nick suggested. It would be nice if there was
more Kongo as to why it was rejected, but it was, indeed, rejected.

I also recall a lot of drawn out threads about more compact syntax for a
for loop over integers, such as making the integer object iterable:

for I in 10:
    pass

Which were also all rejected.

Unless someone comes up with a compelling argument that something has
changed, it seems this conversation has been had already.

-Chris



On Feb 2, 2015, at 7:02 AM, Neil Girdhar <mistersheik at gmail.com> wrote:

But it is meaningful to have a range with an unspecified stop.  That's
itertools.count.

On Sunday, February 1, 2015 at 12:27:53 PM UTC-5, Steven D'Aprano wrote:
>
> On Sun, Feb 01, 2015 at 04:13:32PM +0100, Todd wrote:
> > Although slices and ranges are used for different things and implemented
> > differently, conceptually they are similar: they define an integer
> sequence
> > with a start, stop, and step size.
>
> I'm afraid that you are mistaken. Slices do not necessarily define
> integer sequences. They are completely arbitrary, and it is up to the
> object being sliced to interpret what is or isn't valid and what the
> slice means.
>
> py> class K(object):
> ...     def __getitem__(self, i):
> ...             print(i)
> ...
> py> K()["abc":(2,3,4):{}]
> slice('abc', (2, 3, 4), {})
>
>
> In addition, the "stop" argument to a slice may be unspecified until
> it is applied to a specific sequence, while range always requires
> stop to be defined.
>
> py> s = slice(1, None, 2)
> py> "abcde"[s]
> 'bd'
> py> "abcdefghi"[s]
> 'bdfh'
>
> It isn't meaningful to define a range with an unspecified stop, but it
> is meaningful to do so for slices.
>
>
> In other words, the similarity between slices and ranges is not as close
> as you think.
>
>
> --
> Steven
> _______________________________________________
> Python-ideas mailing list
> Python... at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>
_______________________________________________
Python-ideas mailing list
Python-ideas at python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150202/5a6afbfc/attachment-0001.html>

From guido at python.org  Mon Feb  2 17:20:08 2015
From: guido at python.org (Guido van Rossum)
Date: Mon, 2 Feb 2015 08:20:08 -0800
Subject: [Python-ideas] More useful slices
In-Reply-To: <303232672200862548@unknownmsgid>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <20150201172712.GO2498@ando.pearwood.info>
 <f828d7ea-3d27-4df8-b237-4fcc630d4f6c@googlegroups.com>
 <303232672200862548@unknownmsgid>
Message-ID: <CAP7+vJ+mh-YpftqLeVfCAoyDs-BOVNZAzn1hHJNbH4iHS-qTww@mail.gmail.com>

I don't want to have to have a long drawn-out argument over why I don't like
this proposal or PEP 204. For every reason I can bring up there will surely
be someone who strongly disagrees with that reason. But I'm still going to
reject it -- we already have range() and slice(), with their subtle but
irreconcilable differences, and neither of them is important enough to
warrant any more support built into the language.

On Mon, Feb 2, 2015 at 8:11 AM, Chris Barker - NOAA Federal <
chris.barker at noaa.gov> wrote:

> Please go read PEP 204, as Nick suggested. It would be nice if there was
> more Kongo as to why it was rejected, but it was, indeed, rejected.
>
> I also recall a lot of drawn out threads about more compact syntax for a
> for loop over integers, such as making the integer object iterable:
>
> for I in 10:
>     pass
>
> Which were also all rejected.
>
> Unless someone comes up with a compelling argument that something has
> changed, it seems this conversation has been had already.
>
> -Chris
>
>
>
> On Feb 2, 2015, at 7:02 AM, Neil Girdhar <mistersheik at gmail.com> wrote:
>
> But it is meaningful to have a range with an unspecified stop.  That's
> itertools.count.
>
> On Sunday, February 1, 2015 at 12:27:53 PM UTC-5, Steven D'Aprano wrote:
>>
>> On Sun, Feb 01, 2015 at 04:13:32PM +0100, Todd wrote:
>> > Although slices and ranges are used for different things and
>> implemented
>> > differently, conceptually they are similar: they define an integer
>> sequence
>> > with a start, stop, and step size.
>>
>> I'm afraid that you are mistaken. Slices do not necessarily define
>> integer sequences. They are completely arbitrary, and it is up to the
>> object being sliced to interpret what is or isn't valid and what the
>> slice means.
>>
>> py> class K(object):
>> ...     def __getitem__(self, i):
>> ...             print(i)
>> ...
>> py> K()["abc":(2,3,4):{}]
>> slice('abc', (2, 3, 4), {})
>>
>>
>> In addition, the "stop" argument to a slice may be unspecified until
>> it is applied to a specific sequence, while range always requires
>> stop to be defined.
>>
>> py> s = slice(1, None, 2)
>> py> "abcde"[s]
>> 'bd'
>> py> "abcdefghi"[s]
>> 'bdfh'
>>
>> It isn't meaningful to define a range with an unspecified stop, but it
>> is meaningful to do so for slices.
>>
>>
>> In other words, the similarity between slices and ranges is not as close
>> as you think.
>>
>>
>> --
>> Steven
>> _______________________________________________
>> Python-ideas mailing list
>> Python... at python.org
>> https://mail.python.org/mailman/listinfo/python-ideas
>> Code of Conduct: http://python.org/psf/codeofconduct/
>>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>



-- 
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150202/2fe5fb83/attachment.html>

From guido at python.org  Mon Feb  2 17:31:53 2015
From: guido at python.org (Guido van Rossum)
Date: Mon, 2 Feb 2015 08:31:53 -0800
Subject: [Python-ideas] Hooks into the IO system to intercept raw file
	reads/writes
In-Reply-To: <CACac1F8Nm9Sp+Hs1O+TgOJfsDDTRttomUB8DoZWHr7UvEb3E2g@mail.gmail.com>
References: <CACac1F8Nm9Sp+Hs1O+TgOJfsDDTRttomUB8DoZWHr7UvEb3E2g@mail.gmail.com>
Message-ID: <CAP7+vJJYS6GW1SEf3Y=ESi9e3zW5usXyWzoX6j1e2ACjbZNghw@mail.gmail.com>

I'm all for flexible I/O processing, but I worry that the idea brought up
here feels a little half-baked. First of all, it seems to mention two
separate cases of subclassing (both io.RawIOBase and subprocess.Popen).
These days, subclassing(*) is often an anti-pattern: unless done with
considerable foresight, every detail of the base class implementation
essentially becomes part of the interface that the subclass relies upon,
and now the base class becomes too constrained in its evolution. In my
experience, a well-done API is usually much easier to evolve than even a
very-well-done base class.

The other thing is that I can't actually imagine the details of your
proposal. Is the idea that you subclass RawIOBase to implement "tee"
behavior? Why can't you do that at the receiving end? Is perhaps the
proposal to assign the base object a work-around for an interface design in
the Popen class? (I'm sure that class is far from perfect -- but it's also
super constrained by the need to support Windows process creation.)

_____
(*) I'm talking about subclassing as an API mechanism. A set of
interrelated classes can work well if they are all part of the same
package, so their implementations can evolve together as needed. But when
proposing APIs which serve as important abstractions, it's much better if
new abstractions are built by combining and wrapping objects rather than by
subclassing.

On Mon, Feb 2, 2015 at 6:53 AM, Paul Moore <p.f.moore at gmail.com> wrote:

> There's a lot of flexibility in the new layered IO system, but one
> thing it doesn't allow is any means of adding "hooks" to the data
> flow, or manipulation of an already-created io object.
>
> For example, when a subprocess.Popen object uses a pipe for the
> child's stdout, the data is captured instead of writing it to the
> console. Sometimes it would be nice to capture it, but still write to
> the console. That would be easy to do if we could wrap the underlying
> RawIOBase object and intercept read() calls[1]. A subclass of
> RawIOBase can do this trivially, but there's no way of replacing the
> class on an existing stream.
>
> The obvious approach would be to reassign the "raw" attribute of the
> BufferedIOBase object, but that's readonly. Would it be possible to
> make it read/write? Or provide another way of replacing the raw IO
> object underlying an io object?
>
> I'm sure there are buffer integrity issues to work out, but are there
> any more fundamental problems with this approach?
>
> Paul
>
> [1] Actually, it's *not* that easy, because subprocess.Popen objects
> are insanely hard to subclass - there are no hooks into the pipe
> creation process, and no way to intercept the object before the
> subprocess gets run (that happens in the __init__ method). But that's
> a separate issue, and also the subject of a different thread here.
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>



-- 
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150202/c43555e3/attachment.html>

From toddrjen at gmail.com  Mon Feb  2 17:33:28 2015
From: toddrjen at gmail.com (Todd)
Date: Mon, 2 Feb 2015 17:33:28 +0100
Subject: [Python-ideas] More useful slices
In-Reply-To: <303232672200862548@unknownmsgid>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <20150201172712.GO2498@ando.pearwood.info>
 <f828d7ea-3d27-4df8-b237-4fcc630d4f6c@googlegroups.com>
 <303232672200862548@unknownmsgid>
Message-ID: <CAFpSVpLWYc_oX5+i8=CR7ofC55qfQfTkCiUTTrMTJunudN=4yg@mail.gmail.com>

On Mon, Feb 2, 2015 at 5:11 PM, Chris Barker - NOAA Federal <
chris.barker at noaa.gov> wrote:

> Please go read PEP 204, as Nick suggested. It would be nice if there was
> more Kongo as to why it was rejected, but it was, indeed, rejected.
>
> I also recall a lot of drawn out threads about more compact syntax for a
> for loop over integers, such as making the integer object iterable:
>
> for I in 10:
>     pass
>
> Which were also all rejected.
>
> Unless someone comes up with a compelling argument that something has
> changed, it seems this conversation has been had already.
>
>

My syntax is not identical to the syntax proposed there, and it was chosen,
in part, to avoid some of the issues that were responsible for its
rejection.

The reasons given for rejection were "open issues" and "some confusion
between ranges and slice syntax".

For the latter, part of my reason for using parentheses instead of brackets
is to avoid potential confusion.  So without knowing what specific
confusion was being referred to I can't say whether this complaint is
relevant or not.

For the open issues, all of the issues are either no longer relevant in
python 3 or are not applicable to my proposed syntax.  The first is fixed
by the python 3 range type, the fourth is fixed by the elimination of the
long type in python 3, and the second, third, and fifth are not relevant to
my proposed syntax (in fact I chose parentheses instead of brackets partly
to avoid ambiguities and complications like these).
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150202/9448507d/attachment-0001.html>

From chris.barker at noaa.gov  Mon Feb  2 17:35:11 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Mon, 2 Feb 2015 08:35:11 -0800
Subject: [Python-ideas] More useful slices
In-Reply-To: <303232672200862548@unknownmsgid>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <20150201172712.GO2498@ando.pearwood.info>
 <f828d7ea-3d27-4df8-b237-4fcc630d4f6c@googlegroups.com>
 <303232672200862548@unknownmsgid>
Message-ID: <CALGmxELVqD7Usqxh4-76FTu6JF3RbLyLrox9AfhMDwENq1NOug@mail.gmail.com>

On Mon, Feb 2, 2015 at 8:11 AM, Chris Barker - NOAA Federal <
chris.barker at noaa.gov> wrote:

> It would be nice if there was more Kongo as to why it was rejected,
>

Wow -- weird auto-correct! That was supposed to be "info".

That's what I get for trying to write these notes on a phone on the bus...

Sorry,

 -Chris





> but it was, indeed, rejected.
>
> I also recall a lot of drawn out threads about more compact syntax for a
> for loop over integers, such as making the integer object iterable:
>
> for I in 10:
>     pass
>
> Which were also all rejected.
>
> Unless someone comes up with a compelling argument that something has
> changed, it seems this conversation has been had already.
>
> -Chris
>
>
>
> On Feb 2, 2015, at 7:02 AM, Neil Girdhar <mistersheik at gmail.com> wrote:
>
> But it is meaningful to have a range with an unspecified stop.  That's
> itertools.count.
>
> On Sunday, February 1, 2015 at 12:27:53 PM UTC-5, Steven D'Aprano wrote:
>>
>> On Sun, Feb 01, 2015 at 04:13:32PM +0100, Todd wrote:
>> > Although slices and ranges are used for different things and
>> implemented
>> > differently, conceptually they are similar: they define an integer
>> sequence
>> > with a start, stop, and step size.
>>
>> I'm afraid that you are mistaken. Slices do not necessarily define
>> integer sequences. They are completely arbitrary, and it is up to the
>> object being sliced to interpret what is or isn't valid and what the
>> slice means.
>>
>> py> class K(object):
>> ...     def __getitem__(self, i):
>> ...             print(i)
>> ...
>> py> K()["abc":(2,3,4):{}]
>> slice('abc', (2, 3, 4), {})
>>
>>
>> In addition, the "stop" argument to a slice may be unspecified until
>> it is applied to a specific sequence, while range always requires
>> stop to be defined.
>>
>> py> s = slice(1, None, 2)
>> py> "abcde"[s]
>> 'bd'
>> py> "abcdefghi"[s]
>> 'bdfh'
>>
>> It isn't meaningful to define a range with an unspecified stop, but it
>> is meaningful to do so for slices.
>>
>>
>> In other words, the similarity between slices and ranges is not as close
>> as you think.
>>
>>
>> --
>> Steven
>> _______________________________________________
>> Python-ideas mailing list
>> Python... at python.org
>> https://mail.python.org/mailman/listinfo/python-ideas
>> Code of Conduct: http://python.org/psf/codeofconduct/
>>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>
>


-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150202/24635a0f/attachment.html>

From chris.barker at noaa.gov  Mon Feb  2 17:45:01 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Mon, 2 Feb 2015 08:45:01 -0800
Subject: [Python-ideas] Infinite in Python (not just in datetime)
In-Reply-To: <54CF9757.2010301@thomas-guettler.de>
References: <54CF32A0.50009@thomas-guettler.de>
 <20150202102948.GR2498@ando.pearwood.info>
 <54CF9757.2010301@thomas-guettler.de>
Message-ID: <CALGmxELhvV7gXmqaN4VyvJyoHDw0crv-x6OvQh3t2+1ODYFbiQ@mail.gmail.com>

On Mon, Feb 2, 2015 at 7:27 AM, Thomas Güttler <guettliml at thomas-guettler.de
> wrote:

> Infinite what?
>>
>
> Integer
>

Well, inf is supported in floats because it is supported in the native
machine double. I suspect that adding inf and -inf (and NaN) to integers
would end up being a pretty heavy-weight addition.

On the other hand, my quick, off-the-cuff implementation of InfDateTime is,
in fact, a universal infinity -- i.e. it compares as greater than any
other object, not just a datetime object. I did that more because it was
easy than anything else, but perhaps it could be generally useful to have an
Inf and -Inf object, kind of like None.

Or it would just create a huge set of confusing corner cases :-)
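The "easy" version is only a few lines -- something like this sketch (names
made up), which also hints at the corner cases, e.g. Inf > Inf is False:

```python
from functools import total_ordering

@total_ordering
class _PosInfinity:
    """Sketch: an object that compares greater than everything else."""

    def __gt__(self, other):
        # Greater than anything that isn't itself another infinity.
        return not isinstance(other, _PosInfinity)

    def __eq__(self, other):
        return isinstance(other, _PosInfinity)

    def __repr__(self):
        return 'Inf'

Inf = _PosInfinity()

print(Inf > 10**100)   # True
print(0 < Inf)         # True (via the reflected comparison)
print(Inf > Inf)       # False
```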

I would much rather see an infinite datetime object than a big discussion
about the implications of a generic infinite object -- it is focused, has
proven use cases, and wouldn't impact anything beyond datetime.

-Chris




-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150202/d0fd6274/attachment.html>

From mistersheik at gmail.com  Mon Feb  2 16:29:17 2015
From: mistersheik at gmail.com (Neil Girdhar)
Date: Mon, 2 Feb 2015 07:29:17 -0800 (PST)
Subject: [Python-ideas] More useful slices
In-Reply-To: <f828d7ea-3d27-4df8-b237-4fcc630d4f6c@googlegroups.com>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <20150201172712.GO2498@ando.pearwood.info>
 <f828d7ea-3d27-4df8-b237-4fcc630d4f6c@googlegroups.com>
Message-ID: <5e6328ea-3897-4536-8e84-54185d3a7287@googlegroups.com>

This proposal is definitely *possible*, but is the only argument in its 
favor saving 5 characters?  You already have the shorthand exactly when you 
most need it (indexing).

On Monday, February 2, 2015 at 10:01:34 AM UTC-5, Neil Girdhar wrote:
>
> But it is meaningful to have a range with an unspecified stop.  That's 
> itertools.count.
>
> On Sunday, February 1, 2015 at 12:27:53 PM UTC-5, Steven D'Aprano wrote:
>>
>> On Sun, Feb 01, 2015 at 04:13:32PM +0100, Todd wrote: 
>> > Although slices and ranges are used for different things and 
>> implemented 
>> > differently, conceptually they are similar: they define an integer 
>> sequence 
>> > with a start, stop, and step size. 
>>
>> I'm afraid that you are mistaken. Slices do not necessarily define 
>> integer sequences. They are completely arbitrary, and it is up to the 
>> object being sliced to interpret what is or isn't valid and what the 
>> slice means. 
>>
>> py> class K(object): 
>> ...     def __getitem__(self, i): 
>> ...             print(i) 
>> ... 
>> py> K()["abc":(2,3,4):{}] 
>> slice('abc', (2, 3, 4), {}) 
>>
>>
>> In addition, the "stop" argument to a slice may be unspecified until 
>> it is applied to a specific sequence, while range always requires 
>> stop to be defined. 
>>
>> py> s = slice(1, None, 2) 
>> py> "abcde"[s] 
>> 'bd' 
>> py> "abcdefghi"[s] 
>> 'bdfh' 
>>
>> It isn't meaningful to define a range with an unspecified stop, but it 
>> is meaningful to do so for slices. 
>>
>>
>> In other words, the similarity between slices and ranges is not as close 
>> as you think. 
>>
>>
>> -- 
>> Steven 
>> _______________________________________________ 
>> Python-ideas mailing list 
>> Python... at python.org 
>> https://mail.python.org/mailman/listinfo/python-ideas 
>> Code of Conduct: http://python.org/psf/codeofconduct/ 
>>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150202/cad84d57/attachment-0001.html>

From p.f.moore at gmail.com  Mon Feb  2 18:10:12 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Mon, 2 Feb 2015 17:10:12 +0000
Subject: [Python-ideas] Hooks into the IO system to intercept raw file
	reads/writes
In-Reply-To: <CAP7+vJJYS6GW1SEf3Y=ESi9e3zW5usXyWzoX6j1e2ACjbZNghw@mail.gmail.com>
References: <CACac1F8Nm9Sp+Hs1O+TgOJfsDDTRttomUB8DoZWHr7UvEb3E2g@mail.gmail.com>
 <CAP7+vJJYS6GW1SEf3Y=ESi9e3zW5usXyWzoX6j1e2ACjbZNghw@mail.gmail.com>
Message-ID: <CACac1F9SXRT88-WxpiF5j6QsPOzCbu2KGAAWCQSZn3J70BF0xg@mail.gmail.com>

On 2 February 2015 at 16:31, Guido van Rossum <guido at python.org> wrote:
> I'm all for flexible I/O processing, but I worry that the idea brought up
> here feels a little half-baked. First of all, it seems to mention two
> separate cases of subclassing (both io.RawIOBase and subprocess.Popen).
> These days, subclassing(*) is often an anti-pattern: unless done with
> considerable foresight, every detail of the base class implementation
> essentially becomes part of the interface that the subclass relies upon, and
> now the base class becomes too constrained in its evolution. In my
> experience, a well-done API is usually much easier to evolve than even a
> very-well-done base class.
>
> The other thing is that I can't actually imagine the details of your
> proposal. Is the idea that you subclass RawIOBase to implement "tee"
> behavior? Why can't you do that at the receiving end? Is perhaps the
> proposal to assign the base object a work-around for an interface design in
> the Popen class? (I'm sure that class is far from perfect -- but it's also
> super constrained by the need to support Windows process creation.)

The idea is certainly a little half-baked :-( And you're absolutely
right that it's strongly linked to a fight to work around limitations
of subprocess.Popen. The suggestion originally came out of a couple of
things I've been working on, one of which was trying to make a Popen
call that captured the stdout/stderr streams while still displaying
them (as you say, a "tee" type of mechanism).

It's certainly possible to do the "tee" at the receiving end, but
(because of the aforementioned Popen limitations) doing so requires
ignoring the convenience of communicate() and writing your own capture
code. That's not *too* hard using threads, but Popen avoids threads on
Unix, using a select loop instead, and I'm not clear why, or whether
my solution will break in the situations the Popen code is covering
via the select loop. Also, getting corner cases in the capture code
right (around encodings in particular) is something I'd prefer to
leave to subprocess :-) The original issue was for a PR for a project
that works on a lot of platforms I don't have access to, so I may well
have been worrying too much about "not breaking stuff" :-)
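For what it's worth, a minimal sketch of that thread-based capture (my own illustration, not the eventual PR code: it handles only stdout, uses binary pipes, and skips the encoding corner cases mentioned above):

```python
import subprocess
import sys
import threading

def tee_run(cmd):
    """Run cmd, echoing its stdout live while also capturing it."""
    captured = []

    def pump(pipe):
        # Read chunks as the child produces them, writing each one to
        # our own stdout before storing it ("tee" behaviour).
        for line in iter(pipe.readline, b""):
            captured.append(line)
            sys.stdout.buffer.write(line)
        pipe.close()

    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    t = threading.Thread(target=pump, args=(proc.stdout,))
    t.start()
    proc.wait()
    t.join()
    return b"".join(captured)

out = tee_run([sys.executable, "-c", "print('hello')"])
assert out.strip() == b"hello"
```

A second thread (and a second list) would be needed for stderr, which is where the corner cases start to pile up.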

This proposal basically came from a feeling that if only I could "see"
the data as it flows through the buffers of an existing io stream, I
wouldn't have all these problems. Originally I was going to suggest a
"buffer filled" type of callback. With such a hook, though, I was
thinking I could do

p = Popen(..., stdout=PIPE, stderr=PIPE)
# Not sure if these need to be at the Raw IO level or the buffered IO
# level. Should be called every time an OS read happens.
p.stdout.buffer.add_buffer_watcher(lambda buf:
os.write(sys.stdout.fileno(), buf))
p.stderr.buffer.add_buffer_watcher(lambda buf:
os.write(sys.stderr.fileno(), buf))

I guess that's a cleaner proposal, although I pretty much assumed that
the overhead of such a hook being checked for on every buffer read
would be unacceptable. So I came up with a clumsier approach based on
trying to make it so you only paid the cost if you used the feature.
Overall, that was probably a mistake :-(

I hope it's clearer now.

Paul

From guido at python.org  Mon Feb  2 18:47:58 2015
From: guido at python.org (Guido van Rossum)
Date: Mon, 2 Feb 2015 09:47:58 -0800
Subject: [Python-ideas] Hooks into the IO system to intercept raw file
	reads/writes
In-Reply-To: <CACac1F9SXRT88-WxpiF5j6QsPOzCbu2KGAAWCQSZn3J70BF0xg@mail.gmail.com>
References: <CACac1F8Nm9Sp+Hs1O+TgOJfsDDTRttomUB8DoZWHr7UvEb3E2g@mail.gmail.com>
 <CAP7+vJJYS6GW1SEf3Y=ESi9e3zW5usXyWzoX6j1e2ACjbZNghw@mail.gmail.com>
 <CACac1F9SXRT88-WxpiF5j6QsPOzCbu2KGAAWCQSZn3J70BF0xg@mail.gmail.com>
Message-ID: <CAP7+vJLaCDQUgN9u+wkFrkC3zfRrE=WG-78PsL5R4Bw__u3Mtg@mail.gmail.com>

Perhaps you would be better off using the subprocess machinery in asyncio?
It uses the subprocess module to manage the subprocess itself (both for
Windows and for Unix-ish systems) but uses the async I/O machinery from the
asyncio module (which is itself based on select or its better brethren).
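Something along these lines, for instance (a sketch using today's asyncio.run() entry point rather than the explicit event-loop setup that 3.4 requires):

```python
import asyncio
import sys

async def run_and_capture(*cmd):
    # asyncio manages both pipes itself, so neither user-level threads
    # nor a hand-written select loop is needed.
    proc = await asyncio.create_subprocess_exec(
        *cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    out, err = await proc.communicate()
    return out, err

out, err = asyncio.run(run_and_capture(
    sys.executable, "-c",
    "import sys; print('o'); print('e', file=sys.stderr)"))
assert out.strip() == b"o"
assert err.strip() == b"e"
```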

On Mon, Feb 2, 2015 at 9:10 AM, Paul Moore <p.f.moore at gmail.com> wrote:

> On 2 February 2015 at 16:31, Guido van Rossum <guido at python.org> wrote:
> > I'm all for flexible I/O processing, but I worry that the idea brought up
> > here feels a little half-baked. First of all, it seems to mention two
> > separate cases of subclassing (both io.RawIOBase and subprocess.Popen).
> > These days, subclassing(*) is often an anti-pattern: unless done with
> > considerable foresight, every detail of the base class implementation
> > essentially becomes part of the interface that the subclass relies upon,
> and
> > now the base class becomes too constrained in its evolution. In my
> > experience, a well-done API is usually much easier to evolve than even a
> > very-well-done base class.
> >
> > The other thing is that I can't actually imagine the details of your
> > proposal. Is the idea that you subclass RawIOBase to implement "tee"
> > behavior? Why can't you do that at the receiving end? Is perhaps the
> > proposal to assign the base object a work-around for an interface design
> in
> > the Popen class? (I'm sure that class is far from perfect -- but it's
> also
> > super constrained by the need to support Windows process creation.)
>
> The idea is certainly a little half-baked :-( And you're absolutely
> right that it's strongly linked to a fight to work around limitations
> of subprocess.Popen. The suggestion originally came out of a couple of
> things I've been working on, one of which was trying to make a Popen
> call that captured the stdout/stderr streams while still displaying
> them (as you say, a "tee" type of mechanism).
>
> It's certainly possible to do the "tee" at the receiving end, but
> (because of the aforementioned Popen limitations) doing so requires
> ignoring the convenience of communicate() and writing your own capture
> code. That's not *too* hard using threads, but Popen avoids threads on
> Unix, using a select loop instead, and I'm not clear why, or whether
> my solution will break in the situations the Popen code is covering
> via the select loop. Also, getting corner cases in the capture code
> right (around encodings in particular) is something I'd prefer to
> leave to subprocess :-) The original issue was for a PR for a project
> that works on a lot of platforms I don't have access to, so I may well
> have been worrying too much about "not breaking stuff" :-)
>
> This proposal basically came from a feeling that if only I could "see"
> the data as it flows through the buffers of an existing io stream, I
> wouldn't have all these problems. Originally I was going to suggest a
> "buffer filled" type of callback. With such a hook, though, I was
> thinking I could do
>
> p = Popen(..., stdout=PIPE, stderr=PIPE)
> # Not sure if these need to be at the Raw IO level or the buffered IO
> # level. Should be called every time an OS read happens.
> p.stdout.buffer.add_buffer_watcher(lambda buf:
> os.write(sys.stdout.fileno(), buf))
> p.stderr.buffer.add_buffer_watcher(lambda buf:
> os.write(sys.stderr.fileno(), buf))
>
> I guess that's a cleaner proposal, although I pretty much assumed that
> the overhead of such a hook being checked for on every buffer read
> would be unacceptable. So I came up with a clumsier approach based on
> trying to make it so you only paid the cost if you used the feature.
> Overall, that was probably a mistake :-(
>
> I hope it's clearer now.
>
> Paul
>



-- 
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150202/3f7d3b48/attachment.html>

From skip.montanaro at gmail.com  Mon Feb  2 19:00:07 2015
From: skip.montanaro at gmail.com (Skip Montanaro)
Date: Mon, 2 Feb 2015 12:00:07 -0600
Subject: [Python-ideas] More useful slices
In-Reply-To: <5e6328ea-3897-4536-8e84-54185d3a7287@googlegroups.com>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <20150201172712.GO2498@ando.pearwood.info>
 <f828d7ea-3d27-4df8-b237-4fcc630d4f6c@googlegroups.com>
 <5e6328ea-3897-4536-8e84-54185d3a7287@googlegroups.com>
Message-ID: <CANc-5Uy=3g_dH1Nue3D7-NgMQTjSpGcq4ARVnir4Mm-DyT6JDg@mail.gmail.com>

On Mon, Feb 2, 2015 at 9:29 AM, Neil Girdhar <mistersheik at gmail.com> wrote:
> This proposal is definitely *possible*, but is the only argument in
> its favor saving 5 characters?  You already have the shorthand
> exactly when you most need it (indexing).

As I indicated in an earlier response, there is some performance value
to replacing the range function call with this (or other) syntactic
sugar.  While exceedingly rare, there is nothing to prevent a
programmer from redefining the range builtin function or inserting a
different version of range() in the local or global scopes:

    def my_range(*args):
        # completely ignore args, returning something weird...
        return [1, "a", None, 2]

    import __builtin__
    __builtin__.range = my_range

If you have a for loop which uses range():

    for i in range(27):
        do something interesting with i

optimizers like PyPy which aim to be precisely compatible with CPython
semantics must look up "range" every time it occurs and decide if it's
the real builtin range function, in which case it can emit/execute
machine code which is something like the C for loop:

    for (i=start; i<stop; i+=step) {
        do something interesting with i
    }

or not, in which case it has to call whatever range is, then respond
with its list of (arbitrary) Python objects.

Syntactic sugar would lock down the standard behavior.
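A sketch of that lookup cost in action (written with the Python 3 builtins module; the loop body is unchanged, yet rebinding range changes what it does, which is exactly why a CPython-compatible optimizer must guard the name):

```python
import builtins

def loop_sum():
    # "range" is looked up by name on every call, so an optimizer that
    # preserves CPython semantics must check it is still the builtin.
    return sum(i for i in range(5))

assert loop_sum() == 10  # real builtin: 0 + 1 + 2 + 3 + 4

_real_range = builtins.range
builtins.range = lambda *args: [1, 2, 3]  # ignores its args entirely
try:
    assert loop_sum() == 6  # identical source, different behaviour
finally:
    builtins.range = _real_range  # restore the real builtin
```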

Skip

From python at mrabarnett.plus.com  Mon Feb  2 19:03:52 2015
From: python at mrabarnett.plus.com (MRAB)
Date: Mon, 02 Feb 2015 18:03:52 +0000
Subject: [Python-ideas] Infinite in Python (not just in datetime)
In-Reply-To: <CALGmxELhvV7gXmqaN4VyvJyoHDw0crv-x6OvQh3t2+1ODYFbiQ@mail.gmail.com>
References: <54CF32A0.50009@thomas-guettler.de>
 <20150202102948.GR2498@ando.pearwood.info>
 <54CF9757.2010301@thomas-guettler.de>
 <CALGmxELhvV7gXmqaN4VyvJyoHDw0crv-x6OvQh3t2+1ODYFbiQ@mail.gmail.com>
Message-ID: <54CFBC08.8080003@mrabarnett.plus.com>

On 2015-02-02 16:45, Chris Barker wrote:
> On Mon, Feb 2, 2015 at 7:27 AM, Thomas Güttler
> <guettliml at thomas-guettler.de <mailto:guettliml at thomas-guettler.de>> wrote:
>
>         Infinite what?
>
>     Integer
>
> Well, inf is supported in floats because it is supported in the native
> machine double. I suspect that adding inf and -inf (and NaN) to integers
> would end up being a pretty heavy-weight addition.
>
Python 3's int isn't limited to a native machine integer, so it's not
impossible to extend it to include infinity...

> On the other hand, my quicky off the cuff implementation of InfDateTime
> is, in fact, a  universal infinity -- i.e. it compares as greater than
> any other object, not just a datetime object. I did that more because it
> was easy than anything else, but perhaps it could be generally useful to
> have an Inf and -Inf object, kind of like None.
>
> Or it would just create a huge set of confusing corner cases :-)
>
And who knows what it could break!

> I would much rather see an infinite datetime object than a big discussion
> about the implications of a generic infinite object -- it is focused,
> has proven use cases, and wouldn't impact anything beyond datetime.
>


From abarnert at yahoo.com  Mon Feb  2 19:07:28 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Mon, 2 Feb 2015 10:07:28 -0800
Subject: [Python-ideas] Hooks into the IO system to intercept raw file
	reads/writes
In-Reply-To: <CACac1F8Nm9Sp+Hs1O+TgOJfsDDTRttomUB8DoZWHr7UvEb3E2g@mail.gmail.com>
References: <CACac1F8Nm9Sp+Hs1O+TgOJfsDDTRttomUB8DoZWHr7UvEb3E2g@mail.gmail.com>
Message-ID: <EFFB9F8A-34B6-473B-9744-983849551121@yahoo.com>

On Feb 2, 2015, at 6:53, Paul Moore <p.f.moore at gmail.com> wrote:

> There's a lot of flexibility in the new layered IO system, but one
> thing it doesn't allow is any means of adding "hooks" to the data
> flow, or manipulation of an already-created io object.

Why do you need to add hooks to an already-created object? Why not just create the subclassed (and effectively hooked) object in place of the original one? Obviously that requires building the stack of raw/buffered/text manually instead, which requires a few extra lines in subprocess or wherever else you want to do it, but that doesn't seem like a huge burden for something this uncommon.

The advantage is that you don't need to worry about exposing the buffer, making raw read-write, or anything else complicated. The disadvantage is that you can't do this if you've already started reading or writing--but that doesn't seem to apply to this use case, or to most other potential use cases.

As for Guido's concerns about subclassing as an API mechanism: You can easily translate this into a request to replace the os.read and os.write calls used by a raw io object; then, whether you do that externally or in a subclass, you get the same result.

The problem, either way, is that RawIOBase doesn't actually call os.read. Each implementation of the ABC does something different. For FileIO, I'm pretty sure it reads directly at the C level. A socket file calls recv on the socket. And so on. So, how does that affect your proposal?
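For concreteness, here's one possible shape for the delegating variant (HookedRawIO and its hook signature are hypothetical names from this thread, not an existing API; BytesIO merely stands in for a pipe's raw stream):

```python
import io

class HookedRawIO(io.RawIOBase):
    """Delegating wrapper: forwards the RawIOBase interface to another
    raw object and calls a hook with every chunk read through it."""

    def __init__(self, raw, hook):
        self._raw = raw
        self._hook = hook

    def readable(self):
        return self._raw.readable()

    def readinto(self, b):
        n = self._raw.readinto(b)
        if n:
            self._hook(bytes(b[:n]))  # observe the data as it flows past
        return n

    def close(self):
        self._raw.close()
        super().close()

seen = []
# BytesIO supports readinto(), so it stands in for a raw stream here.
raw = io.BytesIO(b"spam eggs")
buffered = io.BufferedReader(HookedRawIO(raw, seen.append))
assert buffered.read() == b"spam eggs"
assert b"".join(seen) == b"spam eggs"
```

The hook sees exactly the chunks the buffer layer pulls from the OS, which sidesteps the "which implementation calls os.read?" question: whatever the wrapped raw object does, the wrapper observes its readinto results.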

> For example, when a subprocess.Popen object uses a pipe for the
> child's stdout, the data is captured instead of writing it to the
> console. Sometimes it would be nice to capture it, but still write to
> the console. That would be easy to do if we could wrap the underlying
> RawIOBase object and intercept read() calls[1]. A subclass of
> RawIOBase can do this trivially, but there's no way of replacing the
> class on an existing stream.
> 
> The obvious approach would be to reassign the "raw" attribute of the
> BufferedIOBase object, but that's readonly. Would it be possible to
> make it read/write?

Making it read/write is a couple of lines of
C (plus some Python code for implementations that use pyio instead of _io). The problem is the buffer. If the raw you're replacing happens to reference the same file descriptor (or, I guess, another fd for the same file with the same file position) it would all work, but that seems to be stretching "consenting adults" freedom a bit.

Also, you still need some way to construct your HookedRawIO subclassed object. Is HookedRawIO a wrapper that provides the RawIOBase interface but delegates to another RawIOBase? Or does it share or take over or dup the fd? Or ...

> Or provide another way of replacing the raw IO
> object underlying an io object?

Somewhere in the bug database, someone (I think Nick Coghlan?) suggested a rewrap method on TextIOWrapper, which gives you a new TextIOWrapper around the same buffer (allowing you to override the other params). If the same idea were extended to the buffer classes, and if it allowed you to also replace the raw or buffer object (so the only thing you're "rewrapping" is the internal state), you could construct a HookedRawIO, then call rewrap on the buffer replacing its raw, then call rewrap on the text replacing its buffer, then set the stdout attribute of the popen. But I'm not sure that's any cleaner.

> I'm sure there are buffer integrity issues to work out, but are there
> any more fundamental problems with this approach?
> 
> Paul
> 
> [1] Actually, it's *not* that easy, because subprocess.Popen objects
> are insanely hard to subclass - there are no hooks into the pipe
> creation process, and no way to intercept the object before the
> subprocess gets run (that happens in the __init__ method). But that's
> a separate issue, and also the subject of a different thread here.
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/

From abarnert at yahoo.com  Mon Feb  2 19:10:32 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Mon, 2 Feb 2015 10:10:32 -0800
Subject: [Python-ideas] Infinite in Python (not just in datetime)
In-Reply-To: <54CFBC08.8080003@mrabarnett.plus.com>
References: <54CF32A0.50009@thomas-guettler.de>
 <20150202102948.GR2498@ando.pearwood.info>
 <54CF9757.2010301@thomas-guettler.de>
 <CALGmxELhvV7gXmqaN4VyvJyoHDw0crv-x6OvQh3t2+1ODYFbiQ@mail.gmail.com>
 <54CFBC08.8080003@mrabarnett.plus.com>
Message-ID: <9D160595-E4A6-4568-9B8A-710C3FA6C3D4@yahoo.com>

On Feb 2, 2015, at 10:03, MRAB <python at mrabarnett.plus.com> wrote:

> On 2015-02-02 16:45, Chris Barker wrote:
>> On Mon, Feb 2, 2015 at 7:27 AM, Thomas Güttler
>> <guettliml at thomas-guettler.de <mailto:guettliml at thomas-guettler.de>> wrote:
>> 
>>        Infinite what?
>> 
>>    Integer
>> 
>> Well, inf is supported in floats because it is supported in the native
>> machine double. I suspect that adding inf and -inf (and NaN) to integers
>> would end up being a pretty heavy-weight addition.
> Python 3's int isn't limited to a native machine integer, so it's not
> impossible to extend it to include infinity...

I doubt it would be a serious performance problem if done right. The big question is how it interacts with other types. (Those confusing corner cases already brought up...)

>> On the other hand, my quicky off the cuff implementation of InfDateTime
>> is, in fact, a  universal infinity -- i.e. it compares as greater than
>> any other object, not just a datetime object.

Including float('inf') and float('nan')? That doesn't seem like a selling point...

>> I did that more because it
>> was easy than anything else, but perhaps it could be generally useful to
>> have an Inf and -Inf object, kind of like None.
>> 
>> Or it would just create a huge set of confusing corner cases :-)
> And who knows what it could break!
> 
>> I would much rather see an infinite datetime object than a big discussion
>> about the implications of a generic infinite object -- it is focused,
>> has proven use cases, and wouldn't impact anything beyond datetime.
> 
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/

From p.f.moore at gmail.com  Mon Feb  2 19:19:12 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Mon, 2 Feb 2015 18:19:12 +0000
Subject: [Python-ideas] Hooks into the IO system to intercept raw file
	reads/writes
In-Reply-To: <EFFB9F8A-34B6-473B-9744-983849551121@yahoo.com>
References: <CACac1F8Nm9Sp+Hs1O+TgOJfsDDTRttomUB8DoZWHr7UvEb3E2g@mail.gmail.com>
 <EFFB9F8A-34B6-473B-9744-983849551121@yahoo.com>
Message-ID: <CACac1F9CeEKiM7dsT4kscZ-SOd9nG=GsjJm4TjrPRBxG-WQv1w@mail.gmail.com>

On 2 February 2015 at 18:07, Andrew Barnert <abarnert at yahoo.com> wrote:
> On Feb 2, 2015, at 6:53, Paul Moore <p.f.moore at gmail.com> wrote:
>
>> There's a lot of flexibility in the new layered IO system, but one
>> thing it doesn't allow is any means of adding "hooks" to the data
>> flow, or manipulation of an already-created io object.
>
> Why do you need to add hooks to an already-created object? Why not just create the subclassed (and effectively hooked) object in place of the original one? Obviously that requires building the stack of raw/buffered/text manually instead, which requires a few extra lines in subprocess or wherever else you want to do it, but that doesn't seem like a huge burden for something this uncommon.

Largely because the IO object creation is hidden in the bowels of
subprocess, to be honest. There's no doubt that the underlying problem
is that subprocess is hostile to any form of subclassing or
modification. And given that I have this problem in existing code,
anything proposed here (which will be for 3.5 or maybe 3.6 at best)
isn't going to be a solution for my immediate problem.

My intention here was that, having had this sort of issue a few times
(all variations on "I wish I could see when the IO object requests
data from the OS, and do something at that point with the data the OS
supplied"), I thought there might be a good case for a general facility
to do this. The feature was also available in the old AT&T "sfio"
library, which was a more flexible replacement for C's stdio, so it's
not a new idea.

Paul

From edk141 at gmail.com  Mon Feb  2 19:22:07 2015
From: edk141 at gmail.com (Ed Kellett)
Date: Mon, 02 Feb 2015 18:22:07 +0000
Subject: [Python-ideas] Infinite in Python (not just in datetime)
References: <54CF32A0.50009@thomas-guettler.de>
 <20150202102948.GR2498@ando.pearwood.info>
 <54CF9757.2010301@thomas-guettler.de>
 <CALGmxELhvV7gXmqaN4VyvJyoHDw0crv-x6OvQh3t2+1ODYFbiQ@mail.gmail.com>
Message-ID: <CABmzr0ge8dJEMejHjr43ifnXL9B8D3spBaPqhtkPv_kf5cU2sA@mail.gmail.com>

On Mon Feb 02 2015 at 4:52:49 PM Chris Barker <chris.barker at noaa.gov> wrote:

> On Mon, Feb 2, 2015 at 7:27 AM, Thomas Güttler <
> guettliml at thomas-guettler.de> wrote:
>
>> Infinite what?
>>>
>>
>> Integer
>>
>
> Well, inf is supported in floats because it is supported in the native
> machine double.
>

There's also the appeal to mathematics: integers and infinities are
mutually exclusive.

By contrast, floats aren't numbers (by most definitions) to begin with, so
there's no axiomatic basis to worry about breaking.

Ed Kellett
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150202/96f06129/attachment.html>

From p.f.moore at gmail.com  Mon Feb  2 19:28:58 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Mon, 2 Feb 2015 18:28:58 +0000
Subject: [Python-ideas] Hooks into the IO system to intercept raw file
	reads/writes
In-Reply-To: <CAP7+vJLaCDQUgN9u+wkFrkC3zfRrE=WG-78PsL5R4Bw__u3Mtg@mail.gmail.com>
References: <CACac1F8Nm9Sp+Hs1O+TgOJfsDDTRttomUB8DoZWHr7UvEb3E2g@mail.gmail.com>
 <CAP7+vJJYS6GW1SEf3Y=ESi9e3zW5usXyWzoX6j1e2ACjbZNghw@mail.gmail.com>
 <CACac1F9SXRT88-WxpiF5j6QsPOzCbu2KGAAWCQSZn3J70BF0xg@mail.gmail.com>
 <CAP7+vJLaCDQUgN9u+wkFrkC3zfRrE=WG-78PsL5R4Bw__u3Mtg@mail.gmail.com>
Message-ID: <CACac1F-ZZzBK4aO=qmD=o4zyNAZhy2bHNcOTuifTdaL7jdvKeQ@mail.gmail.com>

On 2 February 2015 at 17:47, Guido van Rossum <guido at python.org> wrote:
> Perhaps you would be better off using the subprocess machinery in asyncio?
> It uses the subprocess module to manage the subprocess itself (both for
> Windows and for Unix-ish systems) but uses the async I/O machinery from the
> asyncio module (which is itself based on select or its better brethren).

Quite possibly. I found the asyncio docs a bit of a struggle, TBH. Is
there a tutorial? The basic idea is something along the lines of
https://docs.python.org/3.4/library/asyncio-subprocess.html#subprocess-using-streams
but I don't see how I'd modify that code (specifically the "yield from
proc.stdout.readline()" bit) to get output from either stdout or
stderr, whichever was ready first (and handle the 2 cases
differently).
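A sketch of the pattern in question (one reader coroutine per stream, so lines are handled as soon as either stream produces one and the two cases stay distinguishable; asyncio.run() is the modern entry point, newer than the 3.4 docs referenced above):

```python
import asyncio
import sys

async def pump(stream, tag, sink):
    # Handle each line as soon as this stream produces it.
    while True:
        line = await stream.readline()
        if not line:
            break
        sink.append((tag, line.strip()))

async def read_both(cmd):
    proc = await asyncio.create_subprocess_exec(
        *cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    lines = []
    # Run one reader per stream concurrently; whichever stream is
    # ready first is serviced first, and the tags keep the cases apart.
    await asyncio.gather(pump(proc.stdout, "out", lines),
                         pump(proc.stderr, "err", lines))
    await proc.wait()
    return lines

lines = asyncio.run(read_both([sys.executable, "-c",
    "import sys; print('a'); print('b', file=sys.stderr)"]))
assert ("out", b"a") in lines
assert ("err", b"b") in lines
```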

I don't know whether event-based behaviour makes more sense to me than
the coroutine-based approach used by the asyncio documentation. OTOH,
the alternative protocol-based approach didn't seem to have an event
for data received on stderr, just the one for stdout. So maybe it's
not just the approach that's confusing me.

Paul

From breamoreboy at yahoo.co.uk  Mon Feb  2 19:38:30 2015
From: breamoreboy at yahoo.co.uk (Mark Lawrence)
Date: Mon, 02 Feb 2015 18:38:30 +0000
Subject: [Python-ideas] More useful slices
In-Reply-To: <CANc-5Uy=3g_dH1Nue3D7-NgMQTjSpGcq4ARVnir4Mm-DyT6JDg@mail.gmail.com>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <20150201172712.GO2498@ando.pearwood.info>
 <f828d7ea-3d27-4df8-b237-4fcc630d4f6c@googlegroups.com>
 <5e6328ea-3897-4536-8e84-54185d3a7287@googlegroups.com>
 <CANc-5Uy=3g_dH1Nue3D7-NgMQTjSpGcq4ARVnir4Mm-DyT6JDg@mail.gmail.com>
Message-ID: <maog76$rl6$2@ger.gmane.org>

On 02/02/2015 18:00, Skip Montanaro wrote:
> On Mon, Feb 2, 2015 at 9:29 AM, Neil Girdhar <mistersheik at gmail.com> wrote:
>> This proposal is definitely *possible*, but is the only argument in
>> its favor saving 5 characters?  You already have the shorthand
>> exactly when you most need it (indexing).
>
> As I indicated in an earlier response, there is some performance value
> to replacing the range function call with this (or other) syntactic
> sugar.  While exceedingly rare, there is nothing to prevent a
> programmer from redefining the range builtin function or inserting a
> different version of range() in the local or global scopes:
>
>      def my_range(*args):
>          # completely ignore args, returning something weird...
>          return [1, "a", None, 2]
>
>      import __builtin__
>      __builtin__.range = my_range
>
> If you have a for loop which uses range():
>
>      for i in range(27):
>          do something interesting with i
>
> optimizers like PyPy which aim to be precisely compatible with CPython
> semantics must look up "range" every time it occurs and decide if it's
> the real builtin range function, in which case it can emit/execute
> machine code which is something like the C for loop:
>
>      for (i=start; i<stop; i+=step) {
>          do something interesting with i
>      }
>
> or not, in which case it has to call whatever range is, then respond
> with its list of (arbitrary) Python objects.
>
> Syntactic sugar would lock down the standard behavior.
>
> Skip

You (plural) can debate this until the cows come home, but as the BDFL 
rejected it a couple of hours ago it strikes me as an all round waste of 
time and bandwidth.

-- 
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence


From abarnert at yahoo.com  Mon Feb  2 19:51:54 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Mon, 2 Feb 2015 10:51:54 -0800
Subject: [Python-ideas] Infinite in Python (not just in datetime)
In-Reply-To: <CABmzr0ge8dJEMejHjr43ifnXL9B8D3spBaPqhtkPv_kf5cU2sA@mail.gmail.com>
References: <54CF32A0.50009@thomas-guettler.de>
 <20150202102948.GR2498@ando.pearwood.info>
 <54CF9757.2010301@thomas-guettler.de>
 <CALGmxELhvV7gXmqaN4VyvJyoHDw0crv-x6OvQh3t2+1ODYFbiQ@mail.gmail.com>
 <CABmzr0ge8dJEMejHjr43ifnXL9B8D3spBaPqhtkPv_kf5cU2sA@mail.gmail.com>
Message-ID: <48A100E2-6A42-4184-93F5-E0A0F9024CDD@yahoo.com>

On Feb 2, 2015, at 10:22, Ed Kellett <edk141 at gmail.com> wrote:

> On Mon Feb 02 2015 at 4:52:49 PM Chris Barker <chris.barker at noaa.gov> wrote:
>> On Mon, Feb 2, 2015 at 7:27 AM, Thomas Güttler <guettliml at thomas-guettler.de> wrote:
>>>> Infinite what?
>>> 
>>> Integer
>> 
>> Well, inf is supported in floats because it is supported in the native machine double.
> 
> There's also the appeal to mathematics: integers and infinities are mutually exclusive.

An affinely-extended integer set is just as easy to define as an affinely-extended real set, and it preserves the laws of integer arithmetic with the obvious extensions to undefined results (e.g., a+(b+c) == (a+b)+c or both are undefined). In particular, undefined results can be safely modeled with either a quiet nan or with exceptions.

(Also, Python integers don't actually quite model the integers, because for large enough values you get a memory error.)
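As a toy illustration (a sketch only, not a proposal for int itself), a single positive infinity with the obvious extensions:

```python
class PosInf:
    """Toy positive infinity for an affinely-extended integer set."""

    def __eq__(self, other): return isinstance(other, PosInf)
    def __gt__(self, other): return not isinstance(other, PosInf)
    def __lt__(self, other): return False
    def __ge__(self, other): return True
    def __le__(self, other): return isinstance(other, PosInf)

    def __add__(self, other):
        # inf + finite == inf, and inf + inf == inf; inf - inf would be
        # the undefined case and would have to raise instead.
        if isinstance(other, (int, PosInf)):
            return self
        return NotImplemented
    __radd__ = __add__

    def __repr__(self): return "inf"

INF = PosInf()
assert INF > 10**100          # bigger than any int, however large
assert 5 + INF == INF         # absorbing under addition
assert max(3, INF, 7) == INF  # ordering works with builtins too
```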

> By contrast, floats aren't numbers (by most definitions) to begin with, so there's no axiomatic basis to worry about breaking.

I suppose it depends how you define "number". It's true that they don't model the laws of arithmetic properly, but they do have a well-defined model (well, each implementation has a different one, but most people only care about the IEEE double/binary64 implementation) that's complete in analogs of the same operations as the reals, even if those analogs aren't actually the same operations.

But, more importantly, the floats with inf and -inf model the affinely extended real numbers the same way the floats without infinities model the reals, so there is definitely an axiomatic basis to the design, not just some haphazard choice Intel made for no good reason.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150202/159b510b/attachment.html>

From skip.montanaro at gmail.com  Mon Feb  2 20:20:30 2015
From: skip.montanaro at gmail.com (Skip Montanaro)
Date: Mon, 2 Feb 2015 13:20:30 -0600
Subject: [Python-ideas] More useful slices
In-Reply-To: <CALGmxELEW15+VEXC3wd+KbZ6_zbFkZbd=LVJ2jrYwJ9ZtvNurw@mail.gmail.com>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <20150201172712.GO2498@ando.pearwood.info>
 <f828d7ea-3d27-4df8-b237-4fcc630d4f6c@googlegroups.com>
 <5e6328ea-3897-4536-8e84-54185d3a7287@googlegroups.com>
 <CANc-5Uy=3g_dH1Nue3D7-NgMQTjSpGcq4ARVnir4Mm-DyT6JDg@mail.gmail.com>
 <CALGmxELEW15+VEXC3wd+KbZ6_zbFkZbd=LVJ2jrYwJ9ZtvNurw@mail.gmail.com>
Message-ID: <CANc-5Uy=9N+jbQVb579wyZ_x8WfiFDzXdRvXPS9DRDp4378FdQ@mail.gmail.com>

On Mon, Feb 2, 2015 at 12:46 PM, Chris Barker <chris.barker at noaa.gov> wrote:

> On Mon, Feb 2, 2015 at 10:00 AM, Skip Montanaro <skip.montanaro at gmail.com>
> wrote:
>
>> As I indicated in an earlier response, there is some performance value
>> to replacing the range function call with this (or other) syntactic
>> sugar.
>
>
> It's my impression that adding stuff like this primarily for performance
> is a no-no on the list ;-)
>

I wasn't arguing for its inclusion, just explaining that there is
potentially more than a few characters of typing to be saved. (Someone
asked if that was the only benefit of the proposal.)

FWIW, at least some compilers (Shed Skin, I believe, perhaps Cython and/or
Pyrex as well) already treat range() as a builtin constant.

Skip

From python at 2sn.net  Mon Feb  2 20:24:44 2015
From: python at 2sn.net (Alexander Heger)
Date: Tue, 3 Feb 2015 06:24:44 +1100
Subject: [Python-ideas] change list() - branch from "More useful slices"
Message-ID: <CAN3CYHwdu-6Qjsvscs1okpuAZ5qNjZ9Vuf6UjizybTUcDpu-+A@mail.gmail.com>

On 3 February 2015 at 01:44, Rob Cliffe <rob.cliffe at btinternet.com> wrote:
>
> On 02/02/2015 12:48, João Santos wrote:
>>
>> L = ['foo', 'bar', 'baz'] is syntactic sugar, you can write L =
>> list('foo', 'bar', 'baz')
>
> No you can't (at least not in Python 2.7.3).  (A good advertisement for
> testing your code before publishing it. )  It has to be L = list(('foo',
> 'bar', 'baz')) which is using a tuple literal.

Well, if something useful comes out of this discussion, maybe it would
be an idea to extend list to allow the former syntax naively assumed,

list('foo', 'bar', 'baz')

similar to what exists for dict():

dict(a=1, b=2, c=3)

though there would be excessive ambiguity for a single argument that
is an iterable, like

list('abc')

which gives ['a','b','c'] not ['abc']

and hence a source of many mistakes...

Well, just a thought, maybe not a good one. I am sure this must have
been discussed in the past, and rejected.
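For the record, the ambiguity is easy to see with the current constructor (a quick sketch of today's behaviour):

```python
# Current behaviour: the list() constructor takes a single iterable.
assert list('abc') == ['a', 'b', 'c']   # iterates over the string
assert list(('abc',)) == ['abc']        # 1-tuple -> 1-element list

# The proposed list('foo', 'bar', 'baz') would mean ['foo', 'bar', 'baz'],
# but list('abc') would still have to mean ['a', 'b', 'c'] for backward
# compatibility -- which is exactly the source of mistakes noted above.
```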

-Alexander

From random832 at fastmail.us  Mon Feb  2 20:32:31 2015
From: random832 at fastmail.us (random832 at fastmail.us)
Date: Mon, 02 Feb 2015 14:32:31 -0500
Subject: [Python-ideas] Infinite in Python (not just in datetime)
In-Reply-To: <54CF9757.2010301@thomas-guettler.de>
References: <54CF32A0.50009@thomas-guettler.de>
 <20150202102948.GR2498@ando.pearwood.info>
 <54CF9757.2010301@thomas-guettler.de>
Message-ID: <1422905551.3376353.222120529.4DBF96CB@webmail.messagingengine.com>

On Mon, Feb 2, 2015, at 10:27, Thomas Güttler wrote:
> > Infinite what?
> 
> Integer
> 
> A infinite timedelta could be created like this:
> 
> datetime.timedelta(days=infinite)
> 
>    Thomas Güttler

You'd still have to decide how to represent it, since timedelta doesn't
internally use PyLongObject. (I've asked why these types don't support
the unbounded range of integers before, and was told "YAGNI".)
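The bounded internal representation shows up directly in the limits timedelta exposes (a quick illustration):

```python
from datetime import timedelta

# timedelta stores days/seconds/microseconds in bounded fields, so it
# has hard limits instead of supporting arbitrary-precision integers.
print(timedelta.max)   # 999999999 days, 23:59:59.999999
print(timedelta.min)   # -999999999 days, 0:00:00

# Values beyond the limit raise OverflowError rather than growing:
try:
    timedelta(days=10**9)
except OverflowError:
    print("days=10**9 overflows")
```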

From chris.barker at noaa.gov  Mon Feb  2 19:46:14 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Mon, 2 Feb 2015 10:46:14 -0800
Subject: [Python-ideas] More useful slices
In-Reply-To: <CANc-5Uy=3g_dH1Nue3D7-NgMQTjSpGcq4ARVnir4Mm-DyT6JDg@mail.gmail.com>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <20150201172712.GO2498@ando.pearwood.info>
 <f828d7ea-3d27-4df8-b237-4fcc630d4f6c@googlegroups.com>
 <5e6328ea-3897-4536-8e84-54185d3a7287@googlegroups.com>
 <CANc-5Uy=3g_dH1Nue3D7-NgMQTjSpGcq4ARVnir4Mm-DyT6JDg@mail.gmail.com>
Message-ID: <CALGmxELEW15+VEXC3wd+KbZ6_zbFkZbd=LVJ2jrYwJ9ZtvNurw@mail.gmail.com>

On Mon, Feb 2, 2015 at 10:00 AM, Skip Montanaro <skip.montanaro at gmail.com>
wrote:

> As I indicated in an earlier response, there is some performance value
> to replacing the range function call with this (or other) syntactic
> sugar.


It's my impression that adding stuff like this primarily for performance is
a no-no on the list ;-)

But you could get the same effect by making range a keyword -- how often
does anyone re-define it?

And if you allow variables:

range(i,j,k)  or (i:j:k)

Then any optimizer needs to do run-time type checking, or type inference,
or what have you anyway, so checking if the name range is the range object
seems like a small issue.

FWIW, Cython does turn range calls into simple C loops:

for i in range(10):

becomes:

for (__pyx_t_1 = 0; __pyx_t_1 < 10; __pyx_t_1+=1) {

...
}

In that case, the Cython "language" doesn't allow re-defining "range".

And it will do this if either literals or variables that have been
typedef'd as integer types are passed in as arguments.

What difference this really makes to performance I have no idea -- you'd
have to have something pretty trivial in that loop to notice it.
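A rough way to check that claim is to time a trivial loop against one with even minimal work in the body (a sketch; absolute numbers depend entirely on the machine):

```python
import timeit

# An essentially empty loop versus one doing a tiny amount of work.
empty = timeit.timeit('for i in range(1000): pass', number=1000)
work = timeit.timeit('for i in range(1000): x = i * i', number=1000)
print('empty loop:', empty)
print('with work :', work)
# Once the body does anything real, the cost of the range() call and
# name lookup becomes a small fraction of the total.
```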

-Chris





> While exceedingly rare, there is nothing to prevent a
> programmer from redefining the range builtin function or inserting a
> different version of range() in the local or global scopes:
>
>     def my_range(*args):
>         # completely ignore args, returning something weird...
>         return [1, "a", None, 2]
>
>     import __builtin__
>     __builtin__.range = my_range
>
> If you have a for loop which uses range():
>
>     for i in range(27):
>         do something interesting with i
>
> optimizers like PyPy which aim to be precisely compatible with CPython
> semantics must look up "range" every time it occurs and decide if it's
> the real builtin range function, in which case it can emit/execute
> machine code which is something like the C for loop:
>
>     for (i=start; i<stop; i+=step) {
>         do something interesting with i
>     }
>
> or not, in which case it has to call whatever range is, then respond
> with its list of (arbitrary) Python objects.
>
> Syntactic sugar would lock down the standard behavior.
>
> Skip
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>



-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

From p.f.moore at gmail.com  Mon Feb  2 20:43:28 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Mon, 2 Feb 2015 19:43:28 +0000
Subject: [Python-ideas] change list() - branch from "More useful slices"
In-Reply-To: <CAN3CYHwdu-6Qjsvscs1okpuAZ5qNjZ9Vuf6UjizybTUcDpu-+A@mail.gmail.com>
References: <CAN3CYHwdu-6Qjsvscs1okpuAZ5qNjZ9Vuf6UjizybTUcDpu-+A@mail.gmail.com>
Message-ID: <CACac1F9cEhoKRboRYm=V3LV=FudfBba6ALvO2X94+i3UNFdSEw@mail.gmail.com>

On 2 February 2015 at 19:24, Alexander Heger <python at 2sn.net> wrote:
> Well, if something useful comes out of this discussion, maybe it would
> be an idea to extend list to allow the former syntax naively assumed,
>
> list('foo', 'bar', 'baz')

So there would be a discontinuity - list('foo') would have to be ['f',
'o', 'o'] for backward compatibility. And that makes list(*args)
problematic, as it behaves differently if len(args)==1.
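A hypothetical wrapper (make_list is made up here, purely to illustrate the proposed semantics) shows the len(args)==1 special case:

```python
def make_list(*args):
    # Hypothetical semantics: several arguments are the elements,
    # but a single argument keeps today's iterate-over-it behaviour.
    if len(args) == 1:
        return list(args[0])
    return list(args)

print(make_list('foo', 'bar'))   # ['foo', 'bar']
print(make_list('foo'))          # ['f', 'o', 'o'] -- the surprise case
```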

Paul

From thomas at kluyver.me.uk  Mon Feb  2 20:47:28 2015
From: thomas at kluyver.me.uk (Thomas Kluyver)
Date: Mon, 2 Feb 2015 11:47:28 -0800
Subject: [Python-ideas] change list() - branch from "More useful slices"
In-Reply-To: <CACac1F9cEhoKRboRYm=V3LV=FudfBba6ALvO2X94+i3UNFdSEw@mail.gmail.com>
References: <CAN3CYHwdu-6Qjsvscs1okpuAZ5qNjZ9Vuf6UjizybTUcDpu-+A@mail.gmail.com>
 <CACac1F9cEhoKRboRYm=V3LV=FudfBba6ALvO2X94+i3UNFdSEw@mail.gmail.com>
Message-ID: <CAOvn4qjzwSwx7QTAyCLkM4=uQSbFum9WgmUeUmTZ_G8hH3zuZQ@mail.gmail.com>

On 2 February 2015 at 11:43, Paul Moore <p.f.moore at gmail.com> wrote:

> So there would be a discontinuity - list('foo') would have to be ['f',
> 'o', 'o'] for backward compatibility. And that makes list(*args)
> problematic, as it behaves differently if len(args)==1.
>

max() and min() do behave like this - they accept either a single iterable,
or *args - so it's not unprecedented. But I don't see any benefit to adding
it to the list() constructor.
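The dual signature is easy to verify interactively:

```python
# max() and min() accept either a single iterable or several arguments.
assert max([3, 1, 2]) == 3   # one iterable argument
assert max(3, 1, 2) == 3     # *args form
assert min([3]) == 3         # a 1-element sequence works...

# ...but a single non-iterable argument is an error:
try:
    max(3)
except TypeError:
    print("max(3) raises TypeError")
```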

Thomas

From rymg19 at gmail.com  Mon Feb  2 22:19:19 2015
From: rymg19 at gmail.com (Ryan Gonzalez)
Date: Mon, 2 Feb 2015 15:19:19 -0600
Subject: [Python-ideas] change list() - branch from "More useful slices"
In-Reply-To: <CAOvn4qjzwSwx7QTAyCLkM4=uQSbFum9WgmUeUmTZ_G8hH3zuZQ@mail.gmail.com>
References: <CAN3CYHwdu-6Qjsvscs1okpuAZ5qNjZ9Vuf6UjizybTUcDpu-+A@mail.gmail.com>
 <CACac1F9cEhoKRboRYm=V3LV=FudfBba6ALvO2X94+i3UNFdSEw@mail.gmail.com>
 <CAOvn4qjzwSwx7QTAyCLkM4=uQSbFum9WgmUeUmTZ_G8hH3zuZQ@mail.gmail.com>
Message-ID: <CAO41-mPv82X8_rp+pVhat1E8aZrsO3xFUkfqb61hGOq0yafpyA@mail.gmail.com>

On Mon, Feb 2, 2015 at 1:47 PM, Thomas Kluyver <thomas at kluyver.me.uk> wrote:

> On 2 February 2015 at 11:43, Paul Moore <p.f.moore at gmail.com> wrote:
>
>> So there would be a discontinuity - list('foo') would have to be ['f',
>> 'o', 'o'] for backward compatibility. And that makes list(*args)
>> problematic, as it behaves differently if len(args)==1.
>>
>
> max() and min() do behave like this - they accept either a single
> iterable, or *args - so it's not unprecedented. But I don't see any benefit
> to adding it to the list() constructor.
>

Because the maximum/minimum of a single value makes absolutely no sense,
but a single-element list does.


>
>
> Thomas
>
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>



-- 
Ryan
If anybody ever asks me why I prefer C++ to C, my answer will be simple:
"It's becauseslejfp23(@#Q*(E*EIdc-SEGFAULT. Wait, I don't think that was
nul-terminated."
Personal reality distortion fields are immune to contradictory evidence. -
srean
Check out my website: http://kirbyfan64.github.io/

From eltoder at gmail.com  Mon Feb  2 23:03:37 2015
From: eltoder at gmail.com (Eugene Toder)
Date: Mon, 2 Feb 2015 17:03:37 -0500
Subject: [Python-ideas] change list() - branch from "More useful slices"
In-Reply-To: <CAO41-mPv82X8_rp+pVhat1E8aZrsO3xFUkfqb61hGOq0yafpyA@mail.gmail.com>
References: <CAN3CYHwdu-6Qjsvscs1okpuAZ5qNjZ9Vuf6UjizybTUcDpu-+A@mail.gmail.com>
 <CACac1F9cEhoKRboRYm=V3LV=FudfBba6ALvO2X94+i3UNFdSEw@mail.gmail.com>
 <CAOvn4qjzwSwx7QTAyCLkM4=uQSbFum9WgmUeUmTZ_G8hH3zuZQ@mail.gmail.com>
 <CAO41-mPv82X8_rp+pVhat1E8aZrsO3xFUkfqb61hGOq0yafpyA@mail.gmail.com>
Message-ID: <CA+KNMz=JoSSVrPtk6pSHASweYn202dLcP57M0AH3x1RWzz0V7Q@mail.gmail.com>

On Mon, Feb 2, 2015 at 4:19 PM, Ryan Gonzalez <rymg19 at gmail.com> wrote:

> On Mon, Feb 2, 2015 at 1:47 PM, Thomas Kluyver <thomas at kluyver.me.uk>
> wrote:
>
>> max() and min() do behave like this - they accept either a single
>> iterable, or *args - so it's not unprecedented. But I don't see any benefit
>> to adding it to the list() constructor.
>>
>
> Because the maximum/minimum of a single value makes absolutely no sense,
> but a single-element list does.
>
Arguably, min and max of a single value is that same value. This is what
min() and max() return when you pass them a 1-element sequence.

Eugene

From g.brandl at gmx.net  Mon Feb  2 23:21:00 2015
From: g.brandl at gmx.net (Georg Brandl)
Date: Mon, 02 Feb 2015 23:21:00 +0100
Subject: [Python-ideas] change list() - branch from "More useful slices"
In-Reply-To: <CA+KNMz=JoSSVrPtk6pSHASweYn202dLcP57M0AH3x1RWzz0V7Q@mail.gmail.com>
References: <CAN3CYHwdu-6Qjsvscs1okpuAZ5qNjZ9Vuf6UjizybTUcDpu-+A@mail.gmail.com>
 <CACac1F9cEhoKRboRYm=V3LV=FudfBba6ALvO2X94+i3UNFdSEw@mail.gmail.com>
 <CAOvn4qjzwSwx7QTAyCLkM4=uQSbFum9WgmUeUmTZ_G8hH3zuZQ@mail.gmail.com>
 <CAO41-mPv82X8_rp+pVhat1E8aZrsO3xFUkfqb61hGOq0yafpyA@mail.gmail.com>
 <CA+KNMz=JoSSVrPtk6pSHASweYn202dLcP57M0AH3x1RWzz0V7Q@mail.gmail.com>
Message-ID: <maot8c$o5d$1@ger.gmane.org>

On 02/02/2015 11:03 PM, Eugene Toder wrote:
> On Mon, Feb 2, 2015 at 4:19 PM, Ryan Gonzalez
> <rymg19 at gmail.com
> <mailto:rymg19 at gmail.com>> wrote:
> 
>     On Mon, Feb 2, 2015 at 1:47 PM, Thomas Kluyver
>     <thomas at kluyver.me.uk
>     <mailto:thomas at kluyver.me.uk>> wrote:
> 
>         max() and min() do behave like this - they accept either a single
>         iterable, or *args - so it's not unprecedented. But I don't see any
>         benefit to adding it to the list() constructor.
> 
> 
>     Because the maximum/minimum of a single value makes absolutely no sense, but
>     a single-element list does.
> 
> Arguably, min and max of a single value is that same value. This is what min()
> and max() return when you pass them a 1-element sequence.

Yes, but that's beside the point.  The point is that for max and min
you never need to write "min(a)" or "max(a)" where a is not a
sequence -- since it's the same as just "a".

Georg


From rob.cliffe at btinternet.com  Mon Feb  2 23:25:49 2015
From: rob.cliffe at btinternet.com (Rob Cliffe)
Date: Mon, 02 Feb 2015 22:25:49 +0000
Subject: [Python-ideas] change list() - branch from "More useful slices"
In-Reply-To: <maot8c$o5d$1@ger.gmane.org>
References: <CAN3CYHwdu-6Qjsvscs1okpuAZ5qNjZ9Vuf6UjizybTUcDpu-+A@mail.gmail.com>
 <CACac1F9cEhoKRboRYm=V3LV=FudfBba6ALvO2X94+i3UNFdSEw@mail.gmail.com>
 <CAOvn4qjzwSwx7QTAyCLkM4=uQSbFum9WgmUeUmTZ_G8hH3zuZQ@mail.gmail.com>
 <CAO41-mPv82X8_rp+pVhat1E8aZrsO3xFUkfqb61hGOq0yafpyA@mail.gmail.com>
 <CA+KNMz=JoSSVrPtk6pSHASweYn202dLcP57M0AH3x1RWzz0V7Q@mail.gmail.com>
 <maot8c$o5d$1@ger.gmane.org>
Message-ID: <54CFF96D.50500@btinternet.com>


On 02/02/2015 22:21, Georg Brandl wrote:
> On 02/02/2015 11:03 PM, Eugene Toder wrote:
>> On Mon, Feb 2, 2015 at 4:19 PM, Ryan Gonzalez
>> <rymg19 at gmail.com
>> <mailto:rymg19 at gmail.com>> wrote:
>>
>>      On Mon, Feb 2, 2015 at 1:47 PM, Thomas Kluyver
>>      <thomas at kluyver.me.uk
>>      <mailto:thomas at kluyver.me.uk>> wrote:
>>
>>          max() and min() do behave like this - they accept either a single
>>          iterable, or *args - so it's not unprecedented. But I don't see any
>>          benefit to adding it to the list() constructor.
>>
>>
>>      Because the maximum/minimum of a single value makes absolutely no sense, but
>>      a single-element list does.
>>
>> Arguably, min and max of a single value is that same value. This is what min()
>> and max() return when you pass them a 1-element sequence.
> Yes, but that's beside the point.  The point is that for max and min
> you never need to write "min(a)" or "max(a)" where a is not a
> sequence -- since it's the same as just "a".
>
> Georg
>
>
In fact,
     min(3,4) == min((3,4)) ==  min((3,)) == 3
but
     min(3) raises an error
whereas attempting to create a single-item list should not be an error.
Rob Cliffe


From greg.ewing at canterbury.ac.nz  Mon Feb  2 23:25:54 2015
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Tue, 03 Feb 2015 11:25:54 +1300
Subject: [Python-ideas] More useful slices
In-Reply-To: <CANc-5Uy=3g_dH1Nue3D7-NgMQTjSpGcq4ARVnir4Mm-DyT6JDg@mail.gmail.com>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <20150201172712.GO2498@ando.pearwood.info>
 <f828d7ea-3d27-4df8-b237-4fcc630d4f6c@googlegroups.com>
 <5e6328ea-3897-4536-8e84-54185d3a7287@googlegroups.com>
 <CANc-5Uy=3g_dH1Nue3D7-NgMQTjSpGcq4ARVnir4Mm-DyT6JDg@mail.gmail.com>
Message-ID: <54CFF972.7090405@canterbury.ac.nz>

Skip Montanaro wrote:
> optimizers like PyPy which aim to be precisely compatible with CPython
> semantics must look up "range" every time it occurs and decide if it's
> the real builtin range function, in which case it can emit/execute
> machine code which is something like the C for loop:
> 
>     for (i=start; i<stop; i+=step) {
>         do something interesting with i
>     }

I'd rather have a more flexible way of iterating over integer
ranges that's not biased towards closed-start open-end
intervals.

While that's convenient for indexing sequences,
if that's all you want to do you're better off iterating
over the sequence directly, maybe using enumerate and/or
zip. If you really need to iterate over ints, you
probably need them for some other purpose, in which
case you're likely to want any combination of
open/closed start/end.

For Pyrex I came up with a syntax that allows specifying
any combination equally easily and clearly:

    for 0 <= i < 10:
       # closed start, open end

    for 0 <= i <= 10:
       # both ends closed

    for 0 < i < 10:
       # both ends open

    for 10 >= i >= 0:
       # both ends closed, going backwards

etc.

I think something like this would be a much better use
of new syntax than just syntactic sugar for something
that's not used very often.
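For comparison, spelling the same four loops with today's range() (the Pyrex forms above are not valid Python, so the endpoints have to be adjusted by hand):

```python
for i in range(0, 10):       # for 0 <= i < 10: closed start, open end
    pass
for i in range(0, 11):       # for 0 <= i <= 10: both ends closed
    pass
for i in range(1, 10):       # for 0 < i < 10: both ends open
    pass
for i in range(10, -1, -1):  # for 10 >= i >= 0: closed ends, backwards
    pass
```

Each off-by-one adjustment is exactly the kind of manual bookkeeping the comparison syntax would remove.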

-- 
Greg

From rob.cliffe at btinternet.com  Mon Feb  2 23:44:39 2015
From: rob.cliffe at btinternet.com (Rob Cliffe)
Date: Mon, 02 Feb 2015 22:44:39 +0000
Subject: [Python-ideas] More useful slices
In-Reply-To: <54CFF972.7090405@canterbury.ac.nz>
References: <CAFpSVpKVGTywE5kuhQHf2WiK4jc9C+BV6r_AtJiEtwmWfxAS=Q@mail.gmail.com>
 <CAFpSVpJHcCvPh+q279kMb2w-_n2Dr4S0SuJWiYaCmy2R_HC27Q@mail.gmail.com>
 <CAFpSVpJdRFj4NAVZYVfhh8LQFoON5Ny8kzjvdTjri296-WkbqA@mail.gmail.com>
 <CAFpSVp+qbK83xbH0jz91xYw+ts1HMOVCu36rs6rkGhuKoOWOUw@mail.gmail.com>
 <CAFpSVpLdvBtY+kf1ix9PWOOLTfqPsw1MqSv=5oOLLNWQMfYSMg@mail.gmail.com>
 <CAFpSVpJkfaySjhO6FS-=iNaZUOmMNix9B81vE9+oTw-=DEgrvQ@mail.gmail.com>
 <CAFpSVpKMjDVFAh6oR=Dcy+BkW67FwVQX3X=KttLWLigkTke2Ew@mail.gmail.com>
 <CAFpSVpJVY69jvPfOisx1DEYc-3mEVYia0Cp5Gt0Y0kBL-WD6fA@mail.gmail.com>
 <20150201172712.GO2498@ando.pearwood.info>
 <f828d7ea-3d27-4df8-b237-4fcc630d4f6c@googlegroups.com>
 <5e6328ea-3897-4536-8e84-54185d3a7287@googlegroups.com>
 <CANc-5Uy=3g_dH1Nue3D7-NgMQTjSpGcq4ARVnir4Mm-DyT6JDg@mail.gmail.com>
 <54CFF972.7090405@canterbury.ac.nz>
Message-ID: <54CFFDD7.8030402@btinternet.com>


On 02/02/2015 22:25, Greg Ewing wrote:
> Skip Montanaro wrote:
>> optimizers like PyPy which aim to be precisely compatible with CPython
>> semantics must look up "range" every time it occurs and decide if it's
>> the real builtin range function, in which case it can emit/execute
>> machine code which is something like the C for loop:
>>
>>     for (i=start; i<stop; i+=step) {
>>         do something interesting with i
>>     }
>
> I'd rather have a more flexible way of iterating over integer
> ranges that's not biased towards closed-start open-end
> intervals.
>
> While that's convenient for indexing sequences,
> if that's all you want to do you're better off iterating
> over the sequence directly, maybe using enumerate and/or
> zip. If you really need to iterate over ints, you
> probably need them for some other purpose, in which
> case you're likely to want any combination of
> open/closed start/end.
>
> For Pyrex I came up with a syntax that allows specifying
> any combination equally easily and clearly:
>
>    for 0 <= i < 10:
>       # closed start, open end
>
>    for 0 <= i <= 10:
>       # both ends closed
>
>    for 0 < i < 10:
>       # both ends open
>
>    for 10 >= i >= 0:
>       # both ends closed, going backwards
>
> etc.
>
> I think something like this would be a much better use
> of new syntax than just syntactic sugar for something
> that's not used very often.
>
I like it!
Much clearer than any other suggested syntax.
Flexible: Allows closed or open ends, whichever is more convenient or 
better style.
Unambiguous:  'for' not followed by non-nested 'in' is currently not 
valid syntax.
Efficient: allows faster bytecode to be generated.
+1
Rob Cliffe

From tjreedy at udel.edu  Tue Feb  3 01:29:25 2015
From: tjreedy at udel.edu (Terry Reedy)
Date: Mon, 02 Feb 2015 19:29:25 -0500
Subject: [Python-ideas] change list() - branch from "More useful slices"
In-Reply-To: <CAN3CYHwdu-6Qjsvscs1okpuAZ5qNjZ9Vuf6UjizybTUcDpu-+A@mail.gmail.com>
References: <CAN3CYHwdu-6Qjsvscs1okpuAZ5qNjZ9Vuf6UjizybTUcDpu-+A@mail.gmail.com>
Message-ID: <map4p5$ih8$1@ger.gmane.org>

On 2/2/2015 2:24 PM, Alexander Heger wrote:

> Well, if something useful comes out of this discussion, maybe it would
> be an idea to extend list to allow the former syntax naively assumed,
>
> list('foo', 'bar', 'baz')

I see no point to this versus ['foo', 'bar', 'baz']

> similar to what exists for dict():
>
> dict(a=1, b=2, c=3)

The point to this is that the alternative {'a': 1, 'b': 2, 'c': 3} is a 
bit more awkward to write due to the quoting.


-- 
Terry Jan Reedy


From python at 2sn.net  Tue Feb  3 01:40:38 2015
From: python at 2sn.net (Alexander Heger)
Date: Tue, 3 Feb 2015 11:40:38 +1100
Subject: [Python-ideas] change list() - branch from "More useful slices"
In-Reply-To: <map4p5$ih8$1@ger.gmane.org>
References: <CAN3CYHwdu-6Qjsvscs1okpuAZ5qNjZ9Vuf6UjizybTUcDpu-+A@mail.gmail.com>
 <map4p5$ih8$1@ger.gmane.org>
Message-ID: <CAN3CYHytbV6yG0nYzEbYvF3KZRUWBe5akkUEJ25QChErcgAK=w@mail.gmail.com>

>> Well, if something useful comes out of this discussion, maybe it would
>> be an idea to extend list to allow the former syntax naively assumed,
>>
>> list('foo', 'bar', 'baz')
>
> I see no point to this versus ['foo', 'bar', 'baz']
>
>> similar to what exists for dict():
>>
>> dict(a=1, b=2, c=3)

The point is that it makes it easier for people to remember if they can
do it in a similar way (the same would go for tuple()).  This way you
could have pragmatic verbose forms for all three, the only point of
confusion being that when the single argument is an iterable, the
behaviour will be different, i.e., the current default behaviour.

If a single non-iterable item gives a list as well, that could be
nice, but makes the outcome dependent on the type of the object.  A
concern could be that the result is less easy to predict when just
reading code.  But so are many things.

-Alexander

From ethan at stoneleaf.us  Tue Feb  3 01:55:15 2015
From: ethan at stoneleaf.us (Ethan Furman)
Date: Mon, 02 Feb 2015 16:55:15 -0800
Subject: [Python-ideas] change list() - branch from "More useful slices"
In-Reply-To: <CAN3CYHytbV6yG0nYzEbYvF3KZRUWBe5akkUEJ25QChErcgAK=w@mail.gmail.com>
References: <CAN3CYHwdu-6Qjsvscs1okpuAZ5qNjZ9Vuf6UjizybTUcDpu-+A@mail.gmail.com>
 <map4p5$ih8$1@ger.gmane.org>
 <CAN3CYHytbV6yG0nYzEbYvF3KZRUWBe5akkUEJ25QChErcgAK=w@mail.gmail.com>
Message-ID: <54D01C73.3090102@stoneleaf.us>

On 02/02/2015 04:40 PM, Alexander Heger wrote:
>>> Well, if something useful comes out of this discussion, maybe it would
>>> be an idea to extend list to allow the former syntax naively assumed,
>>>
>>> list('foo', 'bar', 'baz')
>>
>> I see no point to this versus ['foo', 'bar', 'baz']
>>
>>> similar to what exists for dict():
>>>
>>> dict(a=1, b=2, c=3)
> 
> The point is that it makes it easier for people to remember if they can
> do it in a similar way (the same would go for tuple()).  This way you
> could have pragmatic verbose forms for all three, the only point of
> confusion being that when the single argument is an iterable, the
> behaviour will be different, i.e., the current default behaviour.
> 
> If a single non-iterable item gives a list as well, that could be
> nice, but makes the outcome dependent on the type of the object.  A
> concern could be that the result is less easy to predict when just
> reading code.  But so are many things.

The one-item case will be a bug-magnet, and outweigh the benefits of not typing [] or ().  That is a bad trade.

--
~Ethan~


From greg.ewing at canterbury.ac.nz  Wed Feb  4 09:32:46 2015
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Wed, 04 Feb 2015 21:32:46 +1300
Subject: [Python-ideas] Hooks into the IO system to intercept raw file
 reads/writes
In-Reply-To: <CACac1F-ZZzBK4aO=qmD=o4zyNAZhy2bHNcOTuifTdaL7jdvKeQ@mail.gmail.com>
References: <CACac1F8Nm9Sp+Hs1O+TgOJfsDDTRttomUB8DoZWHr7UvEb3E2g@mail.gmail.com>
 <CAP7+vJJYS6GW1SEf3Y=ESi9e3zW5usXyWzoX6j1e2ACjbZNghw@mail.gmail.com>
 <CACac1F9SXRT88-WxpiF5j6QsPOzCbu2KGAAWCQSZn3J70BF0xg@mail.gmail.com>
 <CAP7+vJLaCDQUgN9u+wkFrkC3zfRrE=WG-78PsL5R4Bw__u3Mtg@mail.gmail.com>
 <CACac1F-ZZzBK4aO=qmD=o4zyNAZhy2bHNcOTuifTdaL7jdvKeQ@mail.gmail.com>
Message-ID: <54D1D92E.7050102@canterbury.ac.nz>

Paul Moore wrote:
> I found the asyncio docs a bit of a struggle, TBH. Is
> there a tutorial? The basic idea is something along the lines of
> https://docs.python.org/3.4/library/asyncio-subprocess.html#subprocess-using-streams
> but I don't see how I'd modify that code (specifically the "yield from
> proc.stdout.readline()" bit) to get output from either stdout or
> stderr, whichever was ready first (and handle the 2 cases
> differently).

One way is to use two subsidiary coroutines, one
for each pipe.

The following seems to work:

#-----------------------------------------------------

import asyncio.subprocess
import sys

@asyncio.coroutine
def handle_pipe(label, pipe):
     while 1:
         data = yield from pipe.readline()
         if not data:
             return
         line = data.decode('ascii').rstrip()
         print("%s: %s" % (label, line))

@asyncio.coroutine
def run_subprocess():
     cmd = 'echo foo ; echo blarg >&2'
     proc = yield from asyncio.create_subprocess_exec(
         '/bin/sh', '-c', cmd,
             stdout = asyncio.subprocess.PIPE,
             stderr = asyncio.subprocess.PIPE)
     h1 = asyncio.async(handle_pipe("STDOUT", proc.stdout))
     h2 = asyncio.async(handle_pipe("STDERR", proc.stderr))
     yield from asyncio.wait([h1, h2])

if sys.platform == "win32":
     loop = asyncio.ProactorEventLoop()
     asyncio.set_event_loop(loop)
else:
     loop = asyncio.get_event_loop()

loop.run_until_complete(run_subprocess())

#-----------------------------------------------------

% python3.4 asyncio_subprocess.py
STDOUT: foo
STDERR: blarg

-- 
Greg

From p.f.moore at gmail.com  Wed Feb  4 10:08:42 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Wed, 4 Feb 2015 09:08:42 +0000
Subject: [Python-ideas] Hooks into the IO system to intercept raw file
	reads/writes
In-Reply-To: <54D1D92E.7050102@canterbury.ac.nz>
References: <CACac1F8Nm9Sp+Hs1O+TgOJfsDDTRttomUB8DoZWHr7UvEb3E2g@mail.gmail.com>
 <CAP7+vJJYS6GW1SEf3Y=ESi9e3zW5usXyWzoX6j1e2ACjbZNghw@mail.gmail.com>
 <CACac1F9SXRT88-WxpiF5j6QsPOzCbu2KGAAWCQSZn3J70BF0xg@mail.gmail.com>
 <CAP7+vJLaCDQUgN9u+wkFrkC3zfRrE=WG-78PsL5R4Bw__u3Mtg@mail.gmail.com>
 <CACac1F-ZZzBK4aO=qmD=o4zyNAZhy2bHNcOTuifTdaL7jdvKeQ@mail.gmail.com>
 <54D1D92E.7050102@canterbury.ac.nz>
Message-ID: <CACac1F_-nm95YaGUrccK+mnpndGvzw6Wfsb4p=teZoFvObUG=w@mail.gmail.com>

On 4 February 2015 at 08:32, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
> Paul Moore wrote:
>>
>> I found the asyncio docs a bit of a struggle, TBH. Is
>> there a tutorial? The basic idea is something along the lines of
>>
>> https://docs.python.org/3.4/library/asyncio-subprocess.html#subprocess-using-streams
>> but I don't see how I'd modify that code (specifically the "yield from
>> proc.stdout.readline()" bit) to get output from either stdout or
>> stderr, whichever was ready first (and handle the 2 cases
>> differently).
>
>
> One way is to use two subsidiary coroutines, one
> for each pipe.
>
> The following seems to work:
[...]

Thanks! I'll play with that and see if I can get my head round it.

Paul

From p.f.moore at gmail.com  Wed Feb  4 10:32:50 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Wed, 4 Feb 2015 09:32:50 +0000
Subject: [Python-ideas] Hooks into the IO system to intercept raw file
	reads/writes
In-Reply-To: <CACac1F_-nm95YaGUrccK+mnpndGvzw6Wfsb4p=teZoFvObUG=w@mail.gmail.com>
References: <CACac1F8Nm9Sp+Hs1O+TgOJfsDDTRttomUB8DoZWHr7UvEb3E2g@mail.gmail.com>
 <CAP7+vJJYS6GW1SEf3Y=ESi9e3zW5usXyWzoX6j1e2ACjbZNghw@mail.gmail.com>
 <CACac1F9SXRT88-WxpiF5j6QsPOzCbu2KGAAWCQSZn3J70BF0xg@mail.gmail.com>
 <CAP7+vJLaCDQUgN9u+wkFrkC3zfRrE=WG-78PsL5R4Bw__u3Mtg@mail.gmail.com>
 <CACac1F-ZZzBK4aO=qmD=o4zyNAZhy2bHNcOTuifTdaL7jdvKeQ@mail.gmail.com>
 <54D1D92E.7050102@canterbury.ac.nz>
 <CACac1F_-nm95YaGUrccK+mnpndGvzw6Wfsb4p=teZoFvObUG=w@mail.gmail.com>
Message-ID: <CACac1F8AO2NHrDmLqa=c1165KZ3HaxTWd5_NqT=dehxJ5cRD2g@mail.gmail.com>

On 4 February 2015 at 09:08, Paul Moore <p.f.moore at gmail.com> wrote:
>> The following seems to work:
> [...]
>
> Thanks! I'll play with that and see if I can get my head round it.

It works for me, but I get an error on termination:

RuntimeError: <_overlapped.Overlapped object at 0x00000000033C56F0>
still has pending operation at deallocation, the process may crash

I'm guessing that there's a missing wait on some object, but I'm not
sure which (or how to identify what's wrong from the error). I'll keep
digging, but on the assumption that it works on Unix, there may be a
subtle portability issue here.

Paul

From greg.ewing at canterbury.ac.nz  Wed Feb  4 12:32:27 2015
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Thu, 05 Feb 2015 00:32:27 +1300
Subject: [Python-ideas] Hooks into the IO system to intercept raw file
 reads/writes
In-Reply-To: <CACac1F8AO2NHrDmLqa=c1165KZ3HaxTWd5_NqT=dehxJ5cRD2g@mail.gmail.com>
References: <CACac1F8Nm9Sp+Hs1O+TgOJfsDDTRttomUB8DoZWHr7UvEb3E2g@mail.gmail.com>
 <CAP7+vJJYS6GW1SEf3Y=ESi9e3zW5usXyWzoX6j1e2ACjbZNghw@mail.gmail.com>
 <CACac1F9SXRT88-WxpiF5j6QsPOzCbu2KGAAWCQSZn3J70BF0xg@mail.gmail.com>
 <CAP7+vJLaCDQUgN9u+wkFrkC3zfRrE=WG-78PsL5R4Bw__u3Mtg@mail.gmail.com>
 <CACac1F-ZZzBK4aO=qmD=o4zyNAZhy2bHNcOTuifTdaL7jdvKeQ@mail.gmail.com>
 <54D1D92E.7050102@canterbury.ac.nz>
 <CACac1F_-nm95YaGUrccK+mnpndGvzw6Wfsb4p=teZoFvObUG=w@mail.gmail.com>
 <CACac1F8AO2NHrDmLqa=c1165KZ3HaxTWd5_NqT=dehxJ5cRD2g@mail.gmail.com>
Message-ID: <54D2034B.2000205@canterbury.ac.nz>

Paul Moore wrote:

> It works for me, but I get an error on termination:
> 
> RuntimeError: <_overlapped.Overlapped object at 0x00000000033C56F0>
> still has pending operation at deallocation, the process may crash

Hmmm, you could try adding this at the bottom
of run_subprocess:

     yield from proc.wait()

I didn't think that would be necessary, because
the pipes won't get closed until the subprocess
is finished, but maybe Windows is fussier.

-- 
Greg

From p.f.moore at gmail.com  Wed Feb  4 14:18:30 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Wed, 4 Feb 2015 13:18:30 +0000
Subject: [Python-ideas] Hooks into the IO system to intercept raw file
	reads/writes
In-Reply-To: <54D2034B.2000205@canterbury.ac.nz>
References: <CACac1F8Nm9Sp+Hs1O+TgOJfsDDTRttomUB8DoZWHr7UvEb3E2g@mail.gmail.com>
 <CAP7+vJJYS6GW1SEf3Y=ESi9e3zW5usXyWzoX6j1e2ACjbZNghw@mail.gmail.com>
 <CACac1F9SXRT88-WxpiF5j6QsPOzCbu2KGAAWCQSZn3J70BF0xg@mail.gmail.com>
 <CAP7+vJLaCDQUgN9u+wkFrkC3zfRrE=WG-78PsL5R4Bw__u3Mtg@mail.gmail.com>
 <CACac1F-ZZzBK4aO=qmD=o4zyNAZhy2bHNcOTuifTdaL7jdvKeQ@mail.gmail.com>
 <54D1D92E.7050102@canterbury.ac.nz>
 <CACac1F_-nm95YaGUrccK+mnpndGvzw6Wfsb4p=teZoFvObUG=w@mail.gmail.com>
 <CACac1F8AO2NHrDmLqa=c1165KZ3HaxTWd5_NqT=dehxJ5cRD2g@mail.gmail.com>
 <54D2034B.2000205@canterbury.ac.nz>
Message-ID: <CACac1F_3TpHnbcFaf6TLHtpN1AOYaHvfdUPKEVJuhY+7wpEHhw@mail.gmail.com>

On 4 February 2015 at 11:32, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
> Hmmm, you could try adding this at the bottom
> of run_subprocess:
>
>     yield from proc.wait()

Yeah, I had already tried that, but no improvement.

... found it. You need loop.close() at the end. Maybe the loop object
should close itself in the __del__ method, like file objects?
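For reference, a minimal sketch of the explicit-close pattern (the
coroutine here is a stand-in for the thread's run_subprocess, and
modern async/await syntax is used in place of 2015-era yield from):

```python
import asyncio

async def work():
    # Stand-in for the thread's run_subprocess coroutine.
    await asyncio.sleep(0)
    return "done"

loop = asyncio.new_event_loop()
try:
    result = loop.run_until_complete(work())
finally:
    # Explicitly closing the loop releases its I/O resources; relying
    # on __del__ leaves the loop's machinery alive until interpreter
    # teardown, which is what triggers the "pending operation at
    # deallocation" error on the Windows IOCP loop.
    loop.close()

print(result)
```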

Paul

From guido at python.org  Wed Feb  4 17:02:43 2015
From: guido at python.org (Guido van Rossum)
Date: Wed, 4 Feb 2015 08:02:43 -0800
Subject: [Python-ideas] Hooks into the IO system to intercept raw file
	reads/writes
In-Reply-To: <CACac1F8AO2NHrDmLqa=c1165KZ3HaxTWd5_NqT=dehxJ5cRD2g@mail.gmail.com>
References: <CACac1F8Nm9Sp+Hs1O+TgOJfsDDTRttomUB8DoZWHr7UvEb3E2g@mail.gmail.com>
 <CAP7+vJJYS6GW1SEf3Y=ESi9e3zW5usXyWzoX6j1e2ACjbZNghw@mail.gmail.com>
 <CACac1F9SXRT88-WxpiF5j6QsPOzCbu2KGAAWCQSZn3J70BF0xg@mail.gmail.com>
 <CAP7+vJLaCDQUgN9u+wkFrkC3zfRrE=WG-78PsL5R4Bw__u3Mtg@mail.gmail.com>
 <CACac1F-ZZzBK4aO=qmD=o4zyNAZhy2bHNcOTuifTdaL7jdvKeQ@mail.gmail.com>
 <54D1D92E.7050102@canterbury.ac.nz>
 <CACac1F_-nm95YaGUrccK+mnpndGvzw6Wfsb4p=teZoFvObUG=w@mail.gmail.com>
 <CACac1F8AO2NHrDmLqa=c1165KZ3HaxTWd5_NqT=dehxJ5cRD2g@mail.gmail.com>
Message-ID: <CAP7+vJJsStTqzp4xxAFPayd-E-cHsHPm_xnHKu=-fgE5vgEmZQ@mail.gmail.com>

Regarding the error, Victor has made a lot of fixes to the iocp code for
asyncio. Maybe it's fixed in the repo?
On Feb 4, 2015 1:33 AM, "Paul Moore" <p.f.moore at gmail.com> wrote:

> On 4 February 2015 at 09:08, Paul Moore <p.f.moore at gmail.com> wrote:
> >> The following seems to work:
> > [...]
> >
> > Thanks! I'll play with that and see if I can get my head round it.
>
> It works for me, but I get an error on termination:
>
> RuntimeError: <_overlapped.Overlapped object at 0x00000000033C56F0>
> still has pending operation at deallocation, the process may crash
>
> I'm guessing that there's a missing wait on some object, but I'm not
> sure which (or how to identify what's wrong from the error). I'll keep
> digging, but on the assumption that it works on Unix, there may be a
> subtle portability issue here.
>
> Paul
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>

From greg.ewing at canterbury.ac.nz  Wed Feb  4 21:21:07 2015
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Thu, 05 Feb 2015 09:21:07 +1300
Subject: [Python-ideas] Hooks into the IO system to intercept raw file
 reads/writes
In-Reply-To: <CACac1F_3TpHnbcFaf6TLHtpN1AOYaHvfdUPKEVJuhY+7wpEHhw@mail.gmail.com>
References: <CACac1F8Nm9Sp+Hs1O+TgOJfsDDTRttomUB8DoZWHr7UvEb3E2g@mail.gmail.com>
 <CAP7+vJJYS6GW1SEf3Y=ESi9e3zW5usXyWzoX6j1e2ACjbZNghw@mail.gmail.com>
 <CACac1F9SXRT88-WxpiF5j6QsPOzCbu2KGAAWCQSZn3J70BF0xg@mail.gmail.com>
 <CAP7+vJLaCDQUgN9u+wkFrkC3zfRrE=WG-78PsL5R4Bw__u3Mtg@mail.gmail.com>
 <CACac1F-ZZzBK4aO=qmD=o4zyNAZhy2bHNcOTuifTdaL7jdvKeQ@mail.gmail.com>
 <54D1D92E.7050102@canterbury.ac.nz>
 <CACac1F_-nm95YaGUrccK+mnpndGvzw6Wfsb4p=teZoFvObUG=w@mail.gmail.com>
 <CACac1F8AO2NHrDmLqa=c1165KZ3HaxTWd5_NqT=dehxJ5cRD2g@mail.gmail.com>
 <54D2034B.2000205@canterbury.ac.nz>
 <CACac1F_3TpHnbcFaf6TLHtpN1AOYaHvfdUPKEVJuhY+7wpEHhw@mail.gmail.com>
Message-ID: <54D27F33.3050204@canterbury.ac.nz>

Paul Moore wrote:
> ... found it. You need loop.close() at the end. Maybe the loop object
> should close itself in the __del__ method, like file objects?

Yeah, this looks like a bug -- I didn't notice anything
in the docs about it being mandatory to close() a loop
when you're finished with it, and such a requirement
seems rather unpythonic.

-- 
Greg

From guido at python.org  Wed Feb  4 21:55:46 2015
From: guido at python.org (Guido van Rossum)
Date: Wed, 4 Feb 2015 12:55:46 -0800
Subject: [Python-ideas] Hooks into the IO system to intercept raw file
	reads/writes
In-Reply-To: <54D27F33.3050204@canterbury.ac.nz>
References: <CACac1F8Nm9Sp+Hs1O+TgOJfsDDTRttomUB8DoZWHr7UvEb3E2g@mail.gmail.com>
 <CAP7+vJJYS6GW1SEf3Y=ESi9e3zW5usXyWzoX6j1e2ACjbZNghw@mail.gmail.com>
 <CACac1F9SXRT88-WxpiF5j6QsPOzCbu2KGAAWCQSZn3J70BF0xg@mail.gmail.com>
 <CAP7+vJLaCDQUgN9u+wkFrkC3zfRrE=WG-78PsL5R4Bw__u3Mtg@mail.gmail.com>
 <CACac1F-ZZzBK4aO=qmD=o4zyNAZhy2bHNcOTuifTdaL7jdvKeQ@mail.gmail.com>
 <54D1D92E.7050102@canterbury.ac.nz>
 <CACac1F_-nm95YaGUrccK+mnpndGvzw6Wfsb4p=teZoFvObUG=w@mail.gmail.com>
 <CACac1F8AO2NHrDmLqa=c1165KZ3HaxTWd5_NqT=dehxJ5cRD2g@mail.gmail.com>
 <54D2034B.2000205@canterbury.ac.nz>
 <CACac1F_3TpHnbcFaf6TLHtpN1AOYaHvfdUPKEVJuhY+7wpEHhw@mail.gmail.com>
 <54D27F33.3050204@canterbury.ac.nz>
Message-ID: <CAP7+vJLP0acO2zoDoZO_yoePptFw-N96fBHzoTr_ZekBLRG6QA@mail.gmail.com>

Well, there is a lot of I/O machinery that keeps itself alive unless you
explicitly close it. So __del__ is unlikely to be called until it's too
late (at the dreaded module teardown stage). We may have to document this
better, but explicit closing a loop is definitely strongly recommended. On
Windows I think it's mandatory if you use the IOCP loop.

On Wed, Feb 4, 2015 at 12:21 PM, Greg Ewing <greg.ewing at canterbury.ac.nz>
wrote:

> Paul Moore wrote:
>
>> ... found it. You need loop.close() at the end. Maybe the loop object
>> should close itself in the __del__ method, like file objects?
>>
>
> Yeah, this looks like a bug -- I didn't notice anything
> in the docs about it being mandatory to close() a loop
> when you're finished with it, and such a requirement
> seems rather unpythonic.
>
> --
> Greg
>



-- 
--Guido van Rossum (python.org/~guido)

From jdhardy at gmail.com  Wed Feb  4 22:10:38 2015
From: jdhardy at gmail.com (Jeff Hardy)
Date: Wed, 4 Feb 2015 21:10:38 +0000
Subject: [Python-ideas] Python Mobile-sig
Message-ID: <CAF7AXFE4gcTcm69yygVjAAhkX0cB2b_TjoX6vb5_s+8UWe2Ssw@mail.gmail.com>

There is now a Python SIG dedicated to running Python on mobile
devices. Sign up for the mobile-sig at python.org mailing list and find
out more at [1].

The Mobile-SIG exists to improve the usability of Python on mobile
devices, as mobile platforms have their own unique requirements and
constraints.

There are two goals:
* To collaborate on porting CPython to mobile platforms and ensure
that other Python implementations (i.e. Jython, IronPython) present
much the same environment as CPython when running on mobile devices.
* To collaborate on the design of modules for mobile-centric features
such as cameras, GPS, accelerometers, and other sensors to ensure
cross-platform and cross-implementation compatibility.

The intent is to work on mobile-centric features in this group before
proposal to python-dev, similar to how the distutils-sig operates.

- Jeff

[1] https://www.python.org/community/sigs/current/mobile-sig/

From greg.ewing at canterbury.ac.nz  Thu Feb  5 06:05:25 2015
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Thu, 05 Feb 2015 18:05:25 +1300
Subject: [Python-ideas] Hooks into the IO system to intercept raw file
 reads/writes
In-Reply-To: <CAP7+vJLP0acO2zoDoZO_yoePptFw-N96fBHzoTr_ZekBLRG6QA@mail.gmail.com>
References: <CACac1F8Nm9Sp+Hs1O+TgOJfsDDTRttomUB8DoZWHr7UvEb3E2g@mail.gmail.com>
 <CAP7+vJJYS6GW1SEf3Y=ESi9e3zW5usXyWzoX6j1e2ACjbZNghw@mail.gmail.com>
 <CACac1F9SXRT88-WxpiF5j6QsPOzCbu2KGAAWCQSZn3J70BF0xg@mail.gmail.com>
 <CAP7+vJLaCDQUgN9u+wkFrkC3zfRrE=WG-78PsL5R4Bw__u3Mtg@mail.gmail.com>
 <CACac1F-ZZzBK4aO=qmD=o4zyNAZhy2bHNcOTuifTdaL7jdvKeQ@mail.gmail.com>
 <54D1D92E.7050102@canterbury.ac.nz>
 <CACac1F_-nm95YaGUrccK+mnpndGvzw6Wfsb4p=teZoFvObUG=w@mail.gmail.com>
 <CACac1F8AO2NHrDmLqa=c1165KZ3HaxTWd5_NqT=dehxJ5cRD2g@mail.gmail.com>
 <54D2034B.2000205@canterbury.ac.nz>
 <CACac1F_3TpHnbcFaf6TLHtpN1AOYaHvfdUPKEVJuhY+7wpEHhw@mail.gmail.com>
 <54D27F33.3050204@canterbury.ac.nz>
 <CAP7+vJLP0acO2zoDoZO_yoePptFw-N96fBHzoTr_ZekBLRG6QA@mail.gmail.com>
Message-ID: <54D2FA15.1060008@canterbury.ac.nz>

Guido van Rossum wrote:
> Well, there is a lot of I/O machinery that keeps itself alive unless you 
> explicitly close it. So __del__ is unlikely to be called until it's too 
> late (at the dreaded module teardown stage).

I thought we weren't doing module teardown any more?

-- 
Greg

From ncoghlan at gmail.com  Thu Feb  5 11:53:15 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 5 Feb 2015 20:53:15 +1000
Subject: [Python-ideas] Hooks into the IO system to intercept raw file
	reads/writes
In-Reply-To: <54D2FA15.1060008@canterbury.ac.nz>
References: <CACac1F8Nm9Sp+Hs1O+TgOJfsDDTRttomUB8DoZWHr7UvEb3E2g@mail.gmail.com>
 <CAP7+vJJYS6GW1SEf3Y=ESi9e3zW5usXyWzoX6j1e2ACjbZNghw@mail.gmail.com>
 <CACac1F9SXRT88-WxpiF5j6QsPOzCbu2KGAAWCQSZn3J70BF0xg@mail.gmail.com>
 <CAP7+vJLaCDQUgN9u+wkFrkC3zfRrE=WG-78PsL5R4Bw__u3Mtg@mail.gmail.com>
 <CACac1F-ZZzBK4aO=qmD=o4zyNAZhy2bHNcOTuifTdaL7jdvKeQ@mail.gmail.com>
 <54D1D92E.7050102@canterbury.ac.nz>
 <CACac1F_-nm95YaGUrccK+mnpndGvzw6Wfsb4p=teZoFvObUG=w@mail.gmail.com>
 <CACac1F8AO2NHrDmLqa=c1165KZ3HaxTWd5_NqT=dehxJ5cRD2g@mail.gmail.com>
 <54D2034B.2000205@canterbury.ac.nz>
 <CACac1F_3TpHnbcFaf6TLHtpN1AOYaHvfdUPKEVJuhY+7wpEHhw@mail.gmail.com>
 <54D27F33.3050204@canterbury.ac.nz>
 <CAP7+vJLP0acO2zoDoZO_yoePptFw-N96fBHzoTr_ZekBLRG6QA@mail.gmail.com>
 <54D2FA15.1060008@canterbury.ac.nz>
Message-ID: <CADiSq7dMC6H7=FiyMeryv9f=MW6551UTnrXfGpEZVMqpseMcWQ@mail.gmail.com>

On 5 February 2015 at 15:05, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
> Guido van Rossum wrote:
>>
>> Well, there is a lot of I/O machinery that keeps itself alive unless you
>> explicitly close it. So __del__ is unlikely to be called until it's too late
>> (at the dreaded module teardown stage).
>
>
> I thought we weren't doing module teardown any more?

It's still there as a last resort. The thing that changed is that
__del__ methods don't necessarily prevent cycle collection any more,
so more stuff should be cleaned up nicely by the cyclic GC before we
hit whatever is left with the module cleanup hammer.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From chris.barker at noaa.gov  Thu Feb  5 01:48:41 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Wed, 4 Feb 2015 16:48:41 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
Message-ID: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>

Hi folks,

Time for Take 2 (or is that take 200?)

No, I haven't dropped this ;-) After the lengthy thread, I went through and
tried to take into account all the thoughts and comments. The result is a
PEP with a bunch more explanation, and some slightly different decisions.

TL;DR:

Please take a look and express your approval or disapproval (+1, +0, -0,
-1, or whatever). Please keep in mind that this has been discussed a lot,
so try not to re-iterate what's already been said. And I'd really like
it if you only gave -1, thumbs down, etc, if you really think this is worse
than not having anything at all, rather than you'd just prefer something a
little different. Also, if you do think this version is worse than nothing,
but a different choice or two could change that, then let us know what --
maybe we can reach consensus on something slightly different.

Also keep in mind that the goal here is (in the words of Nick Coghlan):
"""
the key requirement here should be "provide a binary float
comparison function that is significantly less wrong than the current
'a == b'"
"""
No need to split hairs.

https://www.python.org/dev/peps/pep-0485/

Full PEP below, and on GitHub (with example code, tests, etc.) here:

https://github.com/PythonCHB/close_pep

Full Detail:
=========

Here are the big changes:

 * Symmetric test. Essentially, whether you want the symmetric or
asymmetric test depends on exactly what question you are asking, but I
think we need to pick one. It makes little to no difference for most
use cases (see below), but I was persuaded by:

  - Principle of least surprise -- equality is symmetric, shouldn't
"closeness" be too?
  - People don't want to have to remember what order to put the arguments
--even if it doesn't matter, you have to think about whether it matters. A
symmetric test doesn't require that.
  - There are times when the order of comparison is not known -- for
example if the user wants to test a bunch of values all against each other.

On the other hand -- the asymmetric test is a better option only when you
are specifically asking the question: is this (computed) value within a
precise tolerance of another (known) value, and that tolerance is fairly
large (i.e. 1% - 10%)? While this was brought up on this thread, no one had
an actual use-case like that. And is it really that hard to write:

abs(known - computed) <= 0.1 * known


NOTE: that my example code has a flag with which you can pick which test to
use -- I did that for experimentation, but I think we want a simple API
here. As soon as you provide that option people need to concern themselves
with what to use.

* Defaults: I set the default relative tolerance to 1e-9 -- because that is
a little more than half the precision available in a Python float, and it's
the largest tolerance for which ALL the possible tests result in exactly
the same result for all possible float values (see the PEP for the math).
The default for absolute tolerance is 0.0. Better for people to get a
failed test with a check for zero right away than accidentally use a
totally inappropriate value.

Contentious Issues
===============

* The Symmetric vs. Asymmetric thing -- some folks seemed to have strong
ideas about that, but I hope we can all agree that something is better than
nothing, and symmetric is probably the least surprising. And take a look at
the math in the PEP -- for small tolerance, which is the likely case for
most tests -- it literally makes no difference at all which test is used.

* Default for abs_tol -- or even whether to have it or not. This is a tough
one -- there are a lot of use-cases for people testing against zero, so
it's a pity not to have it do something reasonable by default in that case.
However, there really is no reasonable universal default.

As Guido said:
 """
For someone who for whatever reason is manipulating quantities that are in
the range of 1e-100,
1e-12 is about as large as infinity.
"""
And testing against zero really requires an absolute tolerance test, which
is not hard to simply write:

abs(my_val) <= some_tolerance

and is covered by the unittest assert already.

So why include an absolute_tolerance at all? -- Because this is likely to
be used inside a comprehension or in a unittest sequence comparison, so
it is very helpful to be able to test a bunch of values some of which may
be zero, all with a single function with a single set of parameters. And a
similar approach has proven to be useful for numpy and the statistics test
module. And again, the default is abs_tol=0.0 -- if the user sticks with
defaults, they won't get bitten.

* Also -- not really contentious (I hope) but I left the details out about
how the unittest assertion would work. That's because I don't really use
unittest -- I'm hoping someone else will write that. If it comes down to
it, I can do it -- it will probably look a lot like the existing sequence
comparing assertions.

OK -- I think that's it.

Let's see if we can drive this home.

-Chris


PEP: 485
Title: A Function for testing approximate equality
Version: $Revision$
Last-Modified: $Date$
Author: Christopher Barker <Chris.Barker at noaa.gov>
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 20-Jan-2015
Python-Version: 3.5
Post-History:


Abstract
========

This PEP proposes the addition of a function to the standard library
that determines whether one value is approximately equal or "close" to
another value. It is also proposed that an assertion be added to the
``unittest.TestCase`` class to provide easy access for those using
unittest for testing.


Rationale
=========

Floating point values contain limited precision, which results in
their being unable to exactly represent some values, and for error to
accumulate with repeated computation.  As a result, it is common
advice to only use an equality comparison in very specific situations.
Often an inequality comparison fits the bill, but there are times
(often in testing) where the programmer wants to determine whether a
computed value is "close" to an expected value, without requiring them
to be exactly equal. This is common enough, particularly in testing,
and not always obvious how to do it, so it would be a useful addition
to the standard library.


Existing Implementations
------------------------

The standard library includes the ``unittest.TestCase.assertAlmostEqual``
method, but it:

* Is buried in the unittest.TestCase class

* Is an assertion, so you can't use it as a general test at the command
  line, etc. (easily)

* Is an absolute difference test. Often the measure of difference
  requires, particularly for floating point numbers, a relative error,
  i.e. "Are these two values within x% of each other?", rather than an
  absolute error, particularly when the magnitude of the values is
  unknown a priori.

The numpy package has the ``allclose()`` and ``isclose()`` functions,
but they are only available with numpy.

The statistics package tests include an implementation, used for its
unit tests.

One can also find discussion and sample implementations on Stack
Overflow and other help sites.

Many other non-python systems provide such a test, including the Boost C++
library and the APL language (reference?).

These existing implementations indicate that this is a common need and
not trivial to write oneself, making it a candidate for the standard
library.


Proposed Implementation
=======================

NOTE: this PEP is the result of an extended discussion on the
python-ideas list [1]_.

The new function will have the following signature::

  is_close(a, b, rel_tolerance=1e-9, abs_tolerance=0.0)

``a`` and ``b``: are the two values to be tested for relative closeness

``rel_tolerance``: is the relative tolerance -- it is the amount of
error allowed, relative to the magnitude of a and b. For example, to
set a tolerance of 5%, pass rel_tolerance=0.05. The default tolerance
is 1e-9, which assures that the two values are the same within about 9
decimal digits.

``abs_tolerance``: is a minimum absolute tolerance level -- useful for
comparisons near zero.

Modulo error checking, etc, the function will return the result of::

  abs(a-b) <= max( rel_tolerance * min(abs(a), abs(b)), abs_tolerance )
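
A minimal sketch of that comparison in Python (the non-finite handling
follows the rules in the next section; the full reference
implementation includes more error checking):

```python
import math

def is_close(a, b, rel_tolerance=1e-9, abs_tolerance=0.0):
    # NaN is not considered close to anything, including another NaN.
    if math.isnan(a) or math.isnan(b):
        return False
    # inf and -inf are only considered close to themselves.
    if math.isinf(a) or math.isinf(b):
        return a == b
    # The Boost "strong" test, with an absolute-tolerance floor for
    # comparisons near zero.
    return abs(a - b) <= max(rel_tolerance * min(abs(a), abs(b)),
                             abs_tolerance)
```

For example, ``is_close(1.0, 1.0 + 1e-10)`` is true with the defaults,
while ``is_close(0.0, 1e-12)`` is false unless abs_tolerance is set to
a non-zero value.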


Handling of non-finite numbers
------------------------------

The IEEE 754 special values of NaN, inf, and -inf will be handled
according to IEEE rules. Specifically, NaN is not considered close to
any other value, including NaN. inf and -inf are only considered close
to themselves.


Non-float types
---------------

The primary use-case is expected to be floating point numbers.
However, users may want to compare other numeric types similarly. In
theory, it should work for any type that supports ``abs()``,
comparisons, and subtraction.  The code will be written and tested to
accommodate these types:

* ``Decimal``: for Decimal, the tolerance must be set to a Decimal type.

* ``int``

* ``Fraction``

* ``complex``: for complex, ``abs(z)`` will be used for scaling and
  comparison.


Behavior near zero
------------------

Relative comparison is problematic if either value is zero. By
definition, no value is small relative to zero. And computationally,
if either value is zero, the difference is the absolute value of the
other value, and the computed absolute tolerance will be rel_tolerance
times that value. rel_tolerance is always less than one, so the
difference will never be less than the tolerance.

However, while mathematically correct, there are many use cases where
a user will need to know if a computed value is "close" to zero. This
calls for an absolute tolerance test. If the user needs to call this
function inside a loop or comprehension, where some, but not all, of
the expected values may be zero, it is important that both a relative
tolerance and absolute tolerance can be tested for with a single
function with a single set of parameters.

There is a similar issue if the two values to be compared straddle zero:
if a is approximately equal to -b, then a and b will never be computed
as "close".

To handle this case, an optional parameter, ``abs_tolerance``, can be
used to set a minimum tolerance used in the case of very small or zero
computed absolute tolerance. That is, the values will always be
considered close if the difference between them is less than the
abs_tolerance.

The default absolute tolerance value is set to zero because there is
no value that is appropriate for the general case. It is impossible to
know an appropriate value without knowing the likely values expected
for a given use case. If all the values tested are on the order of
one, then a value of about 1e-8 might be appropriate, but that would
be far too large if expected values are on the order of 1e-12 or smaller.

Any non-zero default might result in users' tests passing totally
inappropriately. If, on the other hand, a test against zero fails the
first time with defaults, a user will be prompted to select an
appropriate value for the problem at hand in order to get the test to
pass.

NOTE: the author of this PEP has resolved to go back over many of
his tests that use the numpy ``allclose()`` function, which provides
a default abs_tolerance, and make sure that the default value is
appropriate.

If the user sets the rel_tolerance parameter to 0.0, then only the
absolute tolerance will affect the result. While not the goal of the
function, it does allow it to be used as a purely absolute tolerance
check as well.

unittest assertion
-------------------

[need text here]

implementation
--------------

A sample implementation is available (as of Jan 22, 2015) on GitHub:

https://github.com/PythonCHB/close_pep/blob/master

This implementation has a flag that lets the user select which
relative tolerance test to apply -- this PEP does not suggest that
that flag be retained, but rather that the strong test be selected.

Relative Difference
===================

There are essentially two ways to think about how close two numbers
are to each-other:

Absolute difference: simply ``abs(a-b)``

Relative difference: ``abs(a-b)/scale_factor`` [2]_.

The absolute difference is trivial enough that this proposal focuses
on the relative difference.

Usually, the scale factor is some function of the values under
consideration, for instance:

1) The absolute value of one of the input values

2) The maximum absolute value of the two

3) The minimum absolute value of the two.

4) The absolute value of the arithmetic mean of the two

These lead to the following possibilities for determining if two
values, a and b, are close to each other.

1) ``abs(a-b) <= tol*abs(a)``

2) ``abs(a-b) <= tol * max( abs(a), abs(b) )``

3) ``abs(a-b) <= tol * min( abs(a), abs(b) )``

4) ``abs(a-b) <= tol * abs(a + b)/2``

NOTE: (2) and (3) can also be written as:

2) ``(abs(a-b) <= tol*abs(a)) or (abs(a-b) <= tol*abs(b))``

3) ``(abs(a-b) <= tol*abs(a)) and (abs(a-b) <= tol*abs(b))``

(Boost refers to these as the "weak" and "strong" formulations [3]_)
These can be a tiny bit more computationally efficient, and thus are
used in the example code.

Each of these formulations can lead to slightly different results.
However, if the tolerance value is small, the differences are quite
small. In fact, often less than available floating point precision.
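
To make that concrete, a quick sketch of the "weak" and "strong"
formulations (values chosen so the difference lands between the two
scaled tolerances):

```python
def weak_close(a, b, tol):
    # Scales the tolerance by the larger magnitude -- Boost "weak".
    return abs(a - b) <= tol * max(abs(a), abs(b))

def strong_close(a, b, tol):
    # Scales the tolerance by the smaller magnitude -- Boost "strong".
    return abs(a - b) <= tol * min(abs(a), abs(b))

# With a large (10%) tolerance the two tests can disagree:
print(weak_close(10.0, 9.05, 0.1))    # True:  0.95 <= 1.0
print(strong_close(10.0, 9.05, 0.1))  # False: 0.95 >  0.905
# With a small tolerance they agree:
print(weak_close(1.0, 1.0 + 5e-10, 1e-9))    # True
print(strong_close(1.0, 1.0 + 5e-10, 1e-9))  # True
```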

How much difference does it make?
---------------------------------

When selecting a method to determine closeness, one might want to know
how much  of a difference it could make to use one test or the other
-- i.e. how many values are there (or what range of values) that will
pass one test, but not the other.

The largest difference is between options (2) and (3) where the
allowable absolute difference is scaled by either the larger or
smaller of the values.

Define ``delta`` to be the difference between the allowable absolute
tolerance defined by the larger value and that defined by the smaller
value. That is, the amount that the two input values need to be
different in order to get a different result from the two tests.
``tol`` is the relative tolerance value.

Assume that ``a`` is the larger value and that both ``a`` and ``b``
are positive, to make the analysis a bit easier. ``delta`` is
therefore::

  delta = tol * (a-b)


or::

  delta / tol = (a-b)


The largest absolute difference that would pass the test: ``(a-b)``,
equals the tolerance times the larger value::

  (a-b) = tol * a


Substituting into the expression for delta::

  delta / tol = tol * a


so::

  delta = tol**2 * a


For example, for ``a = 10``, ``b = 9``, ``tol = 0.1`` (10%):

maximum tolerance ``tol * a == 0.1 * 10 == 1.0``

minimum tolerance ``tol * b == 0.1 * 9.0 == 0.9``

delta = ``1.0 - 0.9 = 0.1``  or  ``tol**2 * a = 0.1**2 * 10 = 0.1``

The absolute difference between the maximum and minimum tolerance
tests in this case could be substantial. However, the primary use
case for the proposed function is testing the results of computations.
In that case a relative tolerance is likely to be selected of much
smaller magnitude.

For example, a relative tolerance of ``1e-8`` is about half the
precision available in a python float. In that case, the difference
between the two tests is ``1e-8**2 * a`` or ``1e-16 * a``, which is
close to the limit of precision of a python float. If the relative
tolerance is set to the proposed default of 1e-9 (or smaller), the
difference between the two tests will be lost to the limits of
precision of floating point. That is, each of the four methods will
yield exactly the same results for all values of a and b.

In addition, in common use, tolerances are defined to 1 significant
figure -- that is, 1e-8 is specifying about 8 decimal digits of
accuracy. So the difference between the various possible tests is well
below the precision to which the tolerance is specified.


Symmetry
--------

A relative comparison can be either symmetric or non-symmetric. For a
symmetric algorithm:

``is_close_to(a,b)`` is always the same as ``is_close_to(b,a)``

If a relative closeness test uses only one of the values (such as (1)
above), then the result is asymmetric, i.e. is_close_to(a,b) is not
necessarily the same as is_close_to(b,a).

Which approach is most appropriate depends on what question is being
asked. If the question is: "are these two numbers close to each
other?", there is no obvious ordering, and a symmetric test is most
appropriate.

However, if the question is: "Is the computed value within x% of this
known value?", then it is appropriate to scale the tolerance to the
known value, and an asymmetric test is most appropriate.

From the previous section, it is clear that either approach would
yield the same or similar results in the common use cases. In that
case, the goal of this proposal is to provide a function that is least
likely to produce surprising results.

The symmetric approach provides an appealing consistency -- it
mirrors the symmetry of equality, and is less likely to confuse
people. A symmetric test also relieves the user of the need to think
about the order in which to set the arguments.  It was also pointed
out that there may be some cases where the order of evaluation may not
be well defined, for instance in the case of comparing a set of values
all against each other.

There may be cases when a user does need to know that a value is
within a particular range of a known value. In that case, it is easy
enough to simply write the test directly::

  if a-b <= tol*a:

(assuming a > b in this case). There is little need to provide a
function for this particular case.

This proposal uses a symmetric test.
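
The asymmetry of test (1) above is easy to demonstrate (a sketch; a
10% tolerance exaggerates the effect):

```python
def close_to(a, b, tol=0.1):
    # Test (1): scales the tolerance by the first argument only,
    # so the result depends on the argument order.
    return abs(a - b) <= tol * abs(a)

print(close_to(10.0, 9.05))  # True:  0.95 <= 0.1 * 10.0
print(close_to(9.05, 10.0))  # False: 0.95 >  0.1 * 9.05
```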

Which symmetric test?
---------------------

There are three symmetric tests considered:

The case that uses the arithmetic mean of the two values requires that
the values either be added together before dividing by 2, which could
result in extra overflow to inf for very large numbers, or that each
value be divided by two before being added together, which could
result in underflow to zero for very small numbers. This effect would
only occur at the very limit of float values, but it was decided there
was no benefit to the method worth reducing the range of
functionality.

This leaves the Boost "weak" test (2), which uses the larger of the
two values to scale the tolerance, and the Boost "strong" test (3),
which uses the smaller of the values to scale the tolerance. For small
tolerances they yield the same result, but this proposal uses the
Boost "strong" test: it is symmetric and provides a slightly stricter
criterion for tolerance.


Defaults
========

Default values are required for the relative and absolute tolerance.

Relative Tolerance Default
--------------------------

The relative tolerance required for two values to be considered
"close" is entirely use-case dependent. Nevertheless, the relative
tolerance needs to be less than 1.0, and greater than 1e-16
(approximate precision of a python float). The value of 1e-9 was
selected because it is the largest relative tolerance for which the
various possible methods will yield the same result, and it is also
about half of the precision available to a python float. In the
general case, a good numerical algorithm is not expected to lose more
than about half of available digits of accuracy, and if a much larger
tolerance is acceptable, the user should be considering the proper
value in that case. Thus 1e-9 is expected to "just work" for many
cases.

Absolute tolerance default
--------------------------

The absolute tolerance value will be used primarily for comparing to
zero. The absolute tolerance required to determine if a value is
"close" to zero is entirely use-case dependent. There is also
essentially no bounds to the useful range -- expected values would
conceivably be anywhere within the limits of a python float.  Thus a
default of 0.0 is selected.

If, for a given use case, a user needs to compare to zero, the test
will be guaranteed to fail the first time, and the user can select an
appropriate value.

It was suggested that comparing to zero is, in fact, a common use case
(evidence suggests that the numpy functions are often used with zero).
In this case, it would be desirable to have a "useful" default. Values
around 1e-8 were suggested, being about half of floating point
precision for values near 1.

However, to quote The Zen: "In the face of ambiguity, refuse the
temptation to guess." Guessing that users will most often be concerned
with values close to 1.0 would lead to spurious passing tests when used
with smaller values -- this is potentially more damaging than
requiring the user to thoughtfully select an appropriate value.


Expected Uses
=============

The primary expected use case is various forms of testing -- "are the
computed results near what I expect?" This sort of test may or may
not be part of a formal unit testing suite. Such testing could be
used one-off at the command line, in an IPython notebook, as part of
doctests, or as simple asserts in an ``if __name__ == "__main__"``
block.

The proposed ``unittest.TestCase`` assertion would of course be used
in unit testing.

It would also be an appropriate function to use for the termination
criteria for a simple iterative solution to an implicit function::

    guess = something
    while True:
        new_guess = implicit_function(guess, *args)
        if is_close(new_guess, guess):
            break
        guess = new_guess


Inappropriate uses
------------------

One use case for floating-point comparison is testing the accuracy of
a numerical algorithm. However, in this case, the numerical analyst
ideally would be doing careful error propagation analysis, and should
understand exactly what to test for. It is also likely that ULP (Unit
in the Last Place) comparison would be called for. While this function
may prove useful in such situations, it is not intended to be used in
that way.


Other Approaches
================

``unittest.TestCase.assertAlmostEqual``
---------------------------------------

(https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertAlmostEqual)

Tests that values are approximately (or not approximately) equal by
computing the difference, rounding to the given number of decimal
places (default 7), and comparing to zero.

This method is purely an absolute tolerance test, and does not address
the need for a relative tolerance test.
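A small demonstration of why rounding the difference is a purely
absolute test (using the existing ``unittest`` API)::

```python
import unittest

class Demo(unittest.TestCase):
    def runTest(self):
        pass

t = Demo()

# Passes: |5e-8 - 4e-8| = 1e-8 rounds to zero at 7 decimal places,
# even though the two values differ by 25% in relative terms.
t.assertAlmostEqual(4e-8, 5e-8)

# Fails: a difference of 1.0 never rounds away, however large the
# values are relative to it.
try:
    t.assertAlmostEqual(1e10, 1e10 + 1.0)
except AssertionError:
    print("not almost equal")
```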

numpy ``isclose()``
-------------------

http://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.isclose.html

The numpy package provides the vectorized functions ``isclose()`` and
``allclose()``, for use cases similar to this proposal:

``isclose(a, b, rtol=1e-05, atol=1e-08, equal_nan=False)``

      Returns a boolean array where two arrays are element-wise equal
      within a tolerance.

      The tolerance values are positive, typically very small numbers.
      The relative difference (rtol * abs(b)) and the absolute
      difference atol are added together to compare against the
      absolute difference between a and b.

In this approach, the absolute and relative tolerances are added
together, rather than ``or``-ed as in this proposal. This is
computationally simpler, and if the relative tolerance term is larger
than the absolute tolerance, the addition will have little effect.
However, if the absolute and relative tolerances are of similar
magnitude, then the allowed difference will be about twice as large
as expected.

This makes the function harder to understand, with no computational
advantage in this context.

Even more critically, if the values passed in are small compared to
the absolute tolerance, then the relative tolerance will be
completely swamped, perhaps unexpectedly.

This is why, in this proposal, the absolute tolerance defaults to zero
-- the user will be required to choose a value appropriate for the
values at hand.
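The additive behavior can be seen with numpy's documented formula
re-expressed in pure Python (illustrative only; the real function is
vectorized and has an ``equal_nan`` option):

```python
# numpy's documented scalar test: |a - b| <= atol + rtol * |b|
def np_style_isclose(a, b, rtol=1e-5, atol=1e-8):
    return abs(a - b) <= atol + rtol * abs(b)

# For values much smaller than atol, the relative term is swamped:
print(np_style_isclose(1e-10, 3e-10))  # True, despite a 200% relative difference
```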


Boost floating-point comparison
-------------------------------

The Boost project ( [3]_ ) provides a floating-point comparison
function. It is a symmetric approach, with both "weak" (larger of the
two relative errors) and "strong" (smaller of the two relative errors)
options. This proposal uses the Boost "strong" approach. There is no
need to complicate the API by providing the option to select between
methods whose results will be similar in most cases, and the user is
unlikely to know which to select in any case.
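The distinction only matters for large tolerances, which can be seen
directly with an illustrative sketch of the two symmetric tests:

```python
def weak(a, b, tol):    # larger of the two relative errors
    return abs(a - b) <= tol * max(abs(a), abs(b))

def strong(a, b, tol):  # smaller of the two relative errors
    return abs(a - b) <= tol * min(abs(a), abs(b))

# A large tolerance separates the two:
print(weak(9.0, 10.0, 0.1), strong(9.0, 10.0, 0.1))  # True False
# For small tolerances they agree:
print(weak(1.0, 1.0 + 1e-9, 1e-8) == strong(1.0, 1.0 + 1e-9, 1e-8))  # True
```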


Alternate Proposals
-------------------


A Recipe
'''''''''

The primary alternate proposal was to not provide a standard library
function at all, but rather to provide a recipe for users to refer to.
This would have the advantage that the recipe could present and
explain the various options, letting the user select the one most
appropriate. However, it would require anyone needing such a test to,
at the very least, copy the function into their code base and select
a comparison method to use.

In addition, adding the function to the standard library allows it to
be used in the ``unittest.TestCase.assertIsClose()`` method, providing
a substantial convenience to those using unittest.


``zero_tol``
''''''''''''

One possibility was to provide a zero tolerance parameter, rather than
the absolute tolerance parameter. This would be an absolute tolerance
that would only be applied in the case of one of the arguments being
exactly zero. This would have the advantage of retaining the full
relative tolerance behavior for all non-zero values, while allowing
tests against zero to work. However, it would also result in the
potentially surprising result that a small value could be "close" to
zero, but not "close" to an even smaller value. e.g., 1e-10 is "close"
to zero, but not "close" to 1e-11.
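A sketch of that hypothetical variant (``zero_tol`` is not part of
this proposal; the function here is purely illustrative) makes the
surprise concrete:

```python
def is_close_zero_tol(a, b, rel_tol=1e-9, zero_tol=1e-8):
    # Absolute test only when one argument is exactly zero:
    if a == 0.0 or b == 0.0:
        return abs(a - b) <= zero_tol
    return abs(a - b) <= rel_tol * min(abs(a), abs(b))

print(is_close_zero_tol(1e-10, 0.0))    # True: "close" to zero
print(is_close_zero_tol(1e-10, 1e-11))  # False: not "close" to a smaller value
```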


No absolute tolerance
'''''''''''''''''''''

Given the issues with comparing to zero, another possibility would
have been to provide only a relative tolerance, and let every
comparison to zero fail. In this case, the user would need to do a
simple absolute test, ``abs(val) < zero_tol``, whenever the
comparison involved zero.

However, this would not allow the same call to be used for a sequence
of values, such as in a loop or comprehension, or in the
``TestCase.assertClose()`` method, making the function far less
useful. It is noted that the default ``abs_tolerance=0.0`` achieves
the same effect if the default is not overridden.

Other tests
''''''''''''

The other tests considered are all discussed in the Relative Error
section above.


References
==========

.. [1] Python-ideas list discussion threads

   https://mail.python.org/pipermail/python-ideas/2015-January/030947.html

   https://mail.python.org/pipermail/python-ideas/2015-January/031124.html

   https://mail.python.org/pipermail/python-ideas/2015-January/031313.html

.. [2] Wikipedia page on relative difference

   http://en.wikipedia.org/wiki/Relative_change_and_difference

.. [3] Boost project floating-point comparison algorithms

   http://www.boost.org/doc/libs/1_35_0/libs/test/doc/components/test_tools/floating_point_comparison.html

.. Bruce Dawson's discussion of floating-point comparison

   https://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/


Copyright
=========

This document has been placed in the public domain.


..
   Local Variables:
   mode: indented-text
   indent-tabs-mode: nil
   sentence-end-double-space: t
   fill-column: 70
   coding: utf-8
   End:

-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

From guido at python.org  Thu Feb  5 18:56:58 2015
From: guido at python.org (Guido van Rossum)
Date: Thu, 5 Feb 2015 09:56:58 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
Message-ID: <CAP7+vJKGeS_WF1yuwjGZEZkQWvYPMUSos7s65KfEmX=mLL6KMg@mail.gmail.com>

I think this looks good and I hope it'll get much approval. Thanks for
hanging in there! I did a quick review:

- Which module should this go into? (I think math, but I don't think you
state it.)

- Sometimes the default relative tolerance is described as 1e-8, sometimes
as 1e-9.

- The formula with "abs(a-b) <= max( ... min(..." is missing an
all-important close parenthesis. I presume it goes before the last comma.
:-)

- What if a tolerance is inf or nan? What if all args are inf or nan?

- I'd argue for allowing the tolerances to be floats even if a and b are
Decimals. There's not much difference in the outcome. Also, what if exactly
one of a and b is Decimal?

- Your language about complex confuses me. What is z? I'd expect the
following behavior:
  - tolerances cannot be complex
  - abs(a-b) is the right thing to use
  Or maybe you meant to say that abs() of the tolerances will be used if
they are complex? That makes some sense in case someone just mindlessly
passes a real cast to complex (i.e. with a zero imaginary part, or
near-zero in case it's the outcome of a very fuzzy computation). But still.
Maybe you can just give the formula to be used for complex args, if it is
in fact any different from the one for reals?

- For unittests are you proposing to replace assertAlmostEquals or
deprecate that and add a new assertIsCloseTo? (I hope the latter.)

- "there as no benefit" -- as->was

-- 
--Guido van Rossum (python.org/~guido)

From Nikolaus at rath.org  Thu Feb  5 19:46:32 2015
From: Nikolaus at rath.org (Nikolaus Rath)
Date: Thu, 05 Feb 2015 10:46:32 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 (Chris Barker's message of "Wed, 4 Feb 2015 16:48:41 -0800")
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
Message-ID: <87oap8fb7b.fsf@thinkpad.rath.org>

Chris Barker <chris.barker-32lpuo7BZBA at public.gmane.org> writes:
> Modulo error checking, etc, the function will return the result of::
>
>   abs(a-b) <= max( rel_tolerance * min(abs(a), abs(b), abs_tolerance )

I think you're missing a closing parenthesis here.


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

             “Time flies like an arrow, fruit flies like a Banana.”

From Nikolaus at rath.org  Thu Feb  5 19:49:57 2015
From: Nikolaus at rath.org (Nikolaus Rath)
Date: Thu, 05 Feb 2015 10:49:57 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 (Chris Barker's message of "Wed, 4 Feb 2015 16:48:41 -0800")
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
Message-ID: <87lhkcfb1m.fsf@thinkpad.rath.org>

Chris Barker <chris.barker-32lpuo7BZBA at public.gmane.org> writes:
> 1) ``abs(a-b) <= tol*abs(a)``
>
> 2) ``abs(a-b) <= tol * max( abs(a), abs(b) )``
>
> 3) ``abs(a-b) <= tol * min( abs(a), abs(b) )``
>
> 4) ``abs(a-b) <= tol * (a + b)/2``
>
> NOTE: (2) and (3) can also be written as:
>
> 2) ``(abs(a-b) <= tol*abs(a)) or (abs(a-b) <= tol*abs(a))``

I think the last 'a' should be a 'b'?

> 3) ``(abs(a-b) <= tol*abs(a)) and (abs(a-b) <= tol*abs(a))``

Same here?


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

             “Time flies like an arrow, fruit flies like a Banana.”

From ron3200 at gmail.com  Thu Feb  5 19:54:39 2015
From: ron3200 at gmail.com (Ron Adam)
Date: Thu, 05 Feb 2015 12:54:39 -0600
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <87lhkcfb1m.fsf@thinkpad.rath.org>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <87lhkcfb1m.fsf@thinkpad.rath.org>
Message-ID: <mb0e9g$po9$1@ger.gmane.org>




From ron3200 at gmail.com  Thu Feb  5 20:08:52 2015
From: ron3200 at gmail.com (Ron Adam)
Date: Thu, 05 Feb 2015 13:08:52 -0600
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <mb0e9g$po9$1@ger.gmane.org>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <87lhkcfb1m.fsf@thinkpad.rath.org> <mb0e9g$po9$1@ger.gmane.org>
Message-ID: <mb0f45$835$1@ger.gmane.org>

I meant for this not to send... LOL.

But since it did..


>> 2) ``abs(a-b) <= tol * max( abs(a), abs(b) )``
>>
>> 3) ``abs(a-b) <= tol * min( abs(a), abs(b) )``


>> > 2) ``(abs(a-b) <= tol*abs(a)) or (abs(a-b) <= tol*abs(a))``
>
> I think the last 'a' should be a 'b'?

>> > 3) ``(abs(a-b) <= tol*abs(a)) and (abs(a-b) <= tol*abs(a))``
>
> Same here?

Yes, I think they both should be 'b'.  Otherwise both sides are the same.

-Ron



From chris.barker at noaa.gov  Thu Feb  5 20:39:43 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Thu, 5 Feb 2015 11:39:43 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CAP7+vJKGeS_WF1yuwjGZEZkQWvYPMUSos7s65KfEmX=mLL6KMg@mail.gmail.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <CAP7+vJKGeS_WF1yuwjGZEZkQWvYPMUSos7s65KfEmX=mLL6KMg@mail.gmail.com>
Message-ID: <CALGmxEKmW53QWQhrCen4RPFrd=xCQF4k8qKKLUCr04L+dxWt9g@mail.gmail.com>

I've updated the copy on GitHub with these notes.


> - Which module should this go into? (I think math, but I don't think you
> state it.)
>

added to text.


> - Sometimes the default relative tolerance is described as 1e-8, sometimes
> as 1e-9.
>

that's what I get for changing it -- fixed.

> - The formula with "abs(a-b) <= max( ... min(..." is missing an
> all-important close parenthesis. I presume it goes before the last comma.
> :-)
>

fixed.


> - What if a tolerance is inf or nan? What if all args are inf or nan?
>

0.0 < rel_tol < 1.0

I've enforced this in the sample code -- it's not strictly necessary, but
what does a negative relative tolerance mean, anyway, and a tolerance over
1.0 has some odd behaviour with zero and could hardly be called "close"
anyway. Text added to make this clear.

If a and/or b are inf or nan, it follows IEEE 754 (which mostly "just
works" with no special casing in the code).
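Here's a rough sketch of both points together (illustrative only, not
the exact sample code from the PEP draft):

```python
def is_close(a, b, rel_tol=1e-9, abs_tol=0.0):
    # The enforced bound discussed above:
    if not 0.0 < rel_tol < 1.0:
        raise ValueError("rel_tol must be between 0.0 and 1.0")
    return abs(a - b) <= max(rel_tol * min(abs(a), abs(b)), abs_tol)

print(is_close(float('nan'), 1.0))    # False: nan compares unequal to everything
print(is_close(float('inf'), 1e308))  # False: nothing finite is close to inf
# Caveat: is_close(inf, inf) is also False here (inf - inf is nan), so a
# real implementation may want an early `a == b` shortcut.
```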


> - I'd argue for allowing the tolerances to be floats even if a and b are
> Decimals. There's not much difference in the outcome. Also, what if exactly
> one of a and b is Decimal?
>

This resulted from tests failing with trying to convert a float to a
Decimal in the code. Though I can't find that problem right now. But maybe
type dispatch is a better way to handle that.


> - Your language about complex confuses me. What is z?
>

"standard" notation for an arbitrary complex value -- I've clarified the
language.


> I'd expect the following behavior:
>   - tolerances cannot be complex
>   - abs(a-b) is the right thing to use
>   Or maybe you meant to say that abs() of the tolerances will be used if
> they are complex? That makes some sense in case someone just mindlessly
> passes a real cast to complex (i.e. with a zero imaginary part, or
> near-zero in case it's the outcome of a very fuzzy computation). But still.
> Maybe you can just give the formula to be used for complex args, if it is
> in fact any different from the one for reals?
>

it's the same, with the exception of using isinf from cmath. At this point,
my example code multiplies the tolerance times the tested values, then
takes the absolute values -- so a complex tolerance will do complex
multiplication, then use the magnitude from that -- I'll have to think
about whether that is reasonable.

> - For unittests are you proposing to replace assertAlmostEquals or
> deprecate that and add a new assertIsCloseTo? (I hope the latter.
>

Add an assertIsCloseTo -- whether assertAlmostEquals should be deprecated I
have no opinion on. Though it is useful, as it acts over a sequence -- I'm
guessing folks will want to keep it. Then we'll need them to reference each
other in the docs.

 On Thu, Feb 5, 2015 at 10:46 AM, Nikolaus Rath <Nikolaus at rath.org> wrote:

> > 2) ``(abs(a-b) <= tol*abs(a)) or (abs(a-b) <= tol*abs(a))``
>
> I think the last 'a' should be a 'b'?


indeed -- fixed.

Thanks,

-Chris

-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

From marco.buttu at gmail.com  Thu Feb  5 21:14:59 2015
From: marco.buttu at gmail.com (Marco Buttu)
Date: Thu, 05 Feb 2015 21:14:59 +0100
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CAP7+vJKGeS_WF1yuwjGZEZkQWvYPMUSos7s65KfEmX=mLL6KMg@mail.gmail.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <CAP7+vJKGeS_WF1yuwjGZEZkQWvYPMUSos7s65KfEmX=mLL6KMg@mail.gmail.com>
Message-ID: <54D3CF43.70600@oa-cagliari.inaf.it>

On 05/02/2015 18:56, Guido van Rossum wrote:

> - Which module should this go into? (I think math, but I don't think 
> you state it.)

In that case it should be isclose(), for consistency with the other 
math.is* functions. But wherever we put it, I would suggest changing it 
to isclose(), because in the standard library we usually have 
issomething() instead of is_something().

-- 
Marco Buttu

INAF-Osservatorio Astronomico di Cagliari
Via della Scienza n. 5, 09047 Selargius (CA)
Phone: 070 711 80 217
Email: mbuttu at oa-cagliari.inaf.it


From chris.barker at noaa.gov  Thu Feb  5 22:03:52 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Thu, 5 Feb 2015 13:03:52 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <54D3CF43.70600@oa-cagliari.inaf.it>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <CAP7+vJKGeS_WF1yuwjGZEZkQWvYPMUSos7s65KfEmX=mLL6KMg@mail.gmail.com>
 <54D3CF43.70600@oa-cagliari.inaf.it>
Message-ID: <CALGmxEKgMG8WNuPXMHg+OEbj6tRsapi1McTRWVMmiEX=PiwD+w@mail.gmail.com>

On Thu, Feb 5, 2015 at 12:14 PM, Marco Buttu <marco.buttu at gmail.com> wrote:

> On 05/02/2015 18:56, Guido van Rossum wrote:
>
>  - Which module should this go into? (I think math, but I don't think you
>> state it.)
>>
>
> In that case it should be isclose(), for consistency with the other
> math.is* functions. But wherever we put it, I would suggest to change to
> isclose() because in the standard library we usually have issomething()
> instead of is_something().


I was going with PEP8: "Function names should be lowercase, with words
separated by underscores as necessary to improve readability." But
consistency with other functions in the lib (or specifically in the same
module) is probably better, yes.

-Chris




>
> --
> Marco Buttu
>
> INAF-Osservatorio Astronomico di Cagliari
> Via della Scienza n. 5, 09047 Selargius (CA)
> Phone: 070 711 80 217
> Email: mbuttu at oa-cagliari.inaf.it
>
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>



-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

From ethan at stoneleaf.us  Thu Feb  5 22:33:36 2015
From: ethan at stoneleaf.us (Ethan Furman)
Date: Thu, 05 Feb 2015 13:33:36 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CALGmxEKgMG8WNuPXMHg+OEbj6tRsapi1McTRWVMmiEX=PiwD+w@mail.gmail.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <CAP7+vJKGeS_WF1yuwjGZEZkQWvYPMUSos7s65KfEmX=mLL6KMg@mail.gmail.com>
 <54D3CF43.70600@oa-cagliari.inaf.it>
 <CALGmxEKgMG8WNuPXMHg+OEbj6tRsapi1McTRWVMmiEX=PiwD+w@mail.gmail.com>
Message-ID: <54D3E1B0.2030706@stoneleaf.us>

On 02/05/2015 01:03 PM, Chris Barker wrote:
> On Thu, Feb 5, 2015 at 12:14 PM, Marco Buttu <marco.buttu at gmail.com <mailto:marco.buttu at gmail.com>> wrote:
> 
>     On 05/02/2015 18:56, Guido van Rossum wrote:
> 
>         - Which module should this go into? (I think math, but I don't think you state it.)
> 
> 
>     In that case it should be isclose(), for consistency with the other math.is <http://math.is>* functions. But
>     wherever we put it, I would suggest to change to isclose() because in the standard library we usually have
>     issomething() instead of is_something().
> 
> 
> I was going with PEP8: "Function names should be lowercase, with words separated by underscores as necessary to improve
> readability." But consistency with other functions in the lib (or specifically in the same module) is probably better, yes.

PEP8 on consistency [1] (second paragraph):

   A style guide is about consistency. Consistency with this style guide is
   important. Consistency within a project is more important. Consistency
   within one module or function is most important.

[1]  https://www.python.org/dev/peps/pep-0008/#id10

--
~Ethan~


From mistersheik at gmail.com  Wed Feb  4 22:14:14 2015
From: mistersheik at gmail.com (Neil Girdhar)
Date: Wed, 4 Feb 2015 13:14:14 -0800 (PST)
Subject: [Python-ideas] Add orderedset as set(iterable, *,
 ordered=False) and similarly for frozenset.
Message-ID: <cbe9ff04-1f93-45ae-8236-9cea7ca02473@googlegroups.com>

Hundreds of people want an orderedset 
(http://stackoverflow.com/questions/1653970/does-python-have-an-ordered-set) 
and yet the ordered sets that are in PyPI (oset, ordered-set) are 
incomplete.  Please consider adding an ordered set that has *all* of the 
same functionality as set.

The only major design decision is (I think) whether to store the elements 
in a linked list or dynamic vector.

My personal preference is to specify the ordered set via a flag to the 
constructor, which can be intercepted by __new__ to return a different 
object type as necessary.
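That __new__ interception might look something like this; ``Set``,
``OrderedSet``, and the toy storage here are purely illustrative stand-ins,
not an existing API:

```python
class Set:
    """Toy stand-in for the built-in set, to illustrate the dispatch only."""
    def __new__(cls, iterable=(), *, ordered=False):
        # Intercept the flag and return a different type when asked.
        if ordered and cls is Set:
            return super().__new__(OrderedSet)
        return super().__new__(cls)

    def __init__(self, iterable=(), *, ordered=False):
        self._items = list(dict.fromkeys(iterable))  # dedupe, keep insertion order

class OrderedSet(Set):
    def __iter__(self):
        return iter(self._items)

print(type(Set([3, 1, 3, 2], ordered=True)).__name__)  # OrderedSet
print(type(Set([3, 1, 2])).__name__)                   # Set
```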

(If anyone wants to add a complete ordered set to PyPI that would also be 
very useful for me!)

Best,

Neil

From chris.barker at noaa.gov  Thu Feb  5 22:59:42 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Thu, 5 Feb 2015 13:59:42 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <54D3E1B0.2030706@stoneleaf.us>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <CAP7+vJKGeS_WF1yuwjGZEZkQWvYPMUSos7s65KfEmX=mLL6KMg@mail.gmail.com>
 <54D3CF43.70600@oa-cagliari.inaf.it>
 <CALGmxEKgMG8WNuPXMHg+OEbj6tRsapi1McTRWVMmiEX=PiwD+w@mail.gmail.com>
 <54D3E1B0.2030706@stoneleaf.us>
Message-ID: <CALGmxEL=6HtWyJoW64yV+6R2pradBq2b8mLfWWOwkTdB=6q+rg@mail.gmail.com>

On Thu, Feb 5, 2015 at 1:33 PM, Ethan Furman <ethan at stoneleaf.us> wrote:


> > I was going with PEP8: "Function names should be lowercase, with words
> separated by underscores as necessary to improve
> > readability." But consistency with other functions in the lib (or
> specifically in the same module) is probably better, yes.
>
> PEP8 on consistency [1] (second paragraph):
>
>    A style guide is about consistency. Consistency with this style guide is
>    important. Consistency within a project is more important. Consistency
>    within one module or function is most important.
>

Exactly -- so isclose() is the way to go.

-Chris





> [1]  https://www.python.org/dev/peps/pep-0008/#id10
>
> --
> ~Ethan~
>
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>



-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

From chris.barker at noaa.gov  Thu Feb  5 23:02:01 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Thu, 5 Feb 2015 14:02:01 -0800
Subject: [Python-ideas] Add orderedset as set(iterable, *,
 ordered=False) and similarly for frozenset.
In-Reply-To: <cbe9ff04-1f93-45ae-8236-9cea7ca02473@googlegroups.com>
References: <cbe9ff04-1f93-45ae-8236-9cea7ca02473@googlegroups.com>
Message-ID: <CALGmxEKJje9qLiN+jP-iK0c-G-Q3n-rRGCvdqriDqVX-rEWcgw@mail.gmail.com>

Interesting -- somehow this message came through on the google group mirror
of python-ideas only -- so replying to it barfed.

Here's my reply again, with the proper address this time:

On Wed, Feb 4, 2015 at 1:14 PM, Neil Girdhar <mistersheik at gmail.com> wrote:

> Hundreds of people want an orderedset (
> http://stackoverflow.com/questions/1653970/does-python-have-an-ordered-set)
> and yet the ordered sets that are in PyPI (oset, ordered-set) are
> incomplete.  Please consider adding an ordered set that has *all* of the
> same functionality as set.
>


> (If anyone wants to add a complete ordered set to PyPI that would also be
> very useful for me!)
>

It seems like a good way to go here is to contribute to one of the projects
already on PyPI -- and once you like it, propose that it be added to the
standard library.

And can you leverage the OrderedDict implementation already in the std lib?
Or maybe better, the new one in PyPy?

-Chris



-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

From amauryfa at gmail.com  Thu Feb  5 23:04:49 2015
From: amauryfa at gmail.com (Amaury Forgeot d'Arc)
Date: Thu, 5 Feb 2015 23:04:49 +0100
Subject: [Python-ideas] Add orderedset as set(iterable, *,
 ordered=False) and similarly for frozenset.
In-Reply-To: <CALGmxEKJje9qLiN+jP-iK0c-G-Q3n-rRGCvdqriDqVX-rEWcgw@mail.gmail.com>
References: <cbe9ff04-1f93-45ae-8236-9cea7ca02473@googlegroups.com>
 <CALGmxEKJje9qLiN+jP-iK0c-G-Q3n-rRGCvdqriDqVX-rEWcgw@mail.gmail.com>
Message-ID: <CAGmFidZsd9JfQb3W9cRw-cob4pWT8y-HBMu_z0zwknxEQsCwDw@mail.gmail.com>

2015-02-05 23:02 GMT+01:00 Chris Barker <chris.barker at noaa.gov>:

> Interesting -- somehow this message came through on the google group
> mirror of  python-ideas only -- so replying to it barfed.
>
> He's my reply again, with the proper address this time:
>
> On Wed, Feb 4, 2015 at 1:14 PM, Neil Girdhar <mistersheik at gmail.com>
>  wrote:
>
>> Hundreds of people want an orderedset (
>> http://stackoverflow.com/questions/1653970/does-python-have-an-ordered-set)
>> and yet the ordered sets that are in PyPI (oset, ordered-set) are
>> incomplete.  Please consider adding an ordered set that has *all* of the
>> same functionality as set.
>>
>
>
>> (If anyone wants to add a complete ordered set to PyPI that would also be
>> very useful for me!)
>>
>
> It seems like a good way to go here is to contribute to one of the
> projects already on PyPI -- and once you like it, propose that it be added
> to the standard library.
>
> And can you leverage the OrderedDict implementation already in the std
> lib? Or maybe better, the new one in PyPy?
>

PyPy dicts and sets (the built-in ones) are already ordered by default.

-- 
Amaury Forgeot d'Arc

From chris.barker at noaa.gov  Thu Feb  5 23:07:00 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Thu, 5 Feb 2015 14:07:00 -0800
Subject: [Python-ideas] Add orderedset as set(iterable, *,
 ordered=False) and similarly for frozenset.
In-Reply-To: <CAGmFidZsd9JfQb3W9cRw-cob4pWT8y-HBMu_z0zwknxEQsCwDw@mail.gmail.com>
References: <cbe9ff04-1f93-45ae-8236-9cea7ca02473@googlegroups.com>
 <CALGmxEKJje9qLiN+jP-iK0c-G-Q3n-rRGCvdqriDqVX-rEWcgw@mail.gmail.com>
 <CAGmFidZsd9JfQb3W9cRw-cob4pWT8y-HBMu_z0zwknxEQsCwDw@mail.gmail.com>
Message-ID: <CALGmxEJ75MrMXWXRQ3uAWWbNrjSUbLKJh5+6ffocSShG9EEpBw@mail.gmail.com>

On Thu, Feb 5, 2015 at 2:04 PM, Amaury Forgeot d'Arc <amauryfa at gmail.com>
wrote:

> And can you leverage the OrderedDict implementation already in the std
>> lib? Or maybe better, the new one in PyPy?
>>
>
> PyPy dicts and sets (the built-in ones) are already ordered by default.
>

Exactly ;-)

-Chris





-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

From marco.buttu at gmail.com  Thu Feb  5 23:36:50 2015
From: marco.buttu at gmail.com (Marco Buttu)
Date: Thu, 05 Feb 2015 23:36:50 +0100
Subject: [Python-ideas] [PEP8] Predicate consistency
Message-ID: <54D3F082.8080803@oa-cagliari.inaf.it>

Hi all, the PEP 8 says that function and method names should be 
lowercase, with words separated by underscores as necessary to improve 
readability. Maybe it could be a good idea to specify in the PEP8 that 
the predicates should not be separated by underscores, because it is 
pretty annoying to see this kind of inconsistency:

     str.isdigit()
     float.is_integer()

-- 
Marco Buttu

INAF-Osservatorio Astronomico di Cagliari
Via della Scienza n. 5, 09047 Selargius (CA)
Phone: 070 711 80 217
Email: mbuttu at oa-cagliari.inaf.it


From abarnert at yahoo.com  Thu Feb  5 23:46:21 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Thu, 5 Feb 2015 22:46:21 +0000 (UTC)
Subject: [Python-ideas] PEP 485: A Function for testing
	approximate	equality
In-Reply-To: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
Message-ID: <166808118.305191.1423176381768.JavaMail.yahoo@mail.yahoo.com>

First, +0.5 overall. The only real downside is that it takes math further away from C math.h, which is not much of a downside.

And now, the nits:

On Thursday, February 5, 2015 9:14 AM, Chris Barker <chris.barker at noaa.gov> wrote:

>Non-float types
>---------------
>
>The primary use-case is expected to be floating point numbers.
>However, users may want to compare other numeric types similarly. In
>theory, it should work for any type that supports ``abs()``,
>comparisons, and subtraction.  The code will be written and tested to

>accommodate these types:

Surely the type also has to support multiplication (or division, if you choose to implement it that way), right?

Also, are you sure your implementation doesn't need any explicit isinf/isnan checks? You mentioned something about cmath.isnan for complex numbers.

Also, what about types like datetime, which support subtraction, but where the result of subtraction is a different type, and only the latter supports abs and multiplication? An isclose method can be written that handles those types properly, but one can also be written that doesn't. I don't think it's important to support it (IIRC, numpy.isclose doesn't work on datetimes...), but it might be worth mentioning that, as a consequence of the exact rule being used, datetimes won't work (possibly even saying "just as with np.isclose", if I'm remembering right).
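To make the operation set concrete, here is a minimal sketch of my own
(not the PEP's reference code) of the kind of symmetric test being
discussed, exercised with Fraction, which supports all four operations:

```python
from fractions import Fraction

def isclose_sketch(a, b, rel_tol=1e-9, abs_tol=0.0):
    # Requires abs(), subtraction, comparison -- and multiplication,
    # to scale the relative tolerance by the magnitudes involved.
    diff = abs(a - b)
    return diff <= rel_tol * max(abs(a), abs(b)) or diff <= abs_tol

# Fraction works: Fraction * float coerces to float, and mixed
# Fraction/float comparison is well-defined.
print(isclose_sketch(Fraction(1, 3), Fraction(1, 3)))   # True
print(isclose_sketch(Fraction(1, 3), Fraction(1, 2)))   # False
```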
>unittest assertion
>-------------------


Presumably this is just copying assertAlmostEqual and replacing the element comparisons with a call to isclose, passing the params along?


>How much difference does it make?
>---------------------------------

This section relies on transformations that are valid for reals but not for floats. In particular:

>  delta = tol * (a-b)
>
>or::
>
>  delta / tol = (a-b)


If a and b are subnormals, the multiplication could underflow to 0, but the division won't; if delta is a very large number, the division could overflow to inf but the multiplication won't. (I'm assuming delta < 1.0, of course.) Your later substitution has the same issue. I think your ultimate conclusion is right, but your proof doesn't really prove it unless you take the extreme cases into account.
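The asymmetry is easy to demonstrate directly (assuming IEEE 754
binary64 floats, as CPython uses on all common platforms):

```python
tiny = 5e-324   # smallest positive subnormal double
big = 1e300
tol = 1e-9

# Multiplication underflows where the equivalent division would not:
print(tol * tiny)    # 0.0 -- the true value ~5e-333 is below the subnormal range
print(tiny / tol)    # nonzero -- ~5e-315 is still representable

# ...and division overflows where the multiplication would not:
print(big / tol)     # inf -- the true value ~1e309 exceeds the float range
print(big * tol)     # finite, ~1e+291
```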
>The case that uses the arithmetic mean of the two values requires that
>the value be either added together before dividing by 2, which could
>result in extra overflow to inf for very large numbers, or require
>each value to be divided by two before being added together, which

>could result in underflow to -inf for very small numbers.

Dividing by two results in underflow to 0, not -inf. (Adding before dividing can result in overflow to -inf just as it can to inf, of course, but I don't think that needs mentioning.) 

Also, dividing by two only results in underflow to 0 for two numbers, +/-5e-324, and I don't think there's any case where that underflow can cause a problem where you wouldn't already underflow to 0 unless tol >= 1.0, so I'm not sure this is actually a problem to worry about. That being said, your final conclusion is again hard to argue with: the benefit is so small it's not really worth _any_ significant cost, even the cost of having to explain why there's no cost. :)

>Relative Tolerance Default
>--------------------------


This section depends on the fact that a Python float has about 1e-16 precision. But technically, that isn't true; it has about sys.float_info.epsilon precision, which is up to the implementation; CPython leaves it up to the C implementation's double type, and C89 leaves that up to the platform. That's why sys.float_info.epsilon is available in the first place--because you can't know it a priori.


In practice, I don't know of any modern platforms that don't use IEEE double or something close enough, so it's always going to be 2.2e-16-ish. And I don't think you want the default tolerance to be platform-dependent. And I don't think you want to put qualifiers in all the half-dozen places you mention the 1e-9. I just think it's worth mentioning in a parenthetical somewhere in this paragraph, like maybe:

>The relative tolerance required for two values to be considered
>"close" is entirely use-case dependent. Nevertheless, the relative
>tolerance needs to be less than 1.0, and greater than 1e-16

>(approximate precision of a python float *on almost all platforms*).
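(For reference, the platform's actual precision is queryable at
runtime; on any IEEE 754 binary64 implementation it comes out to about
2.22e-16:)

```python
import sys

# Machine epsilon of the C double backing Python's float --
# the gap between 1.0 and the next representable float.
print(sys.float_info.epsilon)   # ~2.22e-16 on IEEE 754 doubles
```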
>Absolute tolerance default
>--------------------------
>
>The absolute tolerance value will be used primarily for comparing to
>zero. The absolute tolerance required to determine if a value is
>"close" to zero is entirely use-case dependent. There is also
>essentially no bounds to the useful range -- expected values would
>conceivably be anywhere within the limits of a python float.  Thus a

>default of 0.0 is selected.

Does that default work properly for the other supported types? Or do you need to use the default-constructed or zero-constructed value for some appropriate type? (Sorry for C++ terminology, but you know what I mean--and the C++ rules do work for these Python types.) Maybe it's better to just specify "zero" and leave it up to the implementation to do the trivial thing if possible or something more complicated if necessary, rather than specifying that it's definitely the float 0.0?

>Expected Uses
>=============
>
>The primary expected use case is various forms of testing -- "are the
>results computed near what I expect as a result?" This sort of test
>may or may not be part of a formal unit testing suite. Such testing
>could be used one-off at the command line, in an iPython notebook,
>part of doctests, or simple assets in an ``if __name__ == "__main__"``
>block.


Typo: "...or simple *asserts* in an..."

>The proposed unitest.TestCase assertion would have course be used in

>unit testing.

Typo: "...of course...".

From abarnert at yahoo.com  Thu Feb  5 23:49:11 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Thu, 5 Feb 2015 22:49:11 +0000 (UTC)
Subject: [Python-ideas] PEP 485: A Function for testing
	approximate	equality
In-Reply-To: <CALGmxEKmW53QWQhrCen4RPFrd=xCQF4k8qKKLUCr04L+dxWt9g@mail.gmail.com>
References: <CALGmxEKmW53QWQhrCen4RPFrd=xCQF4k8qKKLUCr04L+dxWt9g@mail.gmail.com>
Message-ID: <1680611564.312707.1423176551740.JavaMail.yahoo@mail.yahoo.com>

On Thursday, February 5, 2015 11:47 AM, Chris Barker <chris.barker at noaa.gov> wrote:

>- For unittests are you proposing to replace assertAlmostEquals or deprecate that and add a new assertIsCloseTo? (I hope the latter.
>

>Add an assertIsCloseTo -- whether assertAlmostEquals should be deprecated I have no opinion on. Though it is useful, as it acts over a sequence -- I'm guessing folks will want to keep it.

Why not have assertIsCloseTo act over a sequence in exactly the same way as assertAlmostEquals, in which case there's no reason to keep the latter (except for existing unit tests that already use it, but presumably a long period of deprecation is sufficient there)?

From njs at pobox.com  Thu Feb  5 23:59:28 2015
From: njs at pobox.com (Nathaniel Smith)
Date: Thu, 5 Feb 2015 14:59:28 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CALGmxEKmW53QWQhrCen4RPFrd=xCQF4k8qKKLUCr04L+dxWt9g@mail.gmail.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <CAP7+vJKGeS_WF1yuwjGZEZkQWvYPMUSos7s65KfEmX=mLL6KMg@mail.gmail.com>
 <CALGmxEKmW53QWQhrCen4RPFrd=xCQF4k8qKKLUCr04L+dxWt9g@mail.gmail.com>
Message-ID: <CAPJVwBkwaSRa961uMYkYihrpXSaFDgTLRkWCciYZaV9hKozLjw@mail.gmail.com>

On Thu, Feb 5, 2015 at 11:39 AM, Chris Barker <chris.barker at noaa.gov> wrote:
> Add an assertIsCloseTo -- whether assertAlmostEquals should be deprecated I
> have no opinion on. Though it is useful, as it acts over a sequence [...]

Are you sure? I don't see any special handling of sequences in its
docs or code...

-n

-- 
Nathaniel J. Smith -- http://vorpus.org

From solipsis at pitrou.net  Fri Feb  6 00:04:38 2015
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 6 Feb 2015 00:04:38 +0100
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
Message-ID: <20150206000438.1adeb478@fsol>

On Wed, 4 Feb 2015 16:48:41 -0800
Chris Barker <chris.barker at noaa.gov> wrote:
> Hi folks,
> 
> Time for Take 2 (or is that take 200?)
> 
> No, I haven't dropped this ;-) After the lengthy thread, I went through and
> tried to take into account all the thoughts and comments. The result is a
> PEP with a bunch more explanation, and some slightly different decisions.

I think it is a nuisance that this PEP doesn't work at zero by default.
If this is meant to ease the life of non-specialist users then it
should do the right thing out of the box, IMO.

Regards

Antoine.



From guido at python.org  Fri Feb  6 00:05:59 2015
From: guido at python.org (Guido van Rossum)
Date: Thu, 5 Feb 2015 15:05:59 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CAPJVwBkwaSRa961uMYkYihrpXSaFDgTLRkWCciYZaV9hKozLjw@mail.gmail.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <CAP7+vJKGeS_WF1yuwjGZEZkQWvYPMUSos7s65KfEmX=mLL6KMg@mail.gmail.com>
 <CALGmxEKmW53QWQhrCen4RPFrd=xCQF4k8qKKLUCr04L+dxWt9g@mail.gmail.com>
 <CAPJVwBkwaSRa961uMYkYihrpXSaFDgTLRkWCciYZaV9hKozLjw@mail.gmail.com>
Message-ID: <CAP7+vJ+uoSD5L4UOwFhX_qBDvJv6qEmK2F=DztZ2LWWsKzCCoA@mail.gmail.com>

On Thu, Feb 5, 2015 at 2:59 PM, Nathaniel Smith <njs at pobox.com> wrote:

> On Thu, Feb 5, 2015 at 11:39 AM, Chris Barker <chris.barker at noaa.gov>
> wrote:
> > Add an assertIsCloseTo -- whether assertAlmostEquals should be
> deprecated I
> > have no opinion on. Though it is useful, as it acts over a sequence [...]
>
> Are you sure? I don't see any special handling of sequences in its
> docs or code...
>

The one in unittest doesn't have this feature. However, there's an
assertApproxEqual defined locally in test/test_statistics.py that does.
Since assertApproxEqual is only meant for local use in
test_statistics.py, we should leave it alone. The one in unittest is
pretty bad but widely used; deprecating it won't have any teeth.

-- 
--Guido van Rossum (python.org/~guido)

From steve at pearwood.info  Fri Feb  6 01:10:30 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 6 Feb 2015 11:10:30 +1100
Subject: [Python-ideas] Add orderedset as set(iterable, *,
	ordered=False) and similarly for frozenset.
In-Reply-To: <cbe9ff04-1f93-45ae-8236-9cea7ca02473@googlegroups.com>
References: <cbe9ff04-1f93-45ae-8236-9cea7ca02473@googlegroups.com>
Message-ID: <20150206001029.GZ2498@ando.pearwood.info>

On Wed, Feb 04, 2015 at 01:14:14PM -0800, Neil Girdhar wrote:
> Hundreds of people want an orderedset 
> (http://stackoverflow.com/questions/1653970/does-python-have-an-ordered-set) 
> and yet the ordered sets that are in PyPI (oset, ordered-set) are 
> incomplete.  Please consider adding an ordered set that has *all* of the 
> same functionality as set.

I think you have this backwards: more likely, first you find a complete 
implementation on PyPI, then you propose it for the standard library.


> The only major design decision is (I think) whether to store the elements 
> in a linked list or dynamic vector.

That's an implementation detail and therefore irrelevant (in the sense 
that it is subject to change without affecting code that uses ordered 
sets).

I think there are plenty of other major design decisions to be made. 
What's the union between a regular set and an ordered set? Is there a 
frozen ordered set? Do ordered sets compare unequal if they differ only 
in order (like lists)?


> My personal preference is to specify the ordered set via a flag to the 
> constructor, which can be intercepted by __new__ to return a different 
> object type as necessary.

-1 on a flag to the constructor.

I don't know if this rule (well, guideline) has a formal name anywhere, 
so I'm going to call it Guido's Rule: no constant mode arguments. I 
think this is a powerful design principle.

Functions or methods (including constructors) should not take arguments 
which, when given, are invariably supplied as constants. This especially 
goes for "mode" arguments, usually a flag, where the implementation of the 
function then looks something like this:

def thing(arg, flag=True):
    if flag:
        return this(arg)
    else:
        return that(arg)

In cases like this, you should expose this and that as distinct 
functions.
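As a concrete (and entirely hypothetical) illustration of the split,
using an invertible pair of operations instead of a mode flag:

```python
import math

# Instead of circle_measure(x, inverse=False), expose two functions:
def circle_area(radius):
    """Area of a circle from its radius."""
    return math.pi * radius ** 2

def circle_radius(area):
    """Radius of a circle from its area -- the inverse operation."""
    return math.sqrt(area / math.pi)
```

Each call site now says what it means, and neither function needs a
branch on a flag that is always passed as a constant.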

(If somebody -- Guido? -- would like to propose a cleaner explanation 
for the rule, I suggest we put it somewhere in the documentation. Perhaps 
in the FAQs.)

Examples:

(1) We have list and tuple as separate functions, not sequences(*args, 
*, mutable=False).

(2) str and unicode (str and bytes), not str(obj, unicode=True).

(3) pairs of (sin, asin), (cos, acos), (tan, atan) functions, rather than 
sin(x, inverse=False). For that matter, the same goes for sin/sinh etc.


Note that this is a "should not", not "must not": exceptions are allowed 
where the API is clearly improved by it:

sorted(seq, reverse=True) rather than sorted/reverse_sorted.


> (If anyone wants to add a complete ordered set to PyPI that would also be 
> very useful for me!)

I'm sure it would be very useful to lots of people. Perhaps you could 
scratch your own itch and solve their problem at the same time?


-- 
Steve

From chris.barker at noaa.gov  Fri Feb  6 01:11:05 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Thu, 5 Feb 2015 16:11:05 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <1680611564.312707.1423176551740.JavaMail.yahoo@mail.yahoo.com>
References: <CALGmxEKmW53QWQhrCen4RPFrd=xCQF4k8qKKLUCr04L+dxWt9g@mail.gmail.com>
 <1680611564.312707.1423176551740.JavaMail.yahoo@mail.yahoo.com>
Message-ID: <CALGmxEKdjuBPYXCz8GoP9VDN1zPP+2YdyDXESv6AiFT2i=-=uA@mail.gmail.com>

On Thu, Feb 5, 2015 at 2:49 PM, Andrew Barnert <abarnert at yahoo.com> wrote:
>
> >- For unittests are you proposing to replace assertAlmostEquals or
> deprecate that and add a new assertIsCloseTo? (I hope the latter.
> >
>
> >Add an assertIsCloseTo -- whether assertAlmostEquals should be deprecated
> I have no opinion on. Though it is useful, as it acts over a sequence --
> I'm guessing folks will want to keep it.
>
> Why not have assertIsCloseTo act over a sequence in exactly the same way
> as assertAlmostEquals, in which case there's no reason to keep the latter
> (except for existing unit tests that already use it, but presumably a long
> period of deprecation is sufficient there)?
>

Because assertAlmostEquals() is an absolute tolerance test -- not a
relative one -- and it lets (encourages?) users to specify a number of
decimal places (or, optionally, an explicit absolute tolerance). I
assume this is all there because someone found it useful.

Again, I have little opinion about all this -- I don't like/use unittest
anyway. I'm hoping someone else will take up the mantle of what to do with
unittest...

-Chris



-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

From barry at python.org  Fri Feb  6 01:12:23 2015
From: barry at python.org (Barry Warsaw)
Date: Thu, 5 Feb 2015 19:12:23 -0500
Subject: [Python-ideas] [PEP8] Predicate consistency
References: <54D3F082.8080803@oa-cagliari.inaf.it>
Message-ID: <20150205191223.45837f66@anarchist.wooz.org>

On Feb 05, 2015, at 11:36 PM, Marco Buttu wrote:

>Hi all, the PEP 8 says that function and method names should be lowercase,
>with words separated by underscores as necessary to improve
>readability. Maybe it could be a good idea to specify in the PEP8 that the
>predicates should not be separated by underscores, because it is pretty
>annoying to see this kind of inconsistency:
>
>     str.isdigit()
>     float.is_integer()

I don't think PEP 8 should change.  Even words in predicates should be
separated by an underscore.  But it's not worth it to go back and change old
APIs that have been in existence forever.

Cheers,
-Barry

From chris.barker at noaa.gov  Fri Feb  6 01:12:30 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Thu, 5 Feb 2015 16:12:30 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CAP7+vJ+uoSD5L4UOwFhX_qBDvJv6qEmK2F=DztZ2LWWsKzCCoA@mail.gmail.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <CAP7+vJKGeS_WF1yuwjGZEZkQWvYPMUSos7s65KfEmX=mLL6KMg@mail.gmail.com>
 <CALGmxEKmW53QWQhrCen4RPFrd=xCQF4k8qKKLUCr04L+dxWt9g@mail.gmail.com>
 <CAPJVwBkwaSRa961uMYkYihrpXSaFDgTLRkWCciYZaV9hKozLjw@mail.gmail.com>
 <CAP7+vJ+uoSD5L4UOwFhX_qBDvJv6qEmK2F=DztZ2LWWsKzCCoA@mail.gmail.com>
Message-ID: <CALGmxE+MkCEu7dH4scbkKp7-f1dhM0XwFPUmpXbxWFvqK=UcWw@mail.gmail.com>

On Thu, Feb 5, 2015 at 3:05 PM, Guido van Rossum <guido at python.org> wrote:

> Are you sure? I don't see any special handling of sequences in its
>> docs or code...
>>
>
> The one in unittest doesn't have this feature. However there's
> assertApproxEqual defined locally in the test/test_statistics.py package
> that does.
>

Indeed -- it looks like I confused those two -- sorry about that.

-Chris


> Since the other is only meant for local use in test_statistics.py, we
> should leave it alone. The one in unittest is pretty bad but widely used;
> deprecating it won't have any teeth.
>
> --
> --Guido van Rossum (python.org/~guido)
>




From chris.barker at noaa.gov  Fri Feb  6 01:24:21 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Thu, 5 Feb 2015 16:24:21 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <20150206000438.1adeb478@fsol>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <20150206000438.1adeb478@fsol>
Message-ID: <CALGmxEJXJQTGdpjttg2LNmjJVTrbuMW0bNiAC9-MFX65LJAORQ@mail.gmail.com>

On Thu, Feb 5, 2015 at 3:04 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:


> I think it is a nuisance that this PEP doesn't work at zero by default.
>

I think this is the biggest point of contention -- it's a hard problem.

If this is meant to ease the life of non-specialist users then it
> should do the right thing out of the box, IMO.
>

That's the trick -- there simply IS NO "right thing" to be done out of the
box. A relative comparison to zero is mathematically and computationally
impossible -- you need an absolute comparison. And to set a default for
that you need to know the general magnitude of the values your user is
going to have -- and we can't know that.

If we really believe that almost all people will be working with numbers of
around a magnitude of 1, then we could set a default -- but why in the world
are you using floating point numbers, with a range of about 1e-300 to 1e300,
in that case?
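The impossibility is mechanical, not a matter of taste: a purely
relative test scales its allowed difference by the magnitudes of the
inputs, so against zero the allowed difference collapses to zero. A
sketch of a PEP-style relative test (mine, not the PEP's actual code):

```python
def rel_close(a, b, rel_tol=1e-9):
    # The allowed difference scales with the larger magnitude.
    return abs(a - b) <= rel_tol * max(abs(a), abs(b))

# Against 0.0 this degenerates to exact equality: no nonzero value,
# however small, is "relatively close" to zero.
print(rel_close(0.0, 1e-300))     # False
print(rel_close(1e-300, 1e-300))  # True
```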

NOTE: numpy.allclose() uses an absolute tolerance default of 1e-08 -- so
someone thought that was a good idea. But as I've been thinking about all
this, I decided it was very dangerous. In fact, I just did a quick grep of
the unit tests in my current code base: 159 uses of allclose, and almost
none with an absolute tolerance specified -- now I need to go and check
those, most of them probably need a more carefully thought out value.

I (and my team) may just be one careless programmer, I guess, but I don't
think I'm the only one who tends to use the defaults and only goes back to
think about them if a test fails. And as you say, the "non-specialist
users" are the most likely to be careless.

-Chris





From guido at python.org  Fri Feb  6 01:25:20 2015
From: guido at python.org (Guido van Rossum)
Date: Thu, 5 Feb 2015 16:25:20 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CALGmxEKdjuBPYXCz8GoP9VDN1zPP+2YdyDXESv6AiFT2i=-=uA@mail.gmail.com>
References: <CALGmxEKmW53QWQhrCen4RPFrd=xCQF4k8qKKLUCr04L+dxWt9g@mail.gmail.com>
 <1680611564.312707.1423176551740.JavaMail.yahoo@mail.yahoo.com>
 <CALGmxEKdjuBPYXCz8GoP9VDN1zPP+2YdyDXESv6AiFT2i=-=uA@mail.gmail.com>
Message-ID: <CAP7+vJ+bhnee_TKYPhQ_=OB5xKMy7gDbOYKXuZfFeABUKWBcdw@mail.gmail.com>

On Thu, Feb 5, 2015 at 4:11 PM, Chris Barker <chris.barker at noaa.gov> wrote:

>
> On Thu, Feb 5, 2015 at 2:49 PM, Andrew Barnert <abarnert at yahoo.com> wrote:
>>
>> >- For unittests are you proposing to replace assertAlmostEquals or
>> deprecate that and add a new assertIsCloseTo? (I hope the latter.
>> >
>>
>> >Add an assertIsCloseTo -- whether assertAlmostEquals should be
>> deprecated I have no opinion on. Though it is useful, as it acts over a
>> sequence -- I'm guessing folks will want to keep it.
>>
>> Why not have assertIsCloseTo act over a sequence in exactly the same way
>> as assertAlmostEquals, in which case there's no reason to keep the latter
>> (except for existing unit tests that already use it, but presumably a long
>> period of deprecation is sufficient there)?
>>
>
> Because assertAlmostEquals() is an absolute tolerance test -- not a
> relative one -- and it lets (encourages?) users to specify number of
> decimal places (or optionally install a explicit absolute tolerance). I
> assume this is all there because someone found it useful.
>
> Again, I have little opinion about all this -- I don't like/use unitest
> anyway. I"m hoping someone else will take up the mantle of what to do with
> unittest...
>

OK, let's just drop the idea of adding anything to unittest/case.py from
the PEP. People can just write self.assertTrue(math.isclose(...)). Or if
they're using py.test they can write assert math.isclose(...).

-- 
--Guido van Rossum (python.org/~guido)

From rosuav at gmail.com  Fri Feb  6 01:40:57 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Fri, 6 Feb 2015 11:40:57 +1100
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CALGmxEJXJQTGdpjttg2LNmjJVTrbuMW0bNiAC9-MFX65LJAORQ@mail.gmail.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <20150206000438.1adeb478@fsol>
 <CALGmxEJXJQTGdpjttg2LNmjJVTrbuMW0bNiAC9-MFX65LJAORQ@mail.gmail.com>
Message-ID: <CAPTjJmp8moipNvsXL=buJxZaZPAE8sAUCud1aGWerEMGZtxVog@mail.gmail.com>

On Fri, Feb 6, 2015 at 11:24 AM, Chris Barker <chris.barker at noaa.gov> wrote:
> If we really believe that almost all people will be working with numbers of
> around a magnitude of 1, then we could set a default -- by why in the world
> are you using floating point numbers with a range of about 1e-300 to 1e300
> in that case?

Because 'float' is the one obvious way to handle non-integral numbers
in Python. If you simply put "x = 1.1" in your code, you get a float,
not a decimal.Decimal, not a fractions.Fraction, and not a complex. We
don't need separate data types for "numbers around 1" and "numbers
around 1e100"... not for the built-in float type, anyway. To the
greatest extent possible, stuff should "just work".

Unfortunately the abstraction will always leak, especially when you
work with numbers > 2**53 or < 2**-53, but certainly there's nothing
wrong with using a data type that gives you *more* range. In theory, a
future version of Python should be able to switch to IEEE 754 Quad
Precision and push the boundaries out, without breaking anyone's code.
Then people will be using a data type with a range of something like
1e-5000 to 1e5000... to store values close to 1. Ain't computing
awesome! (Obligatory XKCD link: http://xkcd.com/676/ )

ChrisA

From steve at pearwood.info  Fri Feb  6 01:44:07 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 6 Feb 2015 11:44:07 +1100
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CALGmxEKmW53QWQhrCen4RPFrd=xCQF4k8qKKLUCr04L+dxWt9g@mail.gmail.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <CAP7+vJKGeS_WF1yuwjGZEZkQWvYPMUSos7s65KfEmX=mLL6KMg@mail.gmail.com>
 <CALGmxEKmW53QWQhrCen4RPFrd=xCQF4k8qKKLUCr04L+dxWt9g@mail.gmail.com>
Message-ID: <20150206004406.GB2498@ando.pearwood.info>

On Thu, Feb 05, 2015 at 11:39:43AM -0800, Chris Barker wrote:

> > - What if a tolerance is inf or nan? What if all args are inf or nan?
> 
> 0.0 < rel_tol < 1.0)

I can just about see the point in restricting rel_tol to the closed 
interval 0...1, but not the open interval. Mathematically, setting the tolerance to 0 
should just degrade gracefully to exact equality, and a tolerance of 
1 is nothing special at all. 

Values larger than 1 aren't often useful, but there really is no reason 
to exclude tolerances larger than 1. "Give or take 300%" (ie. 
rel_tol=3.0) is a pretty big tolerance, but it is well-defined: a 
difference of 299% is "close enough", 301% is "too far".

You might argue that if you want exact equality, you shouldn't use a 
tolerance at all but just use == instead. But consider a case where 
you might be trying calculations repeatedly and decreasing the tolerance 
until it fails:

# doesn't handle termination properly
rel_tol = pick_some_number
while close_to(something, something_else, rel_tol):
    rel_tol /= 2
    
In some cases, your calculation may actually be exact, and it simplifies 
the code if close_to(a, b, rel_tol) degrades nicely to handle the exact 
case rather than you needing to switch between close_to and == by hand.

Negative error tolerances, on the other hand, do seem to be meaningless 
and should be prevented.
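Both claims are easy to check against a sketch of a tolerance test
relative to an expected value (one of the variants discussed in the
thread, not the PEP's final symmetric form):

```python
def close_to(actual, expected, rel_tol):
    # Tolerance measured relative to the expected value.
    return abs(actual - expected) <= rel_tol * abs(expected)

# rel_tol=0.0 degrades gracefully to exact equality:
print(close_to(1.0, 1.0, 0.0))            # True
print(close_to(1.0 + 2**-52, 1.0, 0.0))   # False

# rel_tol > 1 is unusual but perfectly well-defined:
print(close_to(3.9, 1.0, 3.0))   # True  -- within "give or take 300%"
print(close_to(4.1, 1.0, 3.0))   # False -- a 310% difference is too far
```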


> I've enforced this in the sample code -- it's not strictly necessary, but
> what does a negative relative tolerance mean, anyway, and a tolerance over
> 1.0 has some odd behaviour with zero and could hardly be called "close"
> anyway. Text added to make this clear.

I don't see why a tolerance over 1 should behave any more oddly with 
zero than a tolerance under 1.

And as for "close", some people would argue that rel_tol=0.5 is "hardly 
close". Sometimes you might accept anything within an order of magnitude 
the expected value: anything up to 10*x is "close enough". Or 20 times. 
Who knows? 

(E.g. "guess the number of grains of sand on this beach".) Any upper 
limit you put in is completely arbitrary, and this function shouldn't be 
in the business of telling people how much error they are allowed to 
tolerate. It's easy for people to put in their own restrictions, but 
hard for them to work around yours.


> if a and/or b are inf or nan, it follows IEEE 754 (which mostly "just
> works") with no special casing in the code.

That is very nice.



-- 
Steve

From chris.barker at noaa.gov  Fri Feb  6 01:53:19 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Thu, 5 Feb 2015 16:53:19 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <166808118.305191.1423176381768.JavaMail.yahoo@mail.yahoo.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <166808118.305191.1423176381768.JavaMail.yahoo@mail.yahoo.com>
Message-ID: <CALGmxEJHrVR7PauAL95y3bkS_CwbxOrPbW4rH1UQFkdK9XYqbw@mail.gmail.com>

On Thu, Feb 5, 2015 at 2:46 PM, Andrew Barnert <abarnert at yahoo.com> wrote:

> >Non-float types
> >---------------
> >
> >The primary use-case is expected to be floating point numbers.
> >However, users may want to compare other numeric types similarly. In
> >theory, it should work for any type that supports ``abs()``,
> >comparisons, and subtraction.  The code will be written and tested to
>
> >accommodate these types:
>
> Surely the type also has to support multiplication (or division, if you
> choose to implement it that way) as well, right?
>

Yes, multiplication -- not sure how I missed that. No division necessary.


> Also, are you sure your implementation doesn't need any explicit
> isinf/isnan checks? You mentioned something about cmath.isnan for complex
> numbers.
>

yeah, there's that. Though it seems to work fine for Fraction and Decimal
so far -- I'm taking a TDD approach -- if it works with a test, I don't
worry about it ;-)

This brings up a bigger issue, one that I'm pretty neutral on:

What to do about duck typing and the various types it may get called with?
My intent was to count on duck typing completely:

1) I'll test with the types I mentioned in the PEP (I need a few more tests
to be rigorous)

2) For any other type, it may work, it may not, and hopefully users will
get an Exception, rather than a spurious result if it doesn't.

I can't possibly test with all types, so what else is there to do?

Would datetime.timedelta be worth explicitly supporting? The concept of
two datetimes being relatively close doesn't make any sense -- relative to
what? And abs() fails for datetime already. I'll check timedelta -- it
should work, I think -- abs() does, anyway.

I'm a bit on the fence as to whether to do type dispatch for Decimal, as
well -- there are some oddities that may require it.
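One such oddity is easy to hit: Decimal compares with float, but
refuses mixed arithmetic, so a float rel_tol cannot simply be
multiplied into a Decimal magnitude (an illustration of the trap, not
the PEP's code):

```python
from decimal import Decimal

a, b = Decimal("1.0"), Decimal("1.000000001")

# Mixed Decimal/float comparison works in Python 3:
print(abs(a - b) < 1.0)   # True

# ...but mixed Decimal/float arithmetic does not:
try:
    abs(b) * 1e-9
except TypeError as exc:
    print("mixed arithmetic fails:", exc)
```

So an isclose taking a float rel_tol would need to convert the
tolerance (or dispatch on type) to support Decimal cleanly.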


> Presumably this is just copying assertAlmostEqual and replacing the
> element comparisons with a call to isclose, passing the params along?


yup -- but worth it? Maybe not.


> >How much difference does it make?
>
> >---------------------------------
>
> This section relies on transformations that are valid for reals but not
> for floats. In particular:


Does this affect the result? I also assumed a and b are positive and a > b,
just to make all the math easier (and easier to write). I'm not really
trying to do a rigorous formal proof here. And it does check out
computationally -- at least when comparing to 1.0, there is no floating
point value that passes some of the tests but not others with a rel_tol of
1e-9 (there is one value that does for 1e-8).

>  delta = tol * (a-b)
> >
> >or::
> >
> >  delta / tol = (a-b)
>

If a and b are subnormals, the multiplication could underflow to 0, but the
> division won't;


there is no division in the final computation anyway -- I only understand
enough FP to barely pass a class with Kahan, but I think it's OK to do the
derivation with real numbers, and then check the FP consequences with the
actual calculation in this case.

if delta is a very large number, the division could overflow to inf but the
> multiplication won't. (I'm assuming delta < 1.0, of course.) Your later
> substitution has the same issue. I think your ultimate conclusion is right,
> but your proof doesn't really prove it unless you take the extreme cases
> into account.
>

I'm not sure all that needs to be in the PEP anyway -- it's just that there
was a lot of discussion about the various methods, and I had hand-wavingly
said "it doesn't matter for small tolerances" -- so this is simply making
that point more formally.

>The case that uses the arithmetic mean of the two values requires that
> >the value be either added together before dividing by 2, which could
> >result in extra overflow to inf for very large numbers, or require
> >each value to be divided by two before being added together, which
>
> >could result in underflow to -inf for very small numbers.
>
> Dividing by two results in underflow to 0, not -inf.


oops, duh. thanks.


> Also, dividing by two only results in underflow to 0 for two numbers,
> +/-5e-324, and I don't think there's any case where that underflow can
> cause a problem where you wouldn't already underflow to 0 unless tol >=
> 1.0, so I'm not sure this is actually a problem to worry about.


no -- not really -- being a bit pedantic ;-)


> >Relative Tolerance Default
> >--------------------------
>
> This section depends on the fact that a Python float has about 1e-16
> precision.


indeed.


> But technically, that isn't true; it has about sys.float_info.epsilon
> precision, which is up to the implementation; CPython leaves it up to the C
> implementation's double type, and C89 leaves that up to the platform.
> That's why sys.float_info.epsilon is available in the first place--because
> you can't know it a priori.
>
> In practice, I don't know of any modern platforms that don't use IEEE
> double or something close enough, so it's always going to be 2.2e-16-ish.


that's what I was assuming -- but I guess there is always MicroPython --
does it have machine doubles? I assume the common mobile platforms do --
but what do I know?


> And I don't think you want the default tolerance to be platform-dependent.
> And I don't think you want to put qualifiers in all the half-dozen places
> you mention the 1e-9. I just think it's worth mentioning in a parenthetical
> somewhere in this paragraph.
>

Good idea.


> >default of 0.0 is selected.
>
> Does that default work properly for the other supported types?


see above for the duck-typing question -- it does work for all the types
I've tested with.


>  Maybe it's better to just specify "zero" and leave it up to the
> implementation to do the trivial thing if possible or something more
> complicated if necessary, rather than specifying that it's definitely the
> float 0.0?
>

done in the PEP.

Thanks for the typo fixes.

-Chris


-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150205/09e4182d/attachment-0001.html>

From chris.barker at noaa.gov  Fri Feb  6 01:56:58 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Thu, 5 Feb 2015 16:56:58 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CAPTjJmp8moipNvsXL=buJxZaZPAE8sAUCud1aGWerEMGZtxVog@mail.gmail.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <20150206000438.1adeb478@fsol>
 <CALGmxEJXJQTGdpjttg2LNmjJVTrbuMW0bNiAC9-MFX65LJAORQ@mail.gmail.com>
 <CAPTjJmp8moipNvsXL=buJxZaZPAE8sAUCud1aGWerEMGZtxVog@mail.gmail.com>
Message-ID: <CALGmxEKE=DBBjerxaMcOSjxfSteWnaK0f=kzOe-_+Gb4hvy50w@mail.gmail.com>

On Thu, Feb 5, 2015 at 4:40 PM, Chris Angelico <rosuav at gmail.com> wrote:

> > If we really believe that almost all people will be working with numbers
> of
> > around a magnitude of 1, then we could set a default -- by why in the
> world
> > are you using floating point numbers with a range of about 1e-300 to
> 1e300
> > in that case?
>
> Because 'float' is the one obvious way to handle non-integral numbers
> in Python. If you simply put "x = 1.1" in your code, you get a float,
>

Fair enough -- more to the point -- we HAVE floats because we want folks to
be able to use a wide range of values -- once we have that, we can't count
on users generally only using a small part of that range.

-Chris


-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150205/3ea39369/attachment.html>

From rosuav at gmail.com  Fri Feb  6 02:00:17 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Fri, 6 Feb 2015 12:00:17 +1100
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CALGmxEKE=DBBjerxaMcOSjxfSteWnaK0f=kzOe-_+Gb4hvy50w@mail.gmail.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <20150206000438.1adeb478@fsol>
 <CALGmxEJXJQTGdpjttg2LNmjJVTrbuMW0bNiAC9-MFX65LJAORQ@mail.gmail.com>
 <CAPTjJmp8moipNvsXL=buJxZaZPAE8sAUCud1aGWerEMGZtxVog@mail.gmail.com>
 <CALGmxEKE=DBBjerxaMcOSjxfSteWnaK0f=kzOe-_+Gb4hvy50w@mail.gmail.com>
Message-ID: <CAPTjJmpTrniDPy9ZU243NoSoYHPoEbnf_QEpG4HTo+XozLMO_A@mail.gmail.com>

On Fri, Feb 6, 2015 at 11:56 AM, Chris Barker <chris.barker at noaa.gov> wrote:
> On Thu, Feb 5, 2015 at 4:40 PM, Chris Angelico <rosuav at gmail.com> wrote:
>>
>> > If we really believe that almost all people will be working with numbers
>> > of
>> > around a magnitude of 1, then we could set a default -- by why in the
>> > world
>> > are you using floating point numbers with a range of about 1e-300 to
>> > 1e300
>> > in that case?
>>
>> Because 'float' is the one obvious way to handle non-integral numbers
>> in Python. If you simply put "x = 1.1" in your code, you get a float,
>
>
> Fair enough -- more to the point -- we HAVE floats because we want folks to
> be able to use a wide range of values -- once we have that, we can't count
> on users generally only using a small part of that range.

Right. So crafting default tolerances for any particular "expected
range" isn't particularly safe.

+1 for the current behaviour of not specifying absolute tolerance.

ChrisA

From njs at pobox.com  Fri Feb  6 02:07:46 2015
From: njs at pobox.com (Nathaniel Smith)
Date: Thu, 5 Feb 2015 17:07:46 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CALGmxEJXJQTGdpjttg2LNmjJVTrbuMW0bNiAC9-MFX65LJAORQ@mail.gmail.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <20150206000438.1adeb478@fsol>
 <CALGmxEJXJQTGdpjttg2LNmjJVTrbuMW0bNiAC9-MFX65LJAORQ@mail.gmail.com>
Message-ID: <CAPJVwBnRMKQTuKzUZhA49SaPbitJF1fQobhgHs487=w4NshYKQ@mail.gmail.com>

On Thu, Feb 5, 2015 at 4:24 PM, Chris Barker <chris.barker at noaa.gov> wrote:
> If we really believe that almost all people will be working with numbers of
> around a magnitude of 1, then we could set a default -- by why in the world
> are you using floating point numbers with a range of about 1e-300 to 1e300
> in that case?

It's an observable fact, though, that people generally do take care to
ensure that their numbers are within a few orders of magnitude of 1,
e.g. by choosing appropriate units, log-transforming, etc.

You could just as reasonably ask why in the world SI prefixes exist
when we have access to scientific notation. And I wouldn't know the
answer :-). But they do!

-n

-- 
Nathaniel J. Smith -- http://vorpus.org

From edk141 at gmail.com  Fri Feb  6 02:12:17 2015
From: edk141 at gmail.com (Ed Kellett)
Date: Fri, 06 Feb 2015 01:12:17 +0000
Subject: [Python-ideas] Add orderedset as set(iterable, *,
 ordered=False) and similarly for frozenset.
References: <cbe9ff04-1f93-45ae-8236-9cea7ca02473@googlegroups.com>
 <20150206001029.GZ2498@ando.pearwood.info>
Message-ID: <CABmzr0jzrVFFJj+X0ciJO5gqOc5jTJyf2a_NLd0ofzYqPs7cvQ@mail.gmail.com>

On Fri Feb 06 2015 at 12:12:19 AM Steven D'Aprano <steve at pearwood.info>
wrote:
>
> I think there are plenty of other major design decisions to be made.
> What's the union between a regular set and an ordered set? Is there a
> frozen ordered set? Do ordered sets compare unequal if they differ only
> in order (like lists)?
>

Are there many options for the union question? I think the only sane choice
for any union involving ordered sets is an unordered set - in any other
case the resulting order would be wrong or at least not obviously right,
and that's almost certainly better expressed explicitly.

Ed Kellett
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150206/9b045952/attachment.html>

From njs at pobox.com  Fri Feb  6 02:17:01 2015
From: njs at pobox.com (Nathaniel Smith)
Date: Thu, 5 Feb 2015 17:17:01 -0800
Subject: [Python-ideas] Add orderedset as set(iterable, *,
 ordered=False) and similarly for frozenset.
In-Reply-To: <CABmzr0jzrVFFJj+X0ciJO5gqOc5jTJyf2a_NLd0ofzYqPs7cvQ@mail.gmail.com>
References: <cbe9ff04-1f93-45ae-8236-9cea7ca02473@googlegroups.com>
 <20150206001029.GZ2498@ando.pearwood.info>
 <CABmzr0jzrVFFJj+X0ciJO5gqOc5jTJyf2a_NLd0ofzYqPs7cvQ@mail.gmail.com>
Message-ID: <CAPJVwBkzsFFPT2ws0tVuMuowwsVp+sQZENJbSw4KjkWHDR7-NA@mail.gmail.com>

On Thu, Feb 5, 2015 at 5:12 PM, Ed Kellett <edk141 at gmail.com> wrote:
> On Fri Feb 06 2015 at 12:12:19 AM Steven D'Aprano <steve at pearwood.info>
> wrote:
>>
>> I think there are plenty of other major design decisions to be made.
>> What's the union between a regular set and an ordered set? Is there a
>> frozen ordered set? Do ordered sets compare unequal if they differ only
>> in order (like lists)?
>
> Are there many options for the union question? I think the only sane choice
> for any union involving ordered sets is an unordered set - in any other case
> the resulting order would be wrong or at least not obviously right, and
> that's almost certainly better expressed explicitly.

Surely a.union(b) should produce the same result as a.update(b) except
out-of-place, and .update() already has well-defined semantics for
OrderedDicts.
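
For reference, the update() semantics alluded to above look like this
(a sketch using OrderedDict keys to stand in for set elements):

```python
from collections import OrderedDict

a = OrderedDict.fromkeys([1, 2, 3])
b = OrderedDict.fromkeys([3, 4, 1])
a.update(b)
# update() keeps the original position of keys that are already present
# and appends genuinely new keys in the order they arrive:
assert list(a) == [1, 2, 3, 4]
```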

-n

-- 
Nathaniel J. Smith -- http://vorpus.org

From chris.barker at noaa.gov  Fri Feb  6 02:12:32 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Thu, 5 Feb 2015 17:12:32 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <20150206004406.GB2498@ando.pearwood.info>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <CAP7+vJKGeS_WF1yuwjGZEZkQWvYPMUSos7s65KfEmX=mLL6KMg@mail.gmail.com>
 <CALGmxEKmW53QWQhrCen4RPFrd=xCQF4k8qKKLUCr04L+dxWt9g@mail.gmail.com>
 <20150206004406.GB2498@ando.pearwood.info>
Message-ID: <CALGmxEKhbiZcYgL5bTxJJ_e7=HwbSbZVPXzGfDXtDtn7tQFT-w@mail.gmail.com>

On Thu, Feb 5, 2015 at 4:44 PM, Steven D'Aprano <steve at pearwood.info> wrote:

> > 0.0 < rel_tol < 1.0)
>
> I can just about see the point in restricting rel_tol to the closed
> interval 0...1, but not the open interval. Mathematically, setting the
> tolerance to 0
> should just degrade gracefully to exact equality,


sure -- no harm done there.

> and a tolerance of
> 1 is nothing special at all.
>

well, I ended up putting that in because it turns out that with the "weak"
test, anything compares as "close" to zero:

tol>=1.0
a = anything
b = 0.0

min( abs(a), abs(b) ) = 0.0

abs(a-b) = a

tol * a >= a

abs(a-b) <= tol * a

Granted, that's actually the best argument yet for using the strong test --
which I am suggesting, though I haven't thought out what that will do in
the case of large tolerances.
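
The zero case can be seen directly; a minimal sketch (my own max/min
formulations of the weak and strong tests, not the PEP's reference code):

```python
def weak(a, b, tol):    # relative to the larger magnitude
    return abs(a - b) <= tol * max(abs(a), abs(b))

def strong(a, b, tol):  # relative to the smaller magnitude
    return abs(a - b) <= tol * min(abs(a), abs(b))

# With tol >= 1.0, the weak test calls *anything* close to zero,
# because abs(a - 0) <= tol * abs(a) whenever tol >= 1:
assert weak(1e300, 0.0, 1.0)
assert weak(1e-300, 0.0, 1.0)

# The strong test does not: min(abs(a), 0.0) is 0.0, so only an
# exact zero passes.
assert not strong(1e300, 0.0, 1.0)
assert strong(0.0, 0.0, 1.0)
```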


> Values larger than 1 aren't often useful, but there really is no reason
> to exclude tolerances larger than 1. "Give or take 300%" (ie.
> rel_tol=3.0) is a pretty big tolerance, but it is well-defined: a
> difference of 299% is "close enough", 301% is "too far".
>

yes it is, but then the whole weak vs strong vs asymmetric test becomes
important. From my math, the "delta" between the weak and strong tests goes
with tolerance**2 * max(a,b). So if the tolerance is >=1, then it makes a
big difference which test you choose. In fact:

"Is a within 300% of b" makes sense, but "are a and b within 300% of
each other" is poorly defined.

The goal of this is to provide an easy way for users to test if two values
are close -- not to provide a general-purpose relative difference
function -- and keeping the scope small keeps things cleaner.

You might argue that if you want exact equality, you shouldn't use a
> tolerance at all but just use == instead. But consider a case where
> you might be trying calculations repeatedly and decreasing the tolerance
> until it fails:
>

sure -- then might as well include 0.0


> Negative error tolerances, on the other hand, do seem to be meaningless
> and should be prevented.


you could just take the abs(rel_tol), but really? What's the point?


> I don't see why a tolerance over 1 should behave any more oddly with
> zero than a tolerance under 1.
>

see above.


> And as for "close", some people would argue that rel_tol=0.5 is "hardly
> close". Sometimes you might accept anything within an order of magnitude
> the expected value: anything up to 10*x is "close enough". Or 20 times.
> Who knows?
>

with some more thought, we may be able to let rel_tol be anything > 0.0
with the strong test. I'll have to investigate more if folks think this is
important.



> (E.g. "guess the number of grains of sand on this beach".) Any upper
> limit you put in is completely arbitrary,


somehow one doesn't feel arbitrary to me -- numbers aren't close if the
difference between them is larger than the largest of the numbers -- not
arbitrary; maybe unnecessary, but not arbitrary.




> and this function shouldn't be
> in the business of telling people how much error they are allowed to
> tolerate. It's easy for people to put in their own restrictions, but
> hard for them to work around yours.
>
>
> > if a and/or b are inf or nan, it follows IEE754 (which mostly "just
> works"
> > with no special casing in the code.
>
> That is very nice.
>
>
>
> --
> Steve
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>



-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150205/b64301d8/attachment-0001.html>

From Nikolaus at rath.org  Fri Feb  6 02:42:47 2015
From: Nikolaus at rath.org (Nikolaus Rath)
Date: Thu, 05 Feb 2015 17:42:47 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CALGmxEJXJQTGdpjttg2LNmjJVTrbuMW0bNiAC9-MFX65LJAORQ@mail.gmail.com>
 (Chris Barker's message of "Thu, 5 Feb 2015 16:24:21 -0800")
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <20150206000438.1adeb478@fsol>
 <CALGmxEJXJQTGdpjttg2LNmjJVTrbuMW0bNiAC9-MFX65LJAORQ@mail.gmail.com>
Message-ID: <8761bfizmw.fsf@vostro.rath.org>

Chris Barker <chris.barker-32lpuo7BZBA at public.gmane.org> writes:
> On Thu, Feb 5, 2015 at 3:04 PM, Antoine Pitrou <solipsis-xNDA5Wrcr86sTnJN9+BGXg at public.gmane.org> wrote:
>> I think it is a nuisance that this PEP doesn't work at zero by
>> default.
>>
>
> I think this is the biggest point of contention -- it's a hard
> problem.

I think the PEP does a very good job of explaining why a default
absolute tolerance does not make sense, and why 1e-8 is a reasonable
default for relative tolerance.

Until someone makes an at least equally good case for a non-zero default
absolute tolerance (and "too bad it doesn't work when testing against
zero" isn't one), there really isn't much to discuss (IMHO).

Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

             "Time flies like an arrow, fruit flies like a Banana."

From ron3200 at gmail.com  Fri Feb  6 04:53:49 2015
From: ron3200 at gmail.com (Ron Adam)
Date: Thu, 05 Feb 2015 21:53:49 -0600
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <20150206004406.GB2498@ando.pearwood.info>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <CAP7+vJKGeS_WF1yuwjGZEZkQWvYPMUSos7s65KfEmX=mLL6KMg@mail.gmail.com>
 <CALGmxEKmW53QWQhrCen4RPFrd=xCQF4k8qKKLUCr04L+dxWt9g@mail.gmail.com>
 <20150206004406.GB2498@ando.pearwood.info>
Message-ID: <mb1dse$he2$1@ger.gmane.org>



On 02/05/2015 06:44 PM, Steven D'Aprano wrote:
> On Thu, Feb 05, 2015 at 11:39:43AM -0800, Chris Barker wrote:
>
>>> > >- What if a tolerance is inf or nan? What if all args are inf or nan?
>> >
>> >0.0 < rel_tol < 1.0)
> I can just about see the point in restricting rel_tol to the closed
> interval 0...1, but not the open interval. Mathematically, setting the tolerance to 0
> should just degrade gracefully to exact equality, and a tolerance of
> 1 is nothing special at all.

I agree.

> Values larger than 1 aren't often useful, but there really is no reason
> to exclude tolerances larger than 1. "Give or take 300%" (ie.
> rel_tol=3.0) is a pretty big tolerance, but it is well-defined: a
> difference of 299% is "close enough", 301% is "too far".

I agree here too.

The question I have is: is it better to get an exception, and possibly need 
to wrap the isclose() in a try-except, or better to let it go above 1?

If this is used in many places, I can see wrapping it in try-excepts as a 
bigger problem.  The same goes for when both tolerances are zero.


> You might argue that if you want exact equality, you shouldn't use a
> tolerance at all but just use == instead. But consider a case where
> you might be trying calculations repeatedly and decreasing the tolerance
> until it fails:
>
> # doesn't handle termination properly
> rel_tol = pick_some_number
> while close_to(something, something_else, rel_tol):
>      rel_tol /= 2
>
> In some cases, your calculation may actually be exact, and it simplies
> the code if close_to(a, b, rel_tol) degrades nicely to handle the exact
> case rather than you needing to switch between close_to and == by hand.

Yes.. +1

> Negative error tolerances, on the other hand, do seem to be meaningless
> and should be prevented.

No, they are well behaved.  It's just a measurement from the other end of 
the board. ;-)

       t = .1   (10 - 10 * .1,  10 + 10 * .1)
                (9, 11)

       t = -.1   (10 - 10 * -.1,  10 + 10 * -.1)
                 (11, 9)

So you get the same range either way.  It works out in the isclose 
calculations just fine; a negative tolerance would give the same result as 
the positive of the same value.
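
A quick sketch of that symmetry (hypothetical helper, just to illustrate
the point that the two signs describe the same interval):

```python
def band(x, t):
    """Interval of values within relative tolerance t of x (for x > 0)."""
    lo, hi = x - x * t, x + x * t
    return min(lo, hi), max(lo, hi)

# t = .1 and t = -.1 give the same range, just with the endpoints swapped:
assert band(10, 0.1) == (9.0, 11.0)
assert band(10, -0.1) == (9.0, 11.0)
```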

Again, is this a place where we would want an exception, and would we need 
to write it like isclose(a, b, rel_tolerance=abs(t)) any place where the 
tolerance value might go negative, or would we like it to be more forgiving 
and easier to use?


>> >I've enforced this in the sample code -- it's not strictly necessary, but
>> >what does a negative relative tolerance mean, anyway, and a tolerance over
>> >1.0 has some odd behaviour with zero and could hardly be called "close"
>> >anyway. Text added to make this clear.
> I don't see why a tolerance over 1 should behave any more oddly with
> zero than a tolerance under 1.
>
> And as for "close", some people would argue that rel_tol=0.5 is "hardly
> close". Sometimes you might accept anything within an order of magnitude
> the expected value: anything up to 10*x is "close enough". Or 20 times.
> Who knows?
>
> (E.g. "guess the number of grains of sand on this beach".) Any upper
> limit you put in is completely arbitrary, and this function shouldn't be
> in the business of telling people how much error they are allowed to
> tolerate. It's easy for people to put in their own restrictions, but
> hard for them to work around yours.

I agree with this idea in principle.  I think whether we want to handle 
exceptions or not is more of an issue though.

If it does allow higher tolerances, then it does need to handle them 
correctly.  I don't see that as a difficult problem though.

I prefer the weak version myself.  If you graph the True results for 
rel_tolerance=1, you get a graph where all like-signed numbers are close. 
A tolerance of .5 gives a graph of the middle fifty percent of like-signed 
numbers.  And you can think of it as a percentage of the larger value, 
which tends to be easier than thinking about percent increase.

Also, this page recommends using this method:

     http://c-faq.com/fp/fpequal.html



So for the weak method and increasing values of tolerance:

A graph of all True values fills in from the like-signed diagonal and 
converges on the unlike-signed diagonal when rel_tolerance=2.  Values above 
that don't change anything; they are all still True.  So rel_tolerance=1 
gives True for all like-signed numbers, and the unlike-signed numbers are 
the graph of rel_tolerance=2 minus the graph of rel_tolerance=1.


For the strong method, and increasing values of tolerance:

The graphs fill from the like-signed diagonal axis first until t=2 (the 
like-signed quadrants are not filled yet), then they also start filling 
from the unlike-signed axis.  So t=3 shows an X pattern, with a wider area 
in the like-signed quadrants.  I'm not sure what upper value of t is needed 
so that all values are True; it may be inf.

Keep in mind these graphs are the sum of comparing all the different values 
given a particular value of t.

So if there isn't a reason to limit the range of the tolerance, it will 
work just fine if we don't.

It's much harder to pick a strong tolerance that gives some sort of 
specific behaviour. For that reason I prefer the weak version much more 
than the strong version.
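
The fill-in behaviour of the weak test described above can be checked
numerically (my formulation of the weak test, assuming the max-based form
discussed in this thread):

```python
def weak(a, b, t):
    return abs(a - b) <= t * max(abs(a), abs(b))

vals = [0.001, 0.1, 1.0, 10.0, 1000.0]

# t = 1: every pair of like-signed numbers is "close" ...
assert all(weak(a, b, 1.0) for a in vals for b in vals)
# ... but no pair of unlike-signed numbers is:
assert not any(weak(a, -b, 1.0) for a in vals for b in vals)
# t = 2: everything is close to everything:
assert all(weak(a, -b, 2.0) for a in vals for b in vals)
```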

Cheers,
    Ron






From steve at pearwood.info  Fri Feb  6 05:48:19 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 6 Feb 2015 15:48:19 +1100
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CALGmxEKhbiZcYgL5bTxJJ_e7=HwbSbZVPXzGfDXtDtn7tQFT-w@mail.gmail.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <CAP7+vJKGeS_WF1yuwjGZEZkQWvYPMUSos7s65KfEmX=mLL6KMg@mail.gmail.com>
 <CALGmxEKmW53QWQhrCen4RPFrd=xCQF4k8qKKLUCr04L+dxWt9g@mail.gmail.com>
 <20150206004406.GB2498@ando.pearwood.info>
 <CALGmxEKhbiZcYgL5bTxJJ_e7=HwbSbZVPXzGfDXtDtn7tQFT-w@mail.gmail.com>
Message-ID: <20150206044819.GC2498@ando.pearwood.info>

On Thu, Feb 05, 2015 at 05:12:32PM -0800, Chris Barker wrote:
> On Thu, Feb 5, 2015 at 4:44 PM, Steven D'Aprano <steve at pearwood.info> wrote:
> 
> > > 0.0 < rel_tol < 1.0)
> >
> > I can just about see the point in restricting rel_tol to the closed
> > interval 0...1, but not the open interval. Mathematically, setting the
> > tolerance to 0 should just degrade gracefully to exact equality,
> 
> sure -- no harm done there.
> 
> > and a tolerance of 1 is nothing special at all.
> 
> well, I ended up putting that in because it turns out with the "weak" test,
> then anything compares as "close" to zero:

Okay. Maybe that means that the "weak test" is not useful if one of the 
numbers is zero. Neither is a relative tolerance, or an ULP calculation.


> tol>=1.0
> a = anything
> b = 0.0
> min( abs(a), abs(b) ) = 0.0
> abs(a-b) = a

That is incorrect. abs(a-b) for b == 0 is abs(a), not a.

> tol * a >= a
> abs(a-b) <= tol * a

If and only if a >= 0. Makes perfect mathematical sense, even if it's 
not useful. That's an argument for doing what Bruce Dawson says, and 
comparing against the maximum of a and b, not the minimum.


> Granted, that's actually the best argument yet for using the strong test --
> which I am suggesting, though I haven't thought out what that will do in
> the case of large tolerances.

It should work exactly the same as for small tolerances, except larger 
*wink*


> > Values larger than 1 aren't often useful, but there really is no reason
> > to exclude tolerances larger than 1. "Give or take 300%" (ie.
> > rel_tol=3.0) is a pretty big tolerance, but it is well-defined: a
> > difference of 299% is "close enough", 301% is "too far".
> >
> 
> yes it is, but then the whole weak vs string vs asymmetric test becomes
> important. 

Um, yes? I know Guido keeps saying that the difference is unimportant, 
but I think he is wrong: at the edges, the way you determine "close to" 
makes a difference whether a and b are considered close or not. If you 
care enough to specify a specific tolerance (say, 2.3e-4), as opposed to 
plucking a round number out of thin air, then you care about the edge 
cases. I'm not entirely sure what to do about it, but my sense is that 
we should do something.


> From my math the "delta" between the weak and strong tests goes
> with  tolerance**2 * max(a,b).  So if the tolerance is >=1, then it makes a
> big difference which test you choose. IN fact:
> 
> Is a within 300% of b makes sense, but "are a and b within 300% of
> each-other" is poorly defined.

No more so than "a and b within 1% of each other". It's just a 
short-hand. What I mean by "of each other" is the method recommended by 
Bruce Dawson, use the larger of a and b, what Boost(?) and you are 
calling the "strong test".


[...]
> > Negative error tolerances, on the other hand, do seem to be meaningless
> > and should be prevented.
> 
> 
> you could just take the abs(rel_tol), but really?  what's the point?

No no, I agree with you that tolerances (relative or absolute) should 
prohibit negative values. Or complex ones for that matter.


> > (E.g. "guess the number of grains of sand on this beach".) Any upper
> > limit you put in is completely arbitrary,
> 
> 
> somehow one doesn't feel arbitrary to me -- numbers aren't close if the
> difference between them is larger than the largest of the numbers -- not
> arbitrary, maybe unneccesary , but not arbirtrary

Consider one way of detecting outliers in numeric data: any number more 
than X standard deviations from the mean in either direction may be an 
outlier.

py> import statistics
py> data = [1, 2, 100, 100, 100, 101, 102, 103, 104, 500, 100000]
py> m = statistics.mean(data)
py> tol = 3*statistics.stdev(data)
py> [x for x in data if abs(x-m) > tol]
[100000]
py> m, tol, tol/m
(9201.181818181818, 90344.55455462009, 9.818798969508077)

tol/m is, of course, the error tolerance relative to m, which for the 
sake of the argument we are treating as the "known value": anything more 
than 9.818... times the mean is probably an outlier.

Now, the above uses an absolute tolerance, but I should be able to get 
the same results from a relative tolerance of 9.818... depending on 
which is more convenient to work with at the time.



-- 
Steve

From guido at python.org  Fri Feb  6 06:00:24 2015
From: guido at python.org (Guido van Rossum)
Date: Thu, 5 Feb 2015 21:00:24 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <20150206044819.GC2498@ando.pearwood.info>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <CAP7+vJKGeS_WF1yuwjGZEZkQWvYPMUSos7s65KfEmX=mLL6KMg@mail.gmail.com>
 <CALGmxEKmW53QWQhrCen4RPFrd=xCQF4k8qKKLUCr04L+dxWt9g@mail.gmail.com>
 <20150206004406.GB2498@ando.pearwood.info>
 <CALGmxEKhbiZcYgL5bTxJJ_e7=HwbSbZVPXzGfDXtDtn7tQFT-w@mail.gmail.com>
 <20150206044819.GC2498@ando.pearwood.info>
Message-ID: <CAP7+vJJbf4XjJ64X7NStZJWny0+4FxJJY=xBSue=L3KR3kJXDw@mail.gmail.com>

Steven, I can't take it any longer. It is just absolutely ridiculous how
much discussion we've already seen about a function that's a single line of
code. I'll give you three choices. You can vote +1, 0 or -1. No more
discussion. If you still keep picking on details I'll just kill the PEP to
be done with the discussion.

On Thu, Feb 5, 2015 at 8:48 PM, Steven D'Aprano <steve at pearwood.info> wrote:

> On Thu, Feb 05, 2015 at 05:12:32PM -0800, Chris Barker wrote:
> > On Thu, Feb 5, 2015 at 4:44 PM, Steven D'Aprano <steve at pearwood.info>
> wrote:
> >
> > > > 0.0 < rel_tol < 1.0)
> > >
> > > I can just about see the point in restricting rel_tol to the closed
> > > interval 0...1, but not the open interval. Mathematically, setting the
> > > tolerance to 0 should just degrade gracefully to exact equality,
> >
> > sure -- no harm done there.
> >
> > > and a tolerance of 1 is nothing special at all.
> >
> > well, I ended up putting that in because it turns out with the "weak"
> test,
> > then anything compares as "close" to zero:
>
> Okay. Maybe that means that the "weak test" is not useful if one of the
> numbers is zero. Neither is a relative tolerance, or an ULP calculation.
>
>
> > tol>=1.0
> > a = anything
> > b = 0.0
> > min( abs(a), abs(b) ) = 0.0
> > abs(a-b) = a
>
> That is incorrect. abs(a-b) for b == 0 is abs(a), not a.
>
> > tol * a >= a
> > abs(a-b) <= tol * a
>
> If and only if a >= 0. Makes perfect mathematical sense, even if it's
> not useful. That's an argument for doing what Bruce Dawson says, and
> comparing against the maximum of a and b, not the minimum.
>
>
> > Granted, that's actually the best argument yet for using the strong test
> --
> > which I am suggesting, though I haven't thought out what that will do in
> > the case of large tolerances.
>
> It should work exactly the same as for small tolerances, except larger
> *wink*
>
>
> > > Values larger than 1 aren't often useful, but there really is no reason
> > > to exclude tolerances larger than 1. "Give or take 300%" (ie.
> > > rel_tol=3.0) is a pretty big tolerance, but it is well-defined: a
> > > difference of 299% is "close enough", 301% is "too far".
> > >
> >
> > yes it is, but then the whole weak vs string vs asymmetric test becomes
> > important.
>
> Um, yes? I know Guido keeps saying that the difference is unimportant,
> but I think he is wrong: at the edges, the way you determine "close to"
> makes a difference whether a and b are considered close or not. If you
> care enough to specify a specific tolerance (say, 2.3e-4), as opposed to
> plucking a round number out of thin air, then you care about the edge
> cases. I'm not entirely sure what to do about it, but my sense is that
> we should do something.
>
>
> > From my math the "delta" between the weak and strong tests goes
> > with  tolerance**2 * max(a,b).  So if the tolerance is >=1, then it
> makes a
> > big difference which test you choose. IN fact:
> >
> > Is a within 300% of b makes sense, but "are a and b within 300% of
> > each-other" is poorly defined.
>
> No more so that "a and b within 1% of each other". It's just a
> short-hand. What I mean by "of each other" is the method recommended by
> Bruce Dawson, use the larger of a and b, what Boost(?) and you are
> calling the "strong test".
>
>
> [...]
> > > Negative error tolerances, on the other hand, do seem to be meaningless
> > > and should be prevented.
> >
> >
> > you could just take the abs(rel_tol), but really?  what's the point?
>
> No no, I agree with you that tolerances (relative or absolute) should
> prohibit negative values. Or complex ones for that matter.
>
>
> > > (E.g. "guess the number of grains of sand on this beach".) Any upper
> > > limit you put in is completely arbitrary,
> >
> >
> > somehow one doesn't feel arbitrary to me -- numbers aren't close if the
> > difference between them is larger than the largest of the numbers -- not
> > arbitrary, maybe unneccesary , but not arbirtrary
>
> Consider one way of detecting outliers in numeric data: any number more
> than X standard deviations from the mean in either direction may be an
> outlier.
>
> py> import statistics
> py> data = [1, 2, 100, 100, 100, 101, 102, 103, 104, 500, 100000]
> py> m = statistics.mean(data)
> py> tol = 3*statistics.stdev(data)
> py> [x for x in data if abs(x-m) > tol]
> [100000]
> py> m, tol, tol/m
> (9201.181818181818, 90344.55455462009, 9.818798969508077)
>
> tol/m is, of course, the error tolerance relative to m, which for the
> sake of the argument we are treating as the "known value": anything more
> than 9.818... times the mean is probably an outlier.
>
> Now, the above uses an absolute tolerance, but I should be able to get
> the same results from a relative tolerance of 9.818... depending on
> which is more convenient to work with at the time.
>
>
>
> --
> Steve
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>



-- 
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150205/941408fe/attachment-0001.html>

From schesis at gmail.com  Fri Feb  6 06:07:36 2015
From: schesis at gmail.com (Zero Piraeus)
Date: Fri, 6 Feb 2015 02:07:36 -0300
Subject: [Python-ideas] PEP 485: A Function for testing approximate
 equality
In-Reply-To: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
Message-ID: <20150206050736.GA21679@piedra>


On Wed, Feb 04, 2015 at 04:48:41PM -0800, Chris Barker wrote:
> 
> Please take a look and express your approval or disapproval (+1, +0,
> -0, -1, or whatever).

+1

 -[]z.

-- 
Zero Piraeus: inter caetera
http://etiol.net/pubkey.asc

From steve at pearwood.info  Fri Feb  6 06:10:41 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 6 Feb 2015 16:10:41 +1100
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <mb1dse$he2$1@ger.gmane.org>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <CAP7+vJKGeS_WF1yuwjGZEZkQWvYPMUSos7s65KfEmX=mLL6KMg@mail.gmail.com>
 <CALGmxEKmW53QWQhrCen4RPFrd=xCQF4k8qKKLUCr04L+dxWt9g@mail.gmail.com>
 <20150206004406.GB2498@ando.pearwood.info> <mb1dse$he2$1@ger.gmane.org>
Message-ID: <20150206051041.GD2498@ando.pearwood.info>

On Thu, Feb 05, 2015 at 09:53:49PM -0600, Ron Adam wrote:

> I prefer the weak version myself.  Because if you graph the result of True 
> values for rel_tolerance=1.  You get a graph where all like signed numbers 
> are close.   A tolerance of .5 gives a graph of the fifty percent of middle 
> like signed numbers.  And you can think of it as a percentage of the larger 
> value.  Which tends to be easier than thinking about percent increase.

I'm afraid I don't understand this description.


> Also this recommends using this method.
> 
>     http://c-faq.com/fp/fpequal.html

You stopped reading too soon:

[quote]
Doug Gwyn suggests using the following ``relative difference'' function. 
It returns the relative difference of two real numbers: 0.0 if they are 
exactly the same, otherwise the ratio of the difference to the larger of 
the two.
[end quote]
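In Python, that suggested function is a one-liner (the transcription and the name are mine, not from the FAQ):

```python
def relative_difference(a, b):
    # 0.0 when the values are exactly equal; otherwise the difference
    # divided by the larger of the two magnitudes.
    if a == b:
        return 0.0
    return abs(a - b) / max(abs(a), abs(b))

assert relative_difference(2.5, 2.5) == 0.0
assert relative_difference(99.0, 100.0) == 0.01
```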

Anyone got a copy of Knuth handy to check the reference given above?

Knuth Sec. 4.2.2 pp. 217-8

I'd like to know what he has to say.



-- 
Steve

From abarnert at yahoo.com  Fri Feb  6 07:12:23 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Thu, 5 Feb 2015 22:12:23 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CALGmxEJHrVR7PauAL95y3bkS_CwbxOrPbW4rH1UQFkdK9XYqbw@mail.gmail.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <166808118.305191.1423176381768.JavaMail.yahoo@mail.yahoo.com>
 <CALGmxEJHrVR7PauAL95y3bkS_CwbxOrPbW4rH1UQFkdK9XYqbw@mail.gmail.com>
Message-ID: <50BCB9CE-AD4F-49AD-921E-3D51871A3C8B@yahoo.com>

On Feb 5, 2015, at 16:53, Chris Barker <chris.barker at noaa.gov> wrote:

> On Thu, Feb 5, 2015 at 2:46 PM, Andrew Barnert <abarnert at yahoo.com> wrote:
>> >Non-float types
>> >---------------
>> >
>> >The primary use-case is expected to be floating point numbers.
>> >However, users may want to compare other numeric types similarly. In
>> >theory, it should work for any type that supports ``abs()``,
>> >comparisons, and subtraction.  The code will be written and tested to
>> 
>> >accommodate these types:
>> 
>> Surely the type also has to support multiplication (or division, if you choose to implement it that way) as well, right?
> 
> yes, multiplication, not sure how I  missed that. No division necessary.
>  
>> Also, are you sure your implementation doesn't need any explicit isinf/isnan checks? You mentioned something about cmath.isnan for complex numbers.
> 
> yeah, there's that. Though it seems to work fine for Fraction and Decimal so far -- I'm taking a TDD approach -- if it works with a test, I don't worry about it ;-)

Well, it would be nice if the docs could list the requirements on the type. It should be pretty simple to get them just by instrumenting a numeric type and seeing what gets called, if it isn't obvious enough from the code.

It would be even nicer if it happened to be a subset of the methods of numbers.Complex, so you could just say any Complex or Complex-like type (IIRC, Decimal doesn't subclass that because it doesn't quite qualify?) and most people could stop reading there. (Maybe you could even use that as the static type in the annotations/stub?) But if that isn't possible, that's fine.

> This brings up a bigger issue, that I'm pretty neutral on:
> 
> What to do about duck-typing and various types it may get called with? My intent was to count on duck typing completely:

I think that makes sense--if it's possible, do that; if you have to add any special casing for any important type, I guess you have to, but hopefully you don't.

Calling math.isnan(abs(x)) instead of math.isnan(x) should work with any Complex, and with Decimal. 

Or you can even try isnan and on TypeError assume it's not nan, and then it'll work on "number-ish" types too.
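Something like this sketch, say (the helper name is made up):

```python
import math
from fractions import Fraction
from datetime import timedelta

def isnan_duck(x):
    # Try math.isnan() on abs(x): abs() maps a complex value to a float
    # magnitude that isnan() accepts.  A TypeError (e.g. from a
    # timedelta, whose abs() works but which has no __float__) is taken
    # to mean "not a NaN", so number-ish types pass through.
    try:
        return math.isnan(abs(x))
    except TypeError:
        return False

assert isnan_duck(float("nan"))
assert isnan_duck(complex("nan"))
assert not isnan_duck(Fraction(1, 3))
assert not isnan_duck(timedelta(days=1))
```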

The zero-construction issue might also show up with some number-ish types, but I'd guess that most types that have __float__ also have a constructor from float, so that probably won't hurt much.

The other requirements (that __sub__ and __mul__ are sensible, __abs__ returns a non-complex-ish version of the same type, etc.) all seem like they shouldn't be a problem at all for anything at all number-ish.

> I can't possible test with all types, so what else to do?

You can build a custom stupid numeric type that only implements exactly what you claim to require. That's good evidence your duck typing and your documentation match.

Also, it might be worth testing what happens with, say, a numpy array. That seems to be a good test of "it either works or it raises, for any reasonable type". (Testing a numpy matrix might be a further useful test, except I'm not sure matrix counts as a reasonable type.)
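A minimal instrumented type along those lines might look like this. Everything here is illustrative -- the class, the sketch function, and the assumed list of required operations are mine, not the PEP's:

```python
class MinimalNumber:
    # Deliberately implements only the operations the duck-typed
    # comparison below needs: abs(), subtraction, multiplication
    # by a float tolerance, and <=.
    def __init__(self, value):
        self.value = value

    def __abs__(self):
        return MinimalNumber(abs(self.value))

    def __sub__(self, other):
        return MinimalNumber(self.value - other.value)

    def __mul__(self, tol):
        return MinimalNumber(self.value * tol)

    def __le__(self, other):
        return self.value <= other.value

def isclose_sketch(a, b, rel_tol=1e-9):
    # Rough relative-closeness test written against that minimal protocol.
    diff = abs(a - b)
    return diff <= abs(a) * rel_tol or diff <= abs(b) * rel_tol

assert isclose_sketch(MinimalNumber(1.0), MinimalNumber(1.0 + 1e-12))
assert not isclose_sketch(MinimalNumber(1.1), MinimalNumber(1.0))
```

If the sketch runs against this type, the documented requirements really do cover everything the implementation touches.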

> Would datetime.timedelta be worth explicitly supporting?

I don't think it's important enough. If you _implicitly_ support it through duck typing, that's cool (and a good sign that it'll work on many reasonable third-party types), but I wouldn't put extra work into it, especially not special casing, especially if you don't need any special casing for the important types.

> the concept of a two datetimes being relatively close doesn't make any sense -- relative to what????

Well, _absolute_ tolerance for datetimes makes sense (I've even used it in real code...), but since isclose is primarily a relative tolerance function, yeah, definitely don't try to make that work.

> And abs() fails for datetime already. I'll check timedelta -- it should work, I  think -- abs() does anyway.

But I'm pretty sure math.isnan(abs(x)) doesn't. That's exactly what the try/except could solve, if you want to solve it.

>> >How much difference does it make?
>> 
>> >---------------------------------
>> 
>> This section relies on transformations that are valid for reals but not for floats. In particular:
> 
> does this affect the result? I also assumed a and b are positive and a>b, just to make all the math easier (and easier to write). I'm not really trying to do a proper rigorous formal proof here. And it does check out computationally -- at least when comparing to 1.0,

Well, that's the whole point--when your values are all around 1.0, float issues are a whole lot simpler. Does it work for extremal and near-extremal values and values about 1e9 away from the extremes? If you're trying to make the point (semi-)formally by deriving based on reals and then testing for rounding, underflow, and overflow, that's what you need to test on.

On the other hand, I think (as you hint later) that it might be better just to not try to make the point formally in the PEP.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150205/0296f679/attachment.html>

From marco.buttu at gmail.com  Fri Feb  6 07:21:42 2015
From: marco.buttu at gmail.com (Marco Buttu)
Date: Fri, 06 Feb 2015 07:21:42 +0100
Subject: [Python-ideas] [PEP8] Predicate consistency
In-Reply-To: <20150205191223.45837f66@anarchist.wooz.org>
References: <54D3F082.8080803@oa-cagliari.inaf.it>
 <20150205191223.45837f66@anarchist.wooz.org>
Message-ID: <54D45D76.3070106@oa-cagliari.inaf.it>

On 06/02/2015 01:12, Barry Warsaw wrote:
> On Feb 05, 2015, at 11:36 PM, Marco Buttu wrote:
>
>> >Hi all, the PEP 8 says that function and method names should be lowercase,
>> >with words separated by underscores as necessary to improve
>> >readability. Maybe it could be a good idea to specify in the PEP8 that the
>> >predicates should not be separated by underscores, because it is pretty
>> >annoying to see this kind of inconsistency:
>> >
>> >     str.isdigit()
>> >     float.is_integer()
> I don't think PEP 8 should change.  Even words in predicates should be
> separated by an underscore.

Are you proposing to change our mind? Because currently, both in the 
core and in the standard library, predicates are mainly *not* separated 
by underscores:

isinstance()
issubclass()
hasattr()
str.is*
set.is*
bytearray.is*
math.is*
os.is*
...

And recently: asyncio.iscoroutinefunction(). Your proposal seems to me 
really inconsistent with the whole codebase, and I think the reason we 
sometimes get it wrong (as in the case of float.is_integer()) is a gap 
in PEP 8 regarding predicates. PEP 8 suggests that we should use 
"words separated by underscores as necessary to improve readability", 
and that's why sometimes we write is_foo() instead of isfoo(). I believe 
your suggestion is evidence of that gap.

So, I am proposing to be more explicit, maybe by writing something like this:

"""
Function names should be lowercase and, except for predicates, with 
words separated by underscores as necessary to improve readability.
Yes:

# Do not use underscores in predicates
isinstance()
hasattr()
# Use underscores to improve readability
dont_write_bytecode()

No:

is_instance()
has_attr()
dontwritebytecode()
"""

We have to change just this part of the PEP8, because the method section 
says to use the function naming rules.

> But it's not worth it to go back and change old APIs that have been in existence forever.

I am not so crazy ;) I am just proposing a clear rule for predicates in 
order to avoid, _in the future_, inconsistency with the current general 
rule.

-- 
Marco Buttu

INAF-Osservatorio Astronomico di Cagliari
Via della Scienza n. 5, 09047 Selargius (CA)
Phone: 070 711 80 217
Email: mbuttu at oa-cagliari.inaf.it


From ron3200 at gmail.com  Fri Feb  6 07:36:13 2015
From: ron3200 at gmail.com (Ron Adam)
Date: Fri, 06 Feb 2015 00:36:13 -0600
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <50BCB9CE-AD4F-49AD-921E-3D51871A3C8B@yahoo.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <166808118.305191.1423176381768.JavaMail.yahoo@mail.yahoo.com>
 <CALGmxEJHrVR7PauAL95y3bkS_CwbxOrPbW4rH1UQFkdK9XYqbw@mail.gmail.com>
 <50BCB9CE-AD4F-49AD-921E-3D51871A3C8B@yahoo.com>
Message-ID: <mb1nct$rqp$1@ger.gmane.org>



On 02/06/2015 12:12 AM, Andrew Barnert wrote:
> On the other hand, I think (as you hint later) that it might be better just
> to not try to make the point formally in the PEP.

And it's a whole lot simpler to write...

        if isclose(x, y):
             ...

instead of...

        if y - y * 1e-8 <= x <= y + y * 1e-8:
             ...


I think the isclose() line is much easier to read!

And the function will probably get it right, where I may get one of the 
signs reversed, or make some other silly error in the expression.

So +1 from me.

Cheers,
    Ron


From abarnert at yahoo.com  Fri Feb  6 07:45:15 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Thu, 5 Feb 2015 22:45:15 -0800
Subject: [Python-ideas] [PEP8] Predicate consistency
In-Reply-To: <54D45D76.3070106@oa-cagliari.inaf.it>
References: <54D3F082.8080803@oa-cagliari.inaf.it>
 <20150205191223.45837f66@anarchist.wooz.org>
 <54D45D76.3070106@oa-cagliari.inaf.it>
Message-ID: <55FBD8C3-4343-401A-AB7C-160B216C8594@yahoo.com>

On Feb 5, 2015, at 22:21, Marco Buttu <marco.buttu at gmail.com> wrote:

> On 06/02/2015 01:12, Barry Warsaw wrote:
>> On Feb 05, 2015, at 11:36 PM, Marco Buttu wrote:
>> 
>>> >Hi all, the PEP 8 says that function and method names should be lowercase,
>>> >with words separated by underscores as necessary to improve
>>> >readability. Maybe it could be a good idea to specify in the PEP8 that the
>>> >predicates should not be separated by underscores, because it is pretty
>>> >annoying to see this kind of inconsistency:
>>> >
>>> >     str.isdigit()
>>> >     float.is_integer()
>> I don't think PEP 8 should change.  Even words in predicates should be
>> separated by an underscore.
> 
> Are you proposing to change our mind? Because currently we have both in the core and in the standard library mainly predicates *not* separated by underscores:
> 
> isinstance()
> issubclass()
> hasattr()
> str.is*
> set.is*
> bytearray.is*
> math.is*
> os.is*
> ...
> 
> And recently: asyncio.iscoroutinefunction(). Your proposal seems to me really inconsistent with the whole codebase, 

The PEP already says, right in the existing sentence you quoted, "as necessary to improve readability", not "always". It's not necessary in "isnan". 

Elsewhere the PEP talks about consistency with related code. If "isnan" is documented to do effectively the same as the C function of the same name it makes sense to use the same name. That's presumably also why "inspect.isgeneratorfunction" doesn't use underscores--they're arguably necessary, but consistency with "isfunction" trumps that.

Of course it's a judgment call in borderline cases, but that's true for most of PEP 8, and it's not a problem.

Also, are you sure it's really "predicate" that's the relevant difference, as opposed to, say, "short word and really short word" (which is really just a special case of "not necessary for readability").

You use "dont_write_bytecode" as an example of something that obviously needs the underscores. I think the parallel predicate looks better as "doesnt_write_bytecode" than "doesntwritebytecode" (although actually, I'd try to avoid both "don't" and "doesn't" because of the apostrophes...).

And beyond your own example, "is_greater_than_one" or "can_fit_on_filesystem" seem like they need underscores, while "dotwice" seems fine without.

From marco.buttu at gmail.com  Fri Feb  6 09:06:29 2015
From: marco.buttu at gmail.com (Marco Buttu)
Date: Fri, 06 Feb 2015 09:06:29 +0100
Subject: [Python-ideas] [PEP8] Predicate consistency
In-Reply-To: <55FBD8C3-4343-401A-AB7C-160B216C8594@yahoo.com>
References: <54D3F082.8080803@oa-cagliari.inaf.it>
 <20150205191223.45837f66@anarchist.wooz.org>
 <54D45D76.3070106@oa-cagliari.inaf.it>
 <55FBD8C3-4343-401A-AB7C-160B216C8594@yahoo.com>
Message-ID: <54D47605.8000202@oa-cagliari.inaf.it>

On 06/02/2015 07:45, Andrew Barnert wrote:

> Elsewhere the PEP talks about consistency with related code. If "isnan" is documented to do effectively the same as the C function of the same name it makes sense to use the same name. That's presumably also why "inspect.isgeneratorfunction" doesn't use underscores--they're arguably necessary, but consistency with "isfunction" trumps that.

You are right that we have to keep consistency with other code, but I 
think we should prefer *horizontal consistency* (I mean with our own 
codebase API) over a *vertical* one.

> Also, are you sure it's really "predicate" that's the relevant difference, as opposed to, say, "short word and really short word" (which is really just a special case of "not necessary for readability").

Looking at the core and the standard library, I see we basically do not 
use underscores in get/set functions and predicates. That may be a clear 
general rule, unlike "short word", which is not clear at all. How can you 
explain the following?

sys.getswitchinterval()
sys.setswitchinterval()
asyncio.iscoroutinefunction()

and:

sys.exc_info()
asyncio.wait_for()

In these cases you can see we do not use the rule "short words", but 
almost always the rule: "get/set and predicates should not have 
underscores". I wrote almost always because sometimes we break that rule:

asyncio.get_event_loop()
float.is_integer()

And my point is that we break it because of a gap in PEP 8. But 
perhaps I am alone in thinking that it is ugly to have this kind of 
inconsistency in predicate and get/set function/method names


> You use "dont_write_bytecode" as an example of something that obviously needs the underscores.

I think it is not obvious at all. It is obvious only if we have a rule 
about predicates and get/set; otherwise I do not understand how it is 
possible that sys.dont_write_bytecode should obviously have underscores 
but sys.setswitchinterval() should not.

> And beyond your own example, "is_greater_than_one" or "can_fit_on_filesystem" seem like they need underscores, while "dotwice" seems fine without.

I said the opposite: is_greater_than_one() should have underscores, and 
as PEP 8 says, the naming rule is the same for functions and methods. 
In the standard library we do not have any can_* function or method, but 
if in the future we have one, then yes, I propose it should be 
consistent with the "predicate rule" (no underscores).

-- 
Marco Buttu

INAF-Osservatorio Astronomico di Cagliari
Via della Scienza n. 5, 09047 Selargius (CA)
Phone: 070 711 80 217
Email: mbuttu at oa-cagliari.inaf.it


From marco.buttu at gmail.com  Fri Feb  6 09:10:27 2015
From: marco.buttu at gmail.com (Marco Buttu)
Date: Fri, 06 Feb 2015 09:10:27 +0100
Subject: [Python-ideas] [PEP8] Predicate consistency
In-Reply-To: <54D47605.8000202@oa-cagliari.inaf.it>
References: <54D3F082.8080803@oa-cagliari.inaf.it>
 <20150205191223.45837f66@anarchist.wooz.org>
 <54D45D76.3070106@oa-cagliari.inaf.it>
 <55FBD8C3-4343-401A-AB7C-160B216C8594@yahoo.com>
 <54D47605.8000202@oa-cagliari.inaf.it>
Message-ID: <54D476F3.9030609@oa-cagliari.inaf.it>

On 06/02/2015 09:06, Marco Buttu wrote:
>
>> And beyond your own example, "is_greater_than_one" or 
>> "can_fit_on_filesystem" seem like they need underscores, while 
>> "dotwice" seems fine without.
>
> I said the opposite: is_greater_than_one() should have underscores, 
> and as PEP 8 says, the naming rule is the same for functions and 
> methods. In the standard library we do not have any can_* function or 
> method, but if in the future we have one, then yes, I propose it 
> should be consistent with the "predicate rule" (no underscores). 

Sorry, I meant: is_greater_than_one() should *not* have underscores


-- 
Marco Buttu

INAF-Osservatorio Astronomico di Cagliari
Via della Scienza n. 5, 09047 Selargius (CA)
Phone: 070 711 80 217
Email: mbuttu at oa-cagliari.inaf.it


From abarnert at yahoo.com  Fri Feb  6 11:16:07 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Fri, 6 Feb 2015 02:16:07 -0800
Subject: [Python-ideas] [PEP8] Predicate consistency
In-Reply-To: <54D47605.8000202@oa-cagliari.inaf.it>
References: <54D3F082.8080803@oa-cagliari.inaf.it>
 <20150205191223.45837f66@anarchist.wooz.org>
 <54D45D76.3070106@oa-cagliari.inaf.it>
 <55FBD8C3-4343-401A-AB7C-160B216C8594@yahoo.com>
 <54D47605.8000202@oa-cagliari.inaf.it>
Message-ID: <9D8CE060-E065-434E-8A47-EA061CC98184@yahoo.com>

On Feb 6, 2015, at 0:06, Marco Buttu <marco.buttu at gmail.com> wrote:

> On 06/02/2015 07:45, Andrew Barnert wrote

You've skipped over the most important point, the fact that the guideline explicitly says to separate words with underscores "as necessary to improve readability", not "always". Given that the entire PEP is a set of sometimes contradictory rules of thumb, very few parts of it explicitly call out the fact that they're not universal, hard and fast rules, so it's presumably important that this one actually does so. And yet you're insisting that we need extra special-case rules to give people permission to violate this rule?

>> Elsewhere the PEP talks about consistency with related code. If "isnan" is documented to do effectively the same as the C function of the same name it makes sense to use the same name. That's presumably also why "inspect.isgeneratorfunction" doesn't use underscores--they're arguably necessary, but consistency with "isfunction" trumps that.
> 
> Your are right we have to keep consistency with other code, but I think we should prefer *horizontal consistency* (I mean with our codebase API) over a *vertical* one.

Do you have some examples where they give different answers?

Also, note that functions like math.isnan and os.getpid predate PEP 8, and aren't likely to be changed for backward compat reasons, which means any new function that's a thin wrapper around a C/POSIX stdlib function likely has as much of a horizontal consistency reason as a vertical one--e.g., if you were to add math.isnormal as a wrapper around C isnormal, you'd probably want it to be consistent with math.isnan and friends just as much as with C isnormal, right?

In fact, if you look at newer functions that aren't very close parallels to existing functions in the same module or very thin wrappers around C functions, your rule doesn't seem to work. For example, in os again, getresgid matches the BSD C function and matches the existing os.getgid, while get_exec_path and get_terminal_size don't have such major consistency constraints and they have underscores.

Note that your new rule would leave us in the same position--we'd have some original-PEP-8-named legacy functions that we'd sometimes have to stay consistent with.

Also note that this seems to work the same way for global variables/constants as for functions, as you'd expect. sys.maxunicode has a strong parallel with legacy sys.maxsize, while other attributes added around the same time mostly have underscores.

>> Also, are you sure it's really "predicate" that's the relevant difference, as opposed to, say, "short word and really short word" (which is really just a special case of "not necessary for readability").
> 
> Looking at the core and the standard library, I see we basically do not use underscore in get/set and predicates.

Except when we do. As in the os examples above, or all the predicate methods in pathlib.Path, or many other places where there are no relevant consistency issues.

> This may be a clear general rule, and not "short word" that is not clear at all. How can you explain the following?
> 
> sys.getswitchinterval()
> sys.setswitchinterval()
> asyncio.iscoroutinefunction()

The first two are consistent with the legacy (pre-PEP8) get/setcheckinterval function, and the last is consistent with isgeneratorfunction, itself consistent with isfunction as you (I think) accepted above.

> And my point is that we break it because of a lack in the PEP8.

Or we break it because sometimes consistency beats readability, and maybe occasionally some other guideline, and in rare cases because the "necessary for readability" clause is a tough judgment call.

> But perphaps I am alone thinking that is ugly to have this kind of inconsistency in predicates and get/set function/method names

Given that both Python 2.1 and 3.0 decided it wasn't worth changing all the legacy names for backward-compat reasons, and you're not suggesting changing them today, I don't think there's any way to avoid having that inconsistency.

But following the existing PEP 8 guidelines--adding underscores when necessary for readability except when another guideline like consistency trumps it--hasn't been a problem for the last 13 years, so I don't see it making things worse over the next 13.

>> You use "dont_write_bytecode" as an example of something that obviously needs the underscores.
> 
> I think it is not obvious at all. It is obvious just in the case we have a rule about predicates and get/set, othewise I do not understand how come it is possible that sys.dont_write_bytecode() should obviously have underscores but sys.setswitchinterval() should not.

Once again you've skipped over the most important point. Are you seriously suggesting that if we have a pair of related functions, they should be named "dont_write_bytecode" and "doesntwritebytecode" because the latter is a predicate?

>> And beyond your own example, "is_greater_than_one" or "can_fit_on_filesystem" seem like they need underscores, while "dotwice" seems fine without.
> 
> I said the opposite: is_greather_then_one() should have unserscores

No you didn't. You said predicates should _not_ have underscores. But under current PEP 8, this one probably should, and I think that's better. (If you don't agree with that one, consider a name with vowels at the word edges, like "is_any_one_among" and imagine that without the underscores.)

> , and as the PEP8 says, the naming rule is the same for functions and methods. In the standard library we do not have any can_* function or method,

From a quick ack I see we have can_write and friends in asyncio, can_fs_encode in distutils, cannot_convert in lib2to3, can_fetch in urllib/robotparser, a variety of can_ functions in the test suite... and not a single can\w except for cancel, canvas, and canonical.

> but if in the future we will have one, then yes, I propose it should be consistent with the "predicate rule" (no underscores).

So you'd make every current can* function in the stdlib a legacy function that doesn't follow the current rules?

I think you're going too far trying to make the rules complete, unambiguous, and inviolable, when keeping them simple and letting humans use their judgment on the edge cases makes a lot more sense. Adding a sweeping exception to the rule, one that itself needs exceptions and will ultimately have a less compelling legacy justification, doesn't seem like a step forward.

From solipsis at pitrou.net  Fri Feb  6 13:28:52 2015
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 6 Feb 2015 13:28:52 +0100
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <20150206000438.1adeb478@fsol>
 <CALGmxEJXJQTGdpjttg2LNmjJVTrbuMW0bNiAC9-MFX65LJAORQ@mail.gmail.com>
Message-ID: <20150206132852.2974b66c@fsol>


Ok, more simply then: does is_close_to(0.0, 0.0) return True?



On Thu, 5 Feb 2015 16:24:21 -0800
Chris Barker <chris.barker at noaa.gov> wrote:
> On Thu, Feb 5, 2015 at 3:04 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> 
> 
> > I think it is a nuisance that this PEP doesn't work at zero by default.
> >
> 
> I think this is the biggest point of contention -- it's a hard problem.
> 
> If this is meant to ease the life of non-specialist users then it
> > should do the right thing out of the box, IMO.
> >
> 
> That's the trick -- there simply IS NO "right thing" to be done out of the
> box. A relative comparison to zero is mathematically and computationally
> impossible -- you need an absolute comparison. And to set a default for
> that you need to know the general magnitude of the values your user is
> going to have -- and we can't know that.
> 
> If we really believe that almost all people will be working with numbers of
> around a magnitude of 1, then we could set a default -- by why in the world
> are you using floating point numbers with a range of about 1e-300 to 1e300
> in that case?
> 
> NOTE: numpy.allclose() uses an absolute tolerance default of 1e-08 -- so
> someone thought that was a good idea. But as I've been thinking about all
> this, I decided it was very dangerous. In fact, I just did a quick grep of
> the unit tests in my current code base: 159 uses of allclose, and almost
> none with an absolute tolerance specified -- now I need to go and check
> those, most of them probably need a more carefully thought out value.
> 
> I'm (and my team) are just one careless programmer, I guess, but I don't
> think I'm the only one that tends to use defaults, and only go back and
> think about if a test fails. And as you say, the "non-specialist users" are
> the most likely to be careless.
> 
> -Chris
> 
> 
> 
> 




From p.f.moore at gmail.com  Fri Feb  6 14:35:43 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Fri, 6 Feb 2015 13:35:43 +0000
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <20150206132852.2974b66c@fsol>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <20150206000438.1adeb478@fsol>
 <CALGmxEJXJQTGdpjttg2LNmjJVTrbuMW0bNiAC9-MFX65LJAORQ@mail.gmail.com>
 <20150206132852.2974b66c@fsol>
Message-ID: <CACac1F8xBxEpSXU4EtqohJSXYKMbH_48gZo0=1XycwN8jD7ttw@mail.gmail.com>

On 6 February 2015 at 12:28, Antoine Pitrou <solipsis at pitrou.net> wrote:
> Ok, more simply then: does is_close_to(0.0, 0.0) return True?

From the formula in the PEP

   """abs(a-b) <= max(rel_tolerance * min(abs(a), abs(b)), abs_tolerance)"""

yes it does. More generally, is_close_to(x, x) is always true. That's
a key requirement - that "closeness" includes equality.

I think the "weirdness" around zero is simply that there's no x for
which is_close_to(x, 0) and x != 0. Which TBH isn't really all that
weird :-)
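Working through that formula directly confirms both points (names as in the quoted draft):

```python
def is_close_to(a, b, rel_tolerance=1e-8, abs_tolerance=0.0):
    # The draft formula quoted above, with balanced parentheses.
    return abs(a - b) <= max(rel_tolerance * min(abs(a), abs(b)),
                             abs_tolerance)

# Closeness includes equality, even at zero:
assert is_close_to(0.0, 0.0)

# ...but with the default abs_tolerance of 0.0, no nonzero value is
# close to zero, because the relative term collapses to 0:
assert not is_close_to(0.0, 1e-300)

# Supplying an absolute tolerance restores closeness near zero:
assert is_close_to(0.0, 1e-300, abs_tolerance=1e-12)
```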

FWIW, I'm +0.5 with the PEP. (It lost a 0.5 for no other reason than I
don't actually have a use for the function, so it's mostly theoretical
interest for me).

Paul

From ncoghlan at gmail.com  Fri Feb  6 14:48:36 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 6 Feb 2015 23:48:36 +1000
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CACac1F8xBxEpSXU4EtqohJSXYKMbH_48gZo0=1XycwN8jD7ttw@mail.gmail.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <20150206000438.1adeb478@fsol>
 <CALGmxEJXJQTGdpjttg2LNmjJVTrbuMW0bNiAC9-MFX65LJAORQ@mail.gmail.com>
 <20150206132852.2974b66c@fsol>
 <CACac1F8xBxEpSXU4EtqohJSXYKMbH_48gZo0=1XycwN8jD7ttw@mail.gmail.com>
Message-ID: <CADiSq7du-M+tCTvaU0PdRCRN01x=2JMFS7Bw3Q13a4x0aSaY-w@mail.gmail.com>

On 6 February 2015 at 23:35, Paul Moore <p.f.moore at gmail.com> wrote:
> On 6 February 2015 at 12:28, Antoine Pitrou <solipsis at pitrou.net> wrote:
>> Ok, more simply then: does is_close_to(0.0, 0.0) return True?
>
> From the formula in the PEP
>
>    """abs(a-b) <= max(rel_tolerance * min(abs(a), abs(b)), abs_tolerance)"""
>
> yes it does. More generally, is_close_to(x, x) is always true. That's
> a key requirement - that "closeness" includes equality.
>
> I think the "weirdness" around zero is simply that there's no x for
> which is_close_to(x, 0) and x != 0. Which TBH isn't really all that
> weird :-)

Right, by default it degrades to exact equality when one of the
operands is zero, just as it will degrade to exact equality when the
relative tolerance is set to zero. This should probably be stated
explicitly in the PEP, since it isn't immediately obvious from the
formal definition.
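This degradation can be sketched directly from the draft formula quoted
earlier in the thread. The following is a minimal illustration, not the
PEP's reference implementation; the defaults mirror the draft signature
(rel_tolerance=1e-9, abs_tolerance=0.0):

```python
# Minimal sketch of the draft closeness test discussed in this thread.
def is_close(a, b, rel_tolerance=1e-9, abs_tolerance=0.0):
    return abs(a - b) <= max(rel_tolerance * min(abs(a), abs(b)),
                             abs_tolerance)

# With abs_tolerance left at 0.0, a comparison against zero degrades
# to exact equality...
assert is_close(0.0, 0.0)
assert not is_close(1e-300, 0.0)
# ...just as setting the relative tolerance to zero degrades any
# comparison to exact equality:
assert is_close(1.0, 1.0, rel_tolerance=0.0)
assert not is_close(1.0, 1.0 + 1e-9, rel_tolerance=0.0)
# A nonzero absolute tolerance restores near-zero comparisons:
assert is_close(1e-10, 0.0, abs_tolerance=1e-9)
```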

+1 on the PEP from me - it meets the goal I suggested of being clearly
better for comparing floating point values than standard equality, and
manages to hide most of the surprising complexity of floating point
comparisons.

Regards,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From chris.barker at noaa.gov  Fri Feb  6 16:51:41 2015
From: chris.barker at noaa.gov (Chris Barker - NOAA Federal)
Date: Fri, 6 Feb 2015 07:51:41 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <20150206132852.2974b66c@fsol>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <20150206000438.1adeb478@fsol>
 <CALGmxEJXJQTGdpjttg2LNmjJVTrbuMW0bNiAC9-MFX65LJAORQ@mail.gmail.com>
 <20150206132852.2974b66c@fsol>
Message-ID: <-7976682032941849255@unknownmsgid>

> On Feb 6, 2015, at 4:29 AM, Antoine Pitrou <solipsis at pitrou.net> wrote:
>
>
> Ok, more simply then: does is_close_to(0.0, 0.0) return True?

Yes.

But isclose(0.0, something_greater_than_zero) will not.

Chris


>
>
>
> On Thu, 5 Feb 2015 16:24:21 -0800
> Chris Barker <chris.barker at noaa.gov> wrote:
>> On Thu, Feb 5, 2015 at 3:04 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:
>>
>>
>>> I think it is a nuisance that this PEP doesn't work at zero by default.
>>
>> I think this is the biggest point of contention -- it's a hard problem.
>>
>> If this is meant to ease the life of non-specialist users then it
>>> should do the right thing out of the box, IMO.
>>
>> That's the trick -- there simply IS NO "right thing" to be done out of the
>> box. A relative comparison to zero is mathematically and computationally
>> impossible -- you need an absolute comparison. And to set a default for
>> that you need to know the general magnitude of the values your user is
>> going to have -- and we can't know that.
>>
>> If we really believe that almost all people will be working with numbers of
>> around a magnitude of 1, then we could set a default -- by why in the world
>> are you using floating point numbers with a range of about 1e-300 to 1e300
>> in that case?
>>
>> NOTE: numpy.allclose() uses an absolute tolerance default of 1e-08 -- so
>> someone thought that was a good idea. But as I've been thinking about all
>> this, I decided it was very dangerous. In fact, I just did a quick grep of
>> the unit tests in my current code base: 159 uses of allclose, and almost
>> none with an absolute tolerance specified -- now I need to go and check
>> those, most of them probably need a more carefully thought out value.
>>
>> I'm (and my team) are just one careless programmer, I guess, but I don't
>> think I'm the only one that tends to use defaults, and only go back and
>> think about if a test fails. And as you say, the "non-specialist users" are
>> the most likely to be careless.
>>
>> -Chris
>
>
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/

From mal at egenix.com  Fri Feb  6 17:07:33 2015
From: mal at egenix.com (M.-A. Lemburg)
Date: Fri, 06 Feb 2015 17:07:33 +0100
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CADiSq7du-M+tCTvaU0PdRCRN01x=2JMFS7Bw3Q13a4x0aSaY-w@mail.gmail.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>	<20150206000438.1adeb478@fsol>	<CALGmxEJXJQTGdpjttg2LNmjJVTrbuMW0bNiAC9-MFX65LJAORQ@mail.gmail.com>	<20150206132852.2974b66c@fsol>	<CACac1F8xBxEpSXU4EtqohJSXYKMbH_48gZo0=1XycwN8jD7ttw@mail.gmail.com>
 <CADiSq7du-M+tCTvaU0PdRCRN01x=2JMFS7Bw3Q13a4x0aSaY-w@mail.gmail.com>
Message-ID: <54D4E6C5.1080804@egenix.com>

There seems to be a typo in the PEP:

"""
The new function will have the following signature:

is_close(a, b, rel_tolerance=1e-9, abs_tolerance=0.0)

a and b : are the two values to be tested to relative closeness

rel_tolerance : is the relative tolerance -- it is the amount of error allowed, relative to the
magnitude a and b. For example, to set a tolerance of 5%, pass tol=0.05. The default tolerance is
1e-8, which assures that the two values are the same within about 8 decimal digits.

abs_tolerance : is an minimum absolute tolerance level -- useful for comparisons near zero.
"""

Note that the description says 1e-8 whereas the signature says
1e-9.

The github link in the PEP gives a 404:

https://github.com/PythonCHB/close_pep/blob/master


In general, I think the PEP approaches the problem from a slightly
different angle than one might expect.

I don't really want to continue the bike shedding around the PEP, but
just add a slightly different perspective which might be useful as
additional comment in the PEP.

In practice, you'd probably want to check whether the result of a computation
(the approximation) is within a certain *precision range* around the
true value; in other words, you're interested in the
absolute error of your calculation when the true value
is 0.0, and in the relative error otherwise.

Say your true value is v and the calculated one v_approx.
The absolute error is abs(v - v_approx), the relative error
abs(1 - v_approx/v) (for v != 0).

This gives the following implementation:

"""
_debug = False

def reasonably_close(v_approx, v, max_rel_error=1e-8, max_abs_error=1e-8):

    """ Check whether the calculated value v_approx is within
        error range of the true value v.

        If v == 0.0, an absolute error check is done using max_abs_error as
        valid error range.

        For all other values, a relative error check is done using
        max_rel_error as basis.

        This is done, since it is assumed that if you're calculating a value
        close to a value, you'd expect the calculation to be correct within
        a certain precision range.

    """
    if v == 0.0:
        # Use the absolute error as basis when checking calculations which
        # should return 0.0.
        abs_error = abs(v_approx - v)
        if _debug:
            print ('abs error = %s' % abs_error)
        return (abs_error <= max_abs_error)

    # Use the relative error for all other true values.
    rel_error = abs(1 - v_approx / v)
    if _debug:
        print ('rel error = %s' % rel_error)
    return (rel_error <= max_rel_error)

# Test equal values
assert reasonably_close(0.0, 0.0)
assert reasonably_close(1e-8, 1e-8)
assert reasonably_close(1.0, 1.0)

# Test around 0.0
assert reasonably_close(1e-8, 0.0)
assert not reasonably_close(2e-8, 0.0)
assert reasonably_close(1e-8, 1e-8)
assert not reasonably_close(2e-8, 0.0)

# Test close to 0.0
assert reasonably_close(1e-8 + 1e-20, 1e-8)
assert reasonably_close(1e-8 + 1e-16, 1e-8)
assert not reasonably_close(1e-8 + 2e-16, 1e-8)

# Test around 1.0
assert reasonably_close(1 + 1e-8, 1.0)
assert not reasonably_close(1 + 2e-8, 1.0)

# Test large values
assert reasonably_close(1e8 + 1, 1e8)
assert not reasonably_close(1e8 + 2, 1e8)
assert reasonably_close(1e9 + 1, 1e9)
assert reasonably_close(1e9 + 2, 1e9)
assert reasonably_close(1e9 + 10, 1e9)
assert not reasonably_close(1e9 + 11, 1e9)
assert reasonably_close(1e-9, 0.0)
assert reasonably_close(1 + 1e-9, 1.0)

# Test negative values
assert reasonably_close(-1.0, -1.0)
assert reasonably_close(-1e-8, 0.0)
assert reasonably_close(-1 - 1e-8, -1.0)
assert reasonably_close(-1e-9, 0.0)
assert reasonably_close(-1 - 1e-9, -1.0)

# Test different signs
assert not reasonably_close(0.0, 1.0)
assert not reasonably_close(0.0, -1.0)
assert not reasonably_close(1.0, -1.0)
"""

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Feb 06 2015)
>>> Python Projects, Coaching and Consulting ...  http://www.egenix.com/
>>> mxODBC Plone/Zope Database Adapter ...       http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________

::::: Try our mxODBC.Connect Python Database Interface for free ! ::::::

   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
    D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
           Registered at Amtsgericht Duesseldorf: HRB 46611
               http://www.egenix.com/company/contact/

From Nikolaus at rath.org  Fri Feb  6 17:24:36 2015
From: Nikolaus at rath.org (Nikolaus Rath)
Date: Fri, 06 Feb 2015 08:24:36 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CAPJVwBnRMKQTuKzUZhA49SaPbitJF1fQobhgHs487=w4NshYKQ@mail.gmail.com>
 (Nathaniel Smith's message of "Thu, 5 Feb 2015 17:07:46 -0800")
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <20150206000438.1adeb478@fsol>
 <CALGmxEJXJQTGdpjttg2LNmjJVTrbuMW0bNiAC9-MFX65LJAORQ@mail.gmail.com>
 <CAPJVwBnRMKQTuKzUZhA49SaPbitJF1fQobhgHs487=w4NshYKQ@mail.gmail.com>
Message-ID: <8761bfkny3.fsf@thinkpad.rath.org>

Nathaniel Smith <njs-e+AXbWqSrlAAvxtiuMwx3w at public.gmane.org> writes:
> It's an observable fact, though, that people generally do take care to
> ensure that their numbers are within a few orders of magnitude of 1,
> e.g. by choosing appropriate units, log-transforming, etc.

Do you have a credible source for that, or are you just speculating? At
least in my field I'm regularly working with e.g. particle densities in
the order of 10^19, and I've never seen anyone log-transforming these
numbers (let alone inventing new units for them).

> You could just as reasonably ask why in the world SI prefixes exist
> when we have access to scientific notation. And I wouldn't know the
> answer :-). But they do!

Because they are shorter to write, and easier to parse. Compare

 3.12 x 10^-6 s

(13 characters, not counting ^) with

 3.12 µs

(7 characters).

HTH,
-Nikolaus
-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

             "Time flies like an arrow, fruit flies like a Banana."

From chris.barker at noaa.gov  Fri Feb  6 16:55:06 2015
From: chris.barker at noaa.gov (Chris Barker - NOAA Federal)
Date: Fri, 6 Feb 2015 07:55:06 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CADiSq7du-M+tCTvaU0PdRCRN01x=2JMFS7Bw3Q13a4x0aSaY-w@mail.gmail.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <20150206000438.1adeb478@fsol>
 <CALGmxEJXJQTGdpjttg2LNmjJVTrbuMW0bNiAC9-MFX65LJAORQ@mail.gmail.com>
 <20150206132852.2974b66c@fsol>
 <CACac1F8xBxEpSXU4EtqohJSXYKMbH_48gZo0=1XycwN8jD7ttw@mail.gmail.com>
 <CADiSq7du-M+tCTvaU0PdRCRN01x=2JMFS7Bw3Q13a4x0aSaY-w@mail.gmail.com>
Message-ID: <-5269634369933126636@unknownmsgid>

> Right, by default it degrades to exact equality when one of the
> operands is zero, just as it will degrade to exact equality when the
> relative tolerance is set to zero. This should probably be stated
> explicitly in the PEP,

Will do.

- Chris

From barry at python.org  Fri Feb  6 18:20:00 2015
From: barry at python.org (Barry Warsaw)
Date: Fri, 6 Feb 2015 12:20:00 -0500
Subject: [Python-ideas] [PEP8] Predicate consistency
References: <54D3F082.8080803@oa-cagliari.inaf.it>
 <20150205191223.45837f66@anarchist.wooz.org>
 <54D45D76.3070106@oa-cagliari.inaf.it>
Message-ID: <20150206122000.670b0dcf@anarchist.wooz.org>

On Feb 06, 2015, at 07:21 AM, Marco Buttu wrote:

>I am not so crazy ;) I am just proposing a clear rule for predicates in order
>to avoid, _in the future_, inconsistency with the current general rule.

IMHO, there's nothing special about predicates.  All else being equal, new
APIs would benefit from using underscores between words for all multi-word
methods.  But sure, there are lots of reasons why some of our APIs don't
follow that guideline.  Some of those reasons are good, some are not so good
<wink>.

I still think it's the right guideline to promote.

Cheers,
-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: OpenPGP digital signature
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150206/49b89678/attachment.sig>

From ncoghlan at gmail.com  Sat Feb  7 01:10:03 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 7 Feb 2015 10:10:03 +1000
Subject: [Python-ideas] [PEP8] Predicate consistency
In-Reply-To: <20150206122000.670b0dcf@anarchist.wooz.org>
References: <54D3F082.8080803@oa-cagliari.inaf.it>
 <20150205191223.45837f66@anarchist.wooz.org>
 <54D45D76.3070106@oa-cagliari.inaf.it>
 <20150206122000.670b0dcf@anarchist.wooz.org>
Message-ID: <CADiSq7e2AUza4TtusCT-yEW4LWCRZpRc4zvyC7tftgcvNSqNtQ@mail.gmail.com>

On 7 February 2015 at 03:20, Barry Warsaw <barry at python.org> wrote:
> On Feb 06, 2015, at 07:21 AM, Marco Buttu wrote:
>
>>I am not so crazy ;) I am just proposing a clear rule for predicates in order
>>to avoid, _in the future_, inconsistency with the current general rule.
>
> IMHO, there's nothing special about predicates.  All else being equal, new
> APIs would benefit from using underscores between words for all multi-word
> methods.  But sure, there are lots of reasons why some of our APIs don't
> follow that guideline.  Some of those reasons are good, some are not so good
> <wink>.

As one of my co-workers puts it: "sometimes things are the way they
are purely for hysterical raisins"* :)

Cheers,
Nick.

*I'm not sure how well the pun will translate for non-native English
speakers, so I'll be explicit that it sounds a lot like "historical
reasons" when said aloud :)

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From chris.barker at noaa.gov  Sat Feb  7 21:22:22 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Sat, 7 Feb 2015 12:22:22 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <54D4E6C5.1080804@egenix.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <20150206000438.1adeb478@fsol>
 <CALGmxEJXJQTGdpjttg2LNmjJVTrbuMW0bNiAC9-MFX65LJAORQ@mail.gmail.com>
 <20150206132852.2974b66c@fsol>
 <CACac1F8xBxEpSXU4EtqohJSXYKMbH_48gZo0=1XycwN8jD7ttw@mail.gmail.com>
 <CADiSq7du-M+tCTvaU0PdRCRN01x=2JMFS7Bw3Q13a4x0aSaY-w@mail.gmail.com>
 <54D4E6C5.1080804@egenix.com>
Message-ID: <CALGmxEJ5GWfnk6docXp85XLnS+EdqZum6Ve5UbgzBtM44iak3Q@mail.gmail.com>

On Friday, February 6, 2015, M.-A. Lemburg <mal at egenix.com> wrote:

> There seems to be a typo in the PEP:


Got it -- thanks.

>
> The github link in the PEP gives a 404:
>
> https://github.com/PythonCHB/close_pep/blob/master


Must be the link for when you're logged in -- this one will get you there:

https://github.com/PythonCHB/close_pep


> I don't really want to continue the bike shedding around the PEP, but
> just add a slightly different perspective which might be useful as
> additional comment in the PEP.


Thanks -- I'll see what I can add to the discussion.

Too bad you couldn't easily find the sample code -- your code is
essentially the "asymmetric test" in the sample. And with Nathanial Smith's
suggestion of a zero-tolerance. If the PEP isn't convincing enough as to
why I didn't decide to go that way, then suggested text would be
appreciated.

But in short -- there is more than one way to do it -- most of them would
work fine for most use-cases -- I finally settled on what appeared to be
the consensus for "least surprising".

-Chris
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150207/5f4f748d/attachment.html>

From rosuav at gmail.com  Sat Feb  7 21:30:31 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Sun, 8 Feb 2015 07:30:31 +1100
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CALGmxEJ5GWfnk6docXp85XLnS+EdqZum6Ve5UbgzBtM44iak3Q@mail.gmail.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <20150206000438.1adeb478@fsol>
 <CALGmxEJXJQTGdpjttg2LNmjJVTrbuMW0bNiAC9-MFX65LJAORQ@mail.gmail.com>
 <20150206132852.2974b66c@fsol>
 <CACac1F8xBxEpSXU4EtqohJSXYKMbH_48gZo0=1XycwN8jD7ttw@mail.gmail.com>
 <CADiSq7du-M+tCTvaU0PdRCRN01x=2JMFS7Bw3Q13a4x0aSaY-w@mail.gmail.com>
 <54D4E6C5.1080804@egenix.com>
 <CALGmxEJ5GWfnk6docXp85XLnS+EdqZum6Ve5UbgzBtM44iak3Q@mail.gmail.com>
Message-ID: <CAPTjJmqCc9tMo5zUG8otA=J8jcAMar3qbVE7ud56FVEvDx0+3A@mail.gmail.com>

On Sun, Feb 8, 2015 at 7:22 AM, Chris Barker <chris.barker at noaa.gov> wrote:
>> The github link in the PEP gives a 404:
>>
>> https://github.com/PythonCHB/close_pep/blob/master
>
>
> Must be the link for when you're logged in -- this one will get you there:
>
> https://github.com/PythonCHB/close_pep
>

Or possibly you want to specify a file name:

https://github.com/PythonCHB/close_pep/blob/master/pep-0485.txt

ChrisA

From mertz at gnosis.cx  Sat Feb  7 21:56:45 2015
From: mertz at gnosis.cx (David Mertz)
Date: Sat, 7 Feb 2015 12:56:45 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CADiSq7du-M+tCTvaU0PdRCRN01x=2JMFS7Bw3Q13a4x0aSaY-w@mail.gmail.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <20150206000438.1adeb478@fsol>
 <CALGmxEJXJQTGdpjttg2LNmjJVTrbuMW0bNiAC9-MFX65LJAORQ@mail.gmail.com>
 <20150206132852.2974b66c@fsol>
 <CACac1F8xBxEpSXU4EtqohJSXYKMbH_48gZo0=1XycwN8jD7ttw@mail.gmail.com>
 <CADiSq7du-M+tCTvaU0PdRCRN01x=2JMFS7Bw3Q13a4x0aSaY-w@mail.gmail.com>
Message-ID: <CAEbHw4bJOz1w82=Nt3ZU=3o4h-fx9FKzE+dfS4ng=N2-rNLoKQ@mail.gmail.com>

+1 on PEP 485 as currently written.

Maybe it's not perfect in every edge case, but per Nick's standard, it is
significantly "less wrong" than 'a == b'.

On Fri, Feb 6, 2015 at 5:48 AM, Nick Coghlan <ncoghlan at gmail.com> wrote:

> On 6 February 2015 at 23:35, Paul Moore <p.f.moore at gmail.com> wrote:
> > On 6 February 2015 at 12:28, Antoine Pitrou <solipsis at pitrou.net> wrote:
> >> Ok, more simply then: does is_close_to(0.0, 0.0) return True?
> >
> > From the formula in the PEP
> >
> >    """abs(a-b) <= max( rel_tolerance * min(abs(a), abs(b), abs_tolerance
> )"""
> >
> > yes it does. More generally, is_close_to(x, x) is always true. That's
> > a key requirement - that "closeness" includes equality.
> >
> > I think the "weirdness" around zero is simply that there's no x for
> > which is_close_to(x, 0) and x != 0. Which TBH isn't really all that
> > weird :-)
>
> Right, by default it degrades to exact equality when one of the
> operands is zero, just as it will degrade to exact equality when the
> relative tolerance is set to zero. This should probably be stated
> explicitly in the PEP, since it isn't immediately obvious from the
> formal definition.
>
> +1 on the PEP from me - it meets the goal I suggested of being clearly
> better for comparing floating point values than standard equality, and
> manages to hide most of the surprising complexity of floating point
> comparisons.
>
> Regards,
> Nick.
>
> --
> Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>



-- 
Keeping medicines from the bloodstreams of the sick; food
from the bellies of the hungry; books from the hands of the
uneducated; technology from the underdeveloped; and putting
advocates of freedom in prisons.  Intellectual property is
to the 21st century what the slave trade was to the 16th.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150207/cb59cc71/attachment.html>

From edk141 at gmail.com  Sat Feb  7 23:22:44 2015
From: edk141 at gmail.com (Ed Kellett)
Date: Sat, 07 Feb 2015 22:22:44 +0000
Subject: [Python-ideas] Add orderedset as set(iterable, *,
 ordered=False) and similarly for frozenset.
Message-ID: <CABmzr0idfHk1Ae6X1FT3pUpXN3Y+-128ea2JE8mJ34Wdc_E3ag@mail.gmail.com>

On Fri Feb 06 2015 at 01:17:03 Nathaniel Smith <njs at pobox.com> wrote:
>
> Surely a.union(b) should produce the same result as a.update(b) except
> out-of-place, and .update() already has well-defined semantics for
> OrderedDicts.
>

I don't think they are equivalent. update() makes sense because it's
described as a procedure mutating one operand - among other implications,
it fundamentally doesn't commute. On the other hand, set union is
commutative, and changing that seems strange (and wrong) to me.

I think it'd be more sensible to add .extend() and + operations to ordered
sets if that's a useful thing to do to them. It mirrors the fact that they
are (more or less) mutable sequences, and it's more obvious what the order
of the result is (and + on sequences is already non-commutative).

In any case, I'm evidently wrong about this being a trivial decision. :)

Ed Kellett
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150207/968176b6/attachment.html>

From rosuav at gmail.com  Sat Feb  7 23:36:48 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Sun, 8 Feb 2015 09:36:48 +1100
Subject: [Python-ideas] Add orderedset as set(iterable, *,
 ordered=False) and similarly for frozenset.
In-Reply-To: <CABmzr0idfHk1Ae6X1FT3pUpXN3Y+-128ea2JE8mJ34Wdc_E3ag@mail.gmail.com>
References: <CABmzr0idfHk1Ae6X1FT3pUpXN3Y+-128ea2JE8mJ34Wdc_E3ag@mail.gmail.com>
Message-ID: <CAPTjJmppB=JY02YA41J1+4DpxuG6wf70Mb4QJg1z5-rGxsAkAA@mail.gmail.com>

On Sun, Feb 8, 2015 at 9:22 AM, Ed Kellett <edk141 at gmail.com> wrote:
> On the other hand, set union is commutative, and changing that seems strange
> (and wrong) to me.

Set union is, but sets lack order. Once you add order to a set, it
becomes more like a list, so the union of x and y might not be
identical to the union of y and x. They'd be the same modulo order, so
it's not breaking the concept of set union.
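That "same modulo order" behaviour can be sketched with a hypothetical
ordered-union helper. There is no stdlib OrderedSet, so this sketch uses
collections.OrderedDict as a stand-in to preserve first-seen order;
`ordered_union` is a name chosen for illustration:

```python
from collections import OrderedDict

def ordered_union(x, y):
    # Keep every element once, in first-seen order: x's order wins for
    # shared elements, so ordered_union(x, y) and ordered_union(y, x)
    # are equal as sets but may iterate in different orders.
    d = OrderedDict.fromkeys(x)
    for item in y:
        d.setdefault(item, None)
    return list(d)

assert ordered_union([1, 2, 3], [3, 4]) == [1, 2, 3, 4]
assert ordered_union([3, 4], [1, 2, 3]) == [3, 4, 1, 2]
# Same elements either way round -- only the order differs:
assert set(ordered_union([1, 2], [2, 3])) == set(ordered_union([2, 3], [1, 2]))
```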

ChrisA

From edk141 at gmail.com  Sun Feb  8 00:02:05 2015
From: edk141 at gmail.com (Ed Kellett)
Date: Sat, 07 Feb 2015 23:02:05 +0000
Subject: [Python-ideas] Add orderedset as set(iterable, *,
 ordered=False) and similarly for frozenset.
References: <CABmzr0idfHk1Ae6X1FT3pUpXN3Y+-128ea2JE8mJ34Wdc_E3ag@mail.gmail.com>
 <CAPTjJmppB=JY02YA41J1+4DpxuG6wf70Mb4QJg1z5-rGxsAkAA@mail.gmail.com>
Message-ID: <CABmzr0ge7dTjDrbAdp0SF=g6Kg=7pStUkR9+aXFRY_m9kOOEzA@mail.gmail.com>

On Sat Feb 07 2015 at 10:37:39 PM Chris Angelico <rosuav at gmail.com> wrote:

> Set union is, but sets lack order. Once you add order to a set, it
> becomes more like a list


Then wouldn't adding .extend() and + make more sense anyway? List-like
operations should look like the list-like operations we already have.


> so the union of x and y might not be identical to the union of y and x


Well, not if you define union to be non-commutative, no - but that's what
we're discussing.


> They'd be the same modulo order, so it's not breaking the concept of set
> union.


Sure it is: operations are supposed to yield things that make sense. Set
union yields the set of elements contained in any of its operands; even if
its operands have orderings that make sense, the ordering between sets
might be partial or inconsistent, so the result cannot. So the existing
definition of set union can never produce a correct ordering.

I can't see why it would be preferable to redefine union as concatenation
rather than just supporting explicit concatenation. Apart from feeling more
mathematically sound to me, it's better at documenting intention: which
method (or operator) you choose reflects whether you're interested in the
order of the result or just its contents.

Ed Kellett
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150207/af3e43d5/attachment.html>

From mistersheik at gmail.com  Sun Feb  8 00:10:52 2015
From: mistersheik at gmail.com (Neil Girdhar)
Date: Sat, 7 Feb 2015 18:10:52 -0500
Subject: [Python-ideas] Add orderedset as set(iterable, *,
 ordered=False) and similarly for frozenset.
In-Reply-To: <CABmzr0ge7dTjDrbAdp0SF=g6Kg=7pStUkR9+aXFRY_m9kOOEzA@mail.gmail.com>
References: <CABmzr0idfHk1Ae6X1FT3pUpXN3Y+-128ea2JE8mJ34Wdc_E3ag@mail.gmail.com>
 <CAPTjJmppB=JY02YA41J1+4DpxuG6wf70Mb4QJg1z5-rGxsAkAA@mail.gmail.com>
 <CABmzr0ge7dTjDrbAdp0SF=g6Kg=7pStUkR9+aXFRY_m9kOOEzA@mail.gmail.com>
Message-ID: <CAA68w_=0tsDXhqN6sjf4RMjzABqjC4zytO4agrY3Kc25y0qXpg@mail.gmail.com>

On Sat, Feb 7, 2015 at 6:02 PM, Ed Kellett <edk141 at gmail.com> wrote:

> On Sat Feb 07 2015 at 10:37:39 PM Chris Angelico <rosuav at gmail.com> wrote:
>
>> Set union is, but sets lack order. Once you add order to a set, it
>> becomes more like a list
>
>
> Then wouldn't adding .extend() and + make more sense anyway? List-like
> operations should look like the list-like operations we already have.
>
>
>> so the union of x and y might not be identical to the union of y and x
>
>
> Well, not if you define union to be non-commutative, no - but that's what
> we're discussing.
>
>
>> They'd be the same modulo order, so it's not breaking the concept of set
>> union.
>
>
> Sure it is: operations are supposed to yield things that make sense. Set
> union yields the set of elements contained in any of its operands; even if
> its operands have orderings that make sense, the ordering between sets
> might be partial or inconsistent, so the result cannot. So the existing
> definition of set union can never produce a correct ordering.
>
> I can't see why it would be preferable to redefine union as concatenation
> rather than just supporting explicit concatenation. Apart from feeling more
> mathematically sound to me, it's better at documenting intention: which
> method (or operator) you choose reflects whether you're interested in the
> order of the result or just its contents.
>

The reason is that that's already what update does, which is the most
common way to use OrderedSet.  Typically, you have a bunch of things and
you want a set of unique elements in the order they were added.  So you
create your accumulator and then update it with iterables of elements,
after which it contains your desired ordered set of unique elements.  Maybe
your code looks like this:

a = OrderedSet()

a.update(b)

a.update(c)

a.update(d)

Why shouldn't that be the same as

a | b | c | d

?

I think it should and that in general union should be equivalent in effect
to copy and extend.
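The equivalence Neil describes can be sketched with a minimal hypothetical
OrderedSet (not a stdlib type) whose | is defined as copy-and-update:

```python
from collections import OrderedDict

class OrderedSet:
    # Just enough of a hypothetical OrderedSet to show that repeated
    # update() produces the same result as chained |.
    def __init__(self, iterable=()):
        self._d = OrderedDict.fromkeys(iterable)

    def update(self, iterable):
        # Add unseen elements, preserving first-seen order.
        for item in iterable:
            self._d.setdefault(item, None)

    def __or__(self, other):
        # Union as copy-and-update, matching the semantics Neil proposes.
        result = OrderedSet(self._d)
        result.update(other)
        return result

    def __iter__(self):
        return iter(self._d)

    def __eq__(self, other):
        return list(self) == list(other)

a = OrderedSet()
for chunk in ([2, 1], [1, 3], [4]):
    a.update(chunk)
assert list(a) == [2, 1, 3, 4]
# Chained | gives the same ordered result as repeated update():
assert OrderedSet([2, 1]) | [1, 3] | [4] == a
```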

Best,

Neil

>
> Ed Kellett
>
> --
>
> ---
> You received this message because you are subscribed to a topic in the
> Google Groups "python-ideas" group.
> To unsubscribe from this topic, visit
> https://groups.google.com/d/topic/python-ideas/4WZbMC2pNe0/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to
> python-ideas+unsubscribe at googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150207/7b77aad5/attachment.html>

From edk141 at gmail.com  Sun Feb  8 00:24:09 2015
From: edk141 at gmail.com (Ed Kellett)
Date: Sat, 07 Feb 2015 23:24:09 +0000
Subject: [Python-ideas] Add orderedset as set(iterable, *,
 ordered=False) and similarly for frozenset.
References: <CABmzr0idfHk1Ae6X1FT3pUpXN3Y+-128ea2JE8mJ34Wdc_E3ag@mail.gmail.com>
 <CAPTjJmppB=JY02YA41J1+4DpxuG6wf70Mb4QJg1z5-rGxsAkAA@mail.gmail.com>
 <CABmzr0ge7dTjDrbAdp0SF=g6Kg=7pStUkR9+aXFRY_m9kOOEzA@mail.gmail.com>
 <CAA68w_=0tsDXhqN6sjf4RMjzABqjC4zytO4agrY3Kc25y0qXpg@mail.gmail.com>
Message-ID: <CABmzr0jWSSk0n8VYsZzTG57x0osKs70RSJJ+jBGUh2Y6ZP-3Hw@mail.gmail.com>

On Sat Feb 07 2015 at 11:12:02 PM Neil Girdhar <mistersheik at gmail.com>
wrote:
>
> Why shouldn't that be the same as
>
> a | b | c | d
>
> ?
>
> I think it should and that in general union should be equivalent in effect
> to copy and extend.
>

Because yielding an ordering doesn't have any basis in the definition of
union, and it's not any easier to write than a + b + c + d. "copy and
extend" is just concatenation - is there any reason *not* to use the
concatenation operator for it, rather than the union?

Ed Kellett
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150207/71a6ba37/attachment.html>

From abarnert at yahoo.com  Sun Feb  8 01:18:38 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Sat, 7 Feb 2015 16:18:38 -0800
Subject: [Python-ideas] Add orderedset as set(iterable, *,
	ordered=False) and similarly for frozenset.
In-Reply-To: <CABmzr0jWSSk0n8VYsZzTG57x0osKs70RSJJ+jBGUh2Y6ZP-3Hw@mail.gmail.com>
References: <CABmzr0idfHk1Ae6X1FT3pUpXN3Y+-128ea2JE8mJ34Wdc_E3ag@mail.gmail.com>
 <CAPTjJmppB=JY02YA41J1+4DpxuG6wf70Mb4QJg1z5-rGxsAkAA@mail.gmail.com>
 <CABmzr0ge7dTjDrbAdp0SF=g6Kg=7pStUkR9+aXFRY_m9kOOEzA@mail.gmail.com>
 <CAA68w_=0tsDXhqN6sjf4RMjzABqjC4zytO4agrY3Kc25y0qXpg@mail.gmail.com>
 <CABmzr0jWSSk0n8VYsZzTG57x0osKs70RSJJ+jBGUh2Y6ZP-3Hw@mail.gmail.com>
Message-ID: <B6965453-3F68-44F2-820F-7E9C95C9A89A@yahoo.com>

On Feb 7, 2015, at 15:24, Ed Kellett <edk141 at gmail.com> wrote:

> On Sat Feb 07 2015 at 11:12:02 PM Neil Girdhar <mistersheik at gmail.com> wrote:
>> 
>> Why shouldn't that be the same as
>> 
>> a | b | c | d
>> 
>> ?
>> 
>> I think it should and that in general union should be equivalent in effect to copy and extend.
> 
> Because yielding an ordering doesn't have any basis in the definition of union, and it's not any easier to write than a + b + c + d. "copy and extend" is just concatenation - is there any reason *not* to use the concatenation operator for it, rather than the union?

In a+b, I'd expect that all of the elements of b will appear in the result in the same order they appear in b. That's how it works for sequences. But it can't work that way for ordered sets if there are any elements of b also in a.

More importantly, that's the entire point of ordered sets. If you wanted to preserve (or didn't care about) duplicates, you'd just use a list.

What you actually want here is an "unsorted union"--all the elements of a in order, then all the elements of b except those also in a in order, and so on. (That's "unsorted" as opposed to a sorted union, where the sequences are interleaved in such a way as to preserve a shared ordering rule.) It seems a lot more natural to call that "union" than concatenate. But it might be better to explicitly call it "unsorted union", or even to just not provide it. (IIRC, in Mathematica, you do it by concatenating the sets into a list, then calling DeleteDuplicates to turn the result back into an ordered set, or by calling Tally on the concatenated list and selecting the ones whose tally is 1, or various other equivalents.)

The fact that unsorted union isn't commutative doesn't mean it's not a reasonable extension of union. The best-known extension of union is probably the disjoint union, which also isn't commutative: {1, 2} U* {0, 1} = {(1,0), (2,0), (0,1), (1,1)} but {0, 1} U* {1, 2} = {(0,0), (1,0), (1,1), (2,1)}.
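The disjoint union used here is the standard tagged-pair construction; a
small sketch (`disjoint_union` is an illustrative name, not a stdlib
function):

```python
def disjoint_union(*sets):
    # Tag each element with the index of the operand it came from, so
    # shared elements stay distinct; swapping operands swaps the tags,
    # which is why the operation is not commutative.
    return {(elem, i) for i, s in enumerate(sets) for elem in s}

assert disjoint_union({1, 2}, {0, 1}) == {(1, 0), (2, 0), (0, 1), (1, 1)}
assert disjoint_union({0, 1}, {1, 2}) == {(0, 0), (1, 0), (1, 1), (2, 1)}
```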
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150207/11d725da/attachment-0001.html>

From mistersheik at gmail.com  Sun Feb  8 01:34:27 2015
From: mistersheik at gmail.com (Neil Girdhar)
Date: Sat, 7 Feb 2015 19:34:27 -0500
Subject: [Python-ideas] Add orderedset as set(iterable, *,
 ordered=False) and similarly for frozenset.
In-Reply-To: <CABmzr0jWSSk0n8VYsZzTG57x0osKs70RSJJ+jBGUh2Y6ZP-3Hw@mail.gmail.com>
References: <CABmzr0idfHk1Ae6X1FT3pUpXN3Y+-128ea2JE8mJ34Wdc_E3ag@mail.gmail.com>
 <CAPTjJmppB=JY02YA41J1+4DpxuG6wf70Mb4QJg1z5-rGxsAkAA@mail.gmail.com>
 <CABmzr0ge7dTjDrbAdp0SF=g6Kg=7pStUkR9+aXFRY_m9kOOEzA@mail.gmail.com>
 <CAA68w_=0tsDXhqN6sjf4RMjzABqjC4zytO4agrY3Kc25y0qXpg@mail.gmail.com>
 <CABmzr0jWSSk0n8VYsZzTG57x0osKs70RSJJ+jBGUh2Y6ZP-3Hw@mail.gmail.com>
Message-ID: <CAA68w_mGex4wBic44y9HobSbTjVouRxPd4diVY-YH2PWuO+1UA@mail.gmail.com>

You make a good point, but I'm still unfortunately unconvinced.  If it were
called UniqueList, I would agree with you.  Since it's called OrderedSet, I
expect it to use the same operations as set.   Also, typically I implement
something using set and then after the fact realize that I need ordering.
I then change the declaration to OrderedSet.  With your solution, I would
have to also change | and |= to + and +=.  I feel that update should remain
equivalent to |= rather than switching to +=.

Best,

Neil

On Sat, Feb 7, 2015 at 6:24 PM, Ed Kellett <edk141 at gmail.com> wrote:

> On Sat Feb 07 2015 at 11:12:02 PM Neil Girdhar <mistersheik at gmail.com>
> wrote:
>>
>> Why shouldn't that be the same as
>>
>> a | b | c | d
>>
>> ?
>>
>> I think it should and that in general union should be equivalent in
>> effect to copy and extend.
>>
>
> Because yielding an ordering doesn't have any basis in the definition of
> union, and it's not any easier to write than a + b + c + d. "copy and
> extend" is just concatenation - is there any reason *not* to use the
> concatenation operator for it, rather than the union?
>
> Ed Kellett
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150207/636a6234/attachment.html>

From edk141 at gmail.com  Sun Feb  8 01:37:19 2015
From: edk141 at gmail.com (Ed Kellett)
Date: Sun, 08 Feb 2015 00:37:19 +0000
Subject: [Python-ideas] Add orderedset as set(iterable, *,
 ordered=False) and similarly for frozenset.
References: <CABmzr0idfHk1Ae6X1FT3pUpXN3Y+-128ea2JE8mJ34Wdc_E3ag@mail.gmail.com>
 <CAPTjJmppB=JY02YA41J1+4DpxuG6wf70Mb4QJg1z5-rGxsAkAA@mail.gmail.com>
 <CABmzr0ge7dTjDrbAdp0SF=g6Kg=7pStUkR9+aXFRY_m9kOOEzA@mail.gmail.com>
 <CAA68w_=0tsDXhqN6sjf4RMjzABqjC4zytO4agrY3Kc25y0qXpg@mail.gmail.com>
 <CABmzr0jWSSk0n8VYsZzTG57x0osKs70RSJJ+jBGUh2Y6ZP-3Hw@mail.gmail.com>
 <B6965453-3F68-44F2-820F-7E9C95C9A89A@yahoo.com>
Message-ID: <CABmzr0ipWGzdDCc4QgnokXM97ZX=zVmyfMMujy8krT619cBMuQ@mail.gmail.com>

On Sun Feb 08 2015 at 12:18:41 AM Andrew Barnert <abarnert at yahoo.com> wrote:
>
> In a+b, I'd expect that all of the elements of b will appear in the result
> in the same order they appear in b. That's how it works for sequences. But
> it can't work that way for ordered sets if there are any elements of b also
> in a.
>

Well, a + b would add elements of b to a in the order they appear in b--they
just wouldn't appear there.


> More importantly, that's the entire point of ordered sets. If you wanted
> to preserve (or didn't care about) duplicates, you'd just use a list.
>

And that's why a + b behaving that way makes sense. To me.


> What you actually want here is an "unsorted union"--all the elements of a
> in order, then all the elements of b except those also in a in order, and
> so on. (That's "unsorted" as opposed to a sorted union, where the sequences
> are interleaved in such a way as to preserve a shared ordering rule.) It
> seems a lot more natural to call that "union" than concatenate. But it
> might be better to explicitly call it "unsorted union", or even to just not
> provide it. (IIRC, in Mathematica, you do it by concatenating the sets into
> a list, then calling DeleteDuplicates to turn the result back into an
> ordered set, or by calling Tally on the concatenated list and selecting the
> ones whose tally is 1, or various other equivalents.)
>

Well, I don't really understand why it seems natural to call it union
rather than concatenate - it is appending each element in order, which is
exactly what concatenation is.

I think this might just be a fundamental difference between our mental
models of sets and union, though, so there's probably little point arguing
about it. I think "unsorted union" is reasonable.


> The fact that unsorted union isn't commutative doesn't mean it's not a
> reasonable extension of union. The best-known extension of union is
> probably the disjoint union, which also isn't commutative: {1, 2} U* {0, 1}
> = {(1, 0), (2, 0), (0, 1), (1, 1)} but {0, 1} U* {1, 2} = {(0, 0), (1, 0),
> (1, 1), (2, 1)}.
>

But disjoint union has even less in common with union than unsorted union,
and I don't think anybody drops the "disjoint" and conflates it with
regular union.

Ed Kellett
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150208/fce6584e/attachment.html>

From rosuav at gmail.com  Sun Feb  8 01:50:33 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Sun, 8 Feb 2015 11:50:33 +1100
Subject: [Python-ideas] Add orderedset as set(iterable, *,
 ordered=False) and similarly for frozenset.
In-Reply-To: <CAA68w_mGex4wBic44y9HobSbTjVouRxPd4diVY-YH2PWuO+1UA@mail.gmail.com>
References: <CABmzr0idfHk1Ae6X1FT3pUpXN3Y+-128ea2JE8mJ34Wdc_E3ag@mail.gmail.com>
 <CAPTjJmppB=JY02YA41J1+4DpxuG6wf70Mb4QJg1z5-rGxsAkAA@mail.gmail.com>
 <CABmzr0ge7dTjDrbAdp0SF=g6Kg=7pStUkR9+aXFRY_m9kOOEzA@mail.gmail.com>
 <CAA68w_=0tsDXhqN6sjf4RMjzABqjC4zytO4agrY3Kc25y0qXpg@mail.gmail.com>
 <CABmzr0jWSSk0n8VYsZzTG57x0osKs70RSJJ+jBGUh2Y6ZP-3Hw@mail.gmail.com>
 <CAA68w_mGex4wBic44y9HobSbTjVouRxPd4diVY-YH2PWuO+1UA@mail.gmail.com>
Message-ID: <CAPTjJmp6n7mbMx6opciyhgsMMDn3TLMedRofR8yR57aZDMUSqQ@mail.gmail.com>

On Sun, Feb 8, 2015 at 11:34 AM, Neil Girdhar <mistersheik at gmail.com> wrote:
> You make a good point, but I'm still unfortunately unconvinced.  If it were
> called UniqueList, I would agree with you.  Since it's called OrderedSet, I
> expect it to use the same operations as set.

To be quite honest, I would be very happy if sets had more of the
operations that dictionaries have. Imagine if "set" were "dictionary
where all the values are True", and "OrderedSet" were "OrderedDict
where all the values are True"; combining sets is like combining
dictionaries, but you don't have to worry about merging the values.

ISTM ordered set addition/union should be like list concatenation with
duplicates dropped. But then, I'm not a mathematician, which, come to
think of it, is why I use "Practicality beats purity" Python rather
than one of the more mathematically-pure functional languages.
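For what it's worth, the dict analogy above can be sketched directly with OrderedDict (just an illustration, not a proposed API):

```python
from collections import OrderedDict

# Treat a set as a dict whose values are all True, and an ordered set
# as an OrderedDict of the same shape.  Merging then "just works" and
# duplicates collapse on the keys, keeping first-seen order.
a = OrderedDict.fromkeys('abc', True)
b = OrderedDict.fromkeys('bcd', True)

merged = OrderedDict(a)
merged.update(b)  # like list concatenation with duplicates dropped

assert list(merged) == ['a', 'b', 'c', 'd']
```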

ChrisA

From edk141 at gmail.com  Sun Feb  8 01:53:33 2015
From: edk141 at gmail.com (Ed Kellett)
Date: Sun, 08 Feb 2015 00:53:33 +0000
Subject: [Python-ideas] Add orderedset as set(iterable, *,
 ordered=False) and similarly for frozenset.
References: <CABmzr0idfHk1Ae6X1FT3pUpXN3Y+-128ea2JE8mJ34Wdc_E3ag@mail.gmail.com>
 <CAPTjJmppB=JY02YA41J1+4DpxuG6wf70Mb4QJg1z5-rGxsAkAA@mail.gmail.com>
 <CABmzr0ge7dTjDrbAdp0SF=g6Kg=7pStUkR9+aXFRY_m9kOOEzA@mail.gmail.com>
 <CAA68w_=0tsDXhqN6sjf4RMjzABqjC4zytO4agrY3Kc25y0qXpg@mail.gmail.com>
 <CABmzr0jWSSk0n8VYsZzTG57x0osKs70RSJJ+jBGUh2Y6ZP-3Hw@mail.gmail.com>
 <CAA68w_mGex4wBic44y9HobSbTjVouRxPd4diVY-YH2PWuO+1UA@mail.gmail.com>
Message-ID: <CABmzr0iXq3JAMUrVbaehGmBpQ_1h8KKB-nVbpZ5oZnf9brw0mQ@mail.gmail.com>

On Sun Feb 08 2015 at 12:34:48 AM Neil Girdhar <mistersheik at gmail.com>
wrote:

> If it were called UniqueList, I would agree with you.  Since it's called
> OrderedSet, I expect it to use the same operations as set.
>

I agree, and with that in mind, it seems wrong for it to redefine what
"union" means.


> Also, typically I implement something using set and then after the fact
> realize that I need ordering.  I then change the declaration to
> OrderedSet.  With your solution, I would have to also change | and |= to +
> and +=.  I feel that update should remain equivalent to |= rather than
> switching to +=,
>

That's a convenient case if you happened to |= other sets (which you also
had to change to OrderedSet) in the correct order already, but if you
didn't or you wanted some particular interesting ordering or you were
getting your sets to add from somewhere that didn't deal in OrderedSets,
you'd have to change that part of your program to support ordering anyway.

On the other hand, if you were .update()ing directly from some other
ordered iterable, you wouldn't have to change anything. :)

Ed Kellett
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150208/4cda0856/attachment-0001.html>

From edk141 at gmail.com  Sun Feb  8 02:07:38 2015
From: edk141 at gmail.com (Ed Kellett)
Date: Sun, 08 Feb 2015 01:07:38 +0000
Subject: [Python-ideas] Add orderedset as set(iterable, *,
 ordered=False) and similarly for frozenset.
References: <CABmzr0idfHk1Ae6X1FT3pUpXN3Y+-128ea2JE8mJ34Wdc_E3ag@mail.gmail.com>
 <CAPTjJmppB=JY02YA41J1+4DpxuG6wf70Mb4QJg1z5-rGxsAkAA@mail.gmail.com>
 <CABmzr0ge7dTjDrbAdp0SF=g6Kg=7pStUkR9+aXFRY_m9kOOEzA@mail.gmail.com>
 <CAA68w_=0tsDXhqN6sjf4RMjzABqjC4zytO4agrY3Kc25y0qXpg@mail.gmail.com>
 <CABmzr0jWSSk0n8VYsZzTG57x0osKs70RSJJ+jBGUh2Y6ZP-3Hw@mail.gmail.com>
 <CAA68w_mGex4wBic44y9HobSbTjVouRxPd4diVY-YH2PWuO+1UA@mail.gmail.com>
 <CAPTjJmp6n7mbMx6opciyhgsMMDn3TLMedRofR8yR57aZDMUSqQ@mail.gmail.com>
Message-ID: <CABmzr0ijEXaEtY6nQgdRmG=rDpQfjiKHUdR1iVsmk=z8T=SL3g@mail.gmail.com>

On Sun Feb 08 2015 at 12:52:56 AM Chris Angelico <rosuav at gmail.com> wrote:

> To be quite honest, I would be very happy if sets had more of the
> operations that dictionaries have. Imagine if "set" were "dictionary
> where all the values are True", and "OrderedSet" were "OrderedDict
> where all the values are True"; combining sets is like combining
> dictionaries, but you don't have to worry about merging the values.
>

I think they already have most of the useful ones. Adding True values seems
unnecessary, to me--that entries have no value seems more appropriate in
terms of merging them making sense.


> ISTM ordered set addition/union should be like list concatenation with
> duplicates dropped.


I agree about addition - that'd mirror list concatenation nicely. But with
that in mind, making union do the same thing seems wrong - partly because
it'd be an exact duplicate of other functionality, and partly because it's
a special case:

- `set | set` has no meaningful ordering
- `orderedset | set` has no meaningful ordering
- `set | orderedset` has no meaningful ordering
- `orderedset | orderedset` has a meaningful ordering if you expect union
to act like update(). Maybe that's more common than I thought; I always
considered them fundamentally different concepts.
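To make that last case concrete, here's a sketch with a hypothetical order-preserving `union` helper (not the behavior of any existing class):

```python
from collections import OrderedDict

def union(a, b):
    # Hypothetical order-preserving union: the left operand's order
    # first, then any new elements from the right operand.
    d = OrderedDict.fromkeys(a)
    for x in b:
        d.setdefault(x, None)
    return list(d)

# The same operands give different results depending on which side
# they appear on, so commutativity is lost:
assert union([1, 2], [2, 3]) == [1, 2, 3]
assert union([2, 3], [1, 2]) == [2, 3, 1]
```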

Ed Kellett
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150208/d139a668/attachment.html>

From rosuav at gmail.com  Sun Feb  8 02:23:17 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Sun, 8 Feb 2015 12:23:17 +1100
Subject: [Python-ideas] Add orderedset as set(iterable, *,
 ordered=False) and similarly for frozenset.
In-Reply-To: <CABmzr0ijEXaEtY6nQgdRmG=rDpQfjiKHUdR1iVsmk=z8T=SL3g@mail.gmail.com>
References: <CABmzr0idfHk1Ae6X1FT3pUpXN3Y+-128ea2JE8mJ34Wdc_E3ag@mail.gmail.com>
 <CAPTjJmppB=JY02YA41J1+4DpxuG6wf70Mb4QJg1z5-rGxsAkAA@mail.gmail.com>
 <CABmzr0ge7dTjDrbAdp0SF=g6Kg=7pStUkR9+aXFRY_m9kOOEzA@mail.gmail.com>
 <CAA68w_=0tsDXhqN6sjf4RMjzABqjC4zytO4agrY3Kc25y0qXpg@mail.gmail.com>
 <CABmzr0jWSSk0n8VYsZzTG57x0osKs70RSJJ+jBGUh2Y6ZP-3Hw@mail.gmail.com>
 <CAA68w_mGex4wBic44y9HobSbTjVouRxPd4diVY-YH2PWuO+1UA@mail.gmail.com>
 <CAPTjJmp6n7mbMx6opciyhgsMMDn3TLMedRofR8yR57aZDMUSqQ@mail.gmail.com>
 <CABmzr0ijEXaEtY6nQgdRmG=rDpQfjiKHUdR1iVsmk=z8T=SL3g@mail.gmail.com>
Message-ID: <CAPTjJmr8v4jpwKyL1UCOdUjfHupP5GmPDfWboz=xK_RmsZBXgw@mail.gmail.com>

On Sun, Feb 8, 2015 at 12:07 PM, Ed Kellett <edk141 at gmail.com> wrote:
> On Sun Feb 08 2015 at 12:52:56 AM Chris Angelico <rosuav at gmail.com> wrote:
>>
>> To be quite honest, I would be very happy if sets had more of the
>> operations that dictionaries have. Imagine if "set" were "dictionary
>> where all the values are True", and "OrderedSet" were "OrderedDict
>> where all the values are True"; combining sets is like combining
>> dictionaries, but you don't have to worry about merging the values.
>
>
> I think they already have most of the useful ones. Adding True values seems
> unnecessary, to me--that entries have no value seems more appropriate in
> terms of merging them making sense.

I don't know that *actually* having a True value for everything would
necessarily help, but in terms of defining operations like union and
intersection, it should be broadly equivalent to solving the same
problem with a dict/OrderedDict.

ChrisA

From raymond.hettinger at gmail.com  Sun Feb  8 04:21:44 2015
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Sat, 7 Feb 2015 19:21:44 -0800
Subject: [Python-ideas] Add orderedset as set(iterable, *,
	ordered=False) and similarly for frozenset.
In-Reply-To: <CAA68w_m5kUGaaxd8n7Gt33gjee92-0XnX2WxKY+UR=BZZ5jEYg@mail.gmail.com>
References: <cbe9ff04-1f93-45ae-8236-9cea7ca02473@googlegroups.com>
 <B01C16BD-FF3F-4668-9A45-ADBC8AF2512C@gmail.com>
 <CAA68w_m5kUGaaxd8n7Gt33gjee92-0XnX2WxKY+UR=BZZ5jEYg@mail.gmail.com>
Message-ID: <D0474C4F-F088-4DFE-B5F3-5EABC0B430B0@gmail.com>


> * It isn't entirely clear what the set-to-set operations should do to maintain order, whether equality tests should require matching order, whether we care about loss of commutativity, and whether we're willing to forgo the usual optimizations on set-to-set operations (i.e. the intersection algorithm loops over the smaller of the two input sets and checks membership in the larger set -- that is fast, but it would lose order if the second set were the smaller of the two).
> 
>> I agree it's not obvious, but I think that with a little debate we could come together on a reasonable choice.

"In the face of ambiguity, refuse the temptation to guess."   Some of the participants on python-ideas need to develop a much stronger aversion to opening a can of worms ;-)


>  You could even block the set-to-set operations for now.  The main use of OrderedSet just requires add, update, and iteration as your recipe illustrates.

Since you don't seem to have any specific requirements for the order created by set-to-set operations, just use the existing set operations on OrderedDict.keys():

    >>> a = OrderedDict.fromkeys(['raymond', 'rachel', 'matthew'])
    >>> b = OrderedDict.fromkeys(['matthew', 'calvin', 'rachel'])
    >>> a.keys() & b.keys()
    {'rachel', 'matthew'}
    >>> a.keys() | b.keys()
    {'rachel', 'calvin', 'matthew', 'raymond'}
    >>> a.keys() - b.keys()
    {'raymond'}
    >>> a.keys() ^ b.keys()
    {'calvin', 'raymond'}
    >>> list(a.keys())
    ['raymond', 'rachel', 'matthew']


> 
> * The standard library should provide fast, clean fundamental building blocks and defer all of the myriad of variants and derivations to PyPI.   We already provide OrderedDict and MutableSet so that rolling your own OrderedSet isn't hard (see the recipe below).
> 
> I think that OrderedSet is a fundamental building block :)

The rule for "fundamental" that I've used for itertools is whether the thing you want can easily be built out of what you've already got.   For example, you can't make accumulate() out of the other itertools, but you can make tabulate(func) from map(func, count()).  Likewise, you can easily make an OrderedSet from an OrderedDict and MutableSet, or if your needs are less, just use the existing set operations on dict views as shown above.
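Spelled out, for anyone following along (this mirrors the itertools recipe, nothing new):

```python
from itertools import count, islice

def tabulate(func, start=0):
    # The itertools docs recipe: tabulate() is just map() over count(),
    # which is why it doesn't need to live in itertools itself.
    return map(func, count(start))

squares = tabulate(lambda n: n * n)
assert list(islice(squares, 5)) == [0, 1, 4, 9, 16]
```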


> * We're likely to soon have an OrderedDict written in C (I believe Eric Snow has been working on that for some time), so anything built on top of an OrderedDict will inherit all the speed benefits. 
> 
> Sure.  It's also easy to have an OrderedSet written in C (especially once OrderedDict is done.)

You're profoundly under-estimating how much work it is to build and maintain a separate C implementation.  I've maintained the code for collections and sets for over a decade and can assure you that the maintenance burden has been enormous.

 
> Also, there is some chance that PyPy folks will contribute their work making regular sets and dicts ordered by default; in which case, having a separate OrderedSet would have been a waste and the question would be moot.
> 
> I think that using dicts and sets and assuming that they're ordered because you plan on using PyPy is a terrible error.

Perhaps, I wasn't clear.  The PyPy folks have offered to port their work back into Python.  They're offering a nice combination of faster performance (especially for iteration), better memory utilization, and as a by-product retaining insertion order.  If their offer comes to fruition, then this whole discussion is moot.

> 
> To be honest, my main issue is my personal need.  

The usual solution for personal needs is to have a personal utils module (it sounds like you've been using my recipe for over a year).  Decisions about what to put in the standard library have far reaching impact and we need to consider what is best for most of the users most of the time (and to consider our own maintenance burden).

I thought ordered sets were interesting enough to write an implementation six years ago, but frankly there are many other things that have a much better case for being added to collections (for example, some sort of graph structure, some kind of binary tree, some kind of trie, or perhaps a bloom filter).


> If there was a complete PyPI recipe, I would be very happy.  

Then, that is what we should do.


> I don't think I'm the only one, but maybe you're right and I am.
>  
> 
> * One thing I've used to test the waters is to include a link in the documentation to an OrderedSet recipe (it's in the "See also" section at the very bottom of the page at https://docs.python.org/2/library/collections.html ).  As far as I can tell, there has been nearly zero uptake based on this link.  That gives us a good indication that there is very little real-world need for an OrderedSet.
> 
> I've been using your recipe for at least a year (thank you).  How are you measuring uptake?


That is a very difficult question to answer (there is no precise tool for saying X people care about thing Y).  The inputs I do have are imprecise and incomplete but nevertheless still informative.  When Google's code search was still alive, there was a direct way of seeing what people used in published code, and I frequently searched to see what people were using in the python world and how they were using it (for example, that is how I was able to determine that str.swapcase() was nearly useless and propose that it be removed for Python 3).

Since then, I rely on a number of other sources such as reading the python bug tracker every day for the last 15 years to know what people are requesting.  Likewise, as a frequent contributor to Stack Overflow, I can see what people are asking about.  As a person who does python code reviews and teaches python courses for a living, I get to see into the code base for many companies.  Also, I spend time reading the source for popular python tools to see what their requirements are. Another source is reviewing what is included in toolsets from Continuum and Enthought to see what their clients have asked for.  None of this is precise or sufficiently inclusive, but it does give me some basis for finding out what people care about.


>> 
>> What I suggest is adding the recipe (like the one shown below) directly to the docs in this section:
>> https://docs.python.org/2/library/collections.html#ordereddict-examples-and-recipes
>> There we already have code for a LastUpdatedOrderedDict() and an OrderedCounter().
>> 
>> In the past, I've used these recipes as a way to gauge interest.  When the interest was sufficient, it got promoted from a recipe in the docs to a full implementation living in a standard library module.  For example, itertools.combinations_with_replacement() and itertools.accumulate() started out as recipes in the docs.
>> 
>> In general, there shouldn't be a rush to push more things into the standard library.  We get many suggestions in the form, "it might be nice to have XXX in the standard library"; however, in many cases there has been so little uptake in real-world code that it isn't worth the maintenance burden, or the burden on the working programmer of having too many choices in the standard library (I encounter very few people who actually know most of what is available in the standard library).
> 
> I agree.

Sounds good.  I think we can put an OrderedSet() recipe in the docs and another out on PyPI, and be done with it.


>  
> 
> PyPI is a successful secondary source of code and has become part of the working programmer's life.  We're gradually making it easier and easier to pull that code on-demand.  IMO, that is where most of the collections variants should live. Otherwise, it would be easy to fill the collections module with dozens of rarely used dict/set variants, red-black trees, pairing heaps, tries, graph implementations, bitarrays, bitsets, persistent collections, etc.
> 
> my-two-cents-ly,
> 
> 
> 
> Agreed.
> 
> It would be nice to complete one of the recipes (oset or ordered-set) -- at least for me.

Completing a recipe is more reasonable than pushing for this to be in the standard library.


Raymond



From raymond.hettinger at gmail.com  Sun Feb  8 04:26:01 2015
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Sat, 7 Feb 2015 19:26:01 -0800
Subject: [Python-ideas]  Add orderedset as set(iterable, *,
	ordered=False) and similarly for frozenset.
References: <B01C16BD-FF3F-4668-9A45-ADBC8AF2512C@gmail.com>
Message-ID: <575ABDC0-EB76-4240-97D2-BCB4D88306E5@gmail.com>

Resending this earlier post.  It was mistakenly sent to python-ideas at googlegroups.com instead of python-ideas at python.org. 

Sorry for the confusion and for the posts arriving out-of-sequence.

----------------------

> On Feb 4, 2015, at 1:14 PM, Neil Girdhar <mistersheik at gmail.com> wrote:
> 
> Hundreds of people want an orderedset (http://stackoverflow.com/questions/1653970/does-python-have-an-ordered-set) and yet the ordered sets that are in PyPI (oset, ordered-set) are incomplete.  Please consider adding an ordered set that has *all* of the same functionality as set.
> 
> The only major design decision is (I think) whether to store the elements in a linked list or dynamic vector.
> 
> My personal preference is to specify the ordered set via a flag to the constructor, which can be intercepted by __new__ to return a different object type as necessary.
> 
> (If anyone wants to add a complete ordered set to PyPI that would also be very useful for me!)

Long ago, I considered adding an OrderedSet() and decided to hold off for several reasons:

* The primary use cases for sets don't need order.  The main use cases being: uniquification, fast membership testing, and classic set-to-set operations (union, intersection, and difference).  

* As far as I can tell, the primary benefit of an ordered set is that it is easier on the eyes when interactively experimenting with trivial schoolbook examples.  That said, an OrderedSet __repr__ looks uglier because it would lack the {10,20,30} style literals we have for regular sets.

* It isn't entirely clear what the set-to-set operations should do to maintain order, whether equality tests should require matching order, whether we care about loss of commutativity, and whether we're willing to forgo the usual optimizations on set-to-set operations (i.e. the intersection algorithm loops over the smaller of the two input sets and checks membership in the larger set -- that is fast, but it would lose order if the second set were the smaller of the two).

* The standard library should provide fast, clean fundamental building blocks and defer all of the myriad of variants and derivations to PyPI.   We already provide OrderedDict and MutableSet so that rolling your own OrderedSet isn't hard (see the recipe below).

* We're likely to soon have an OrderedDict written in C (I believe Eric Snow has been working on that for some time), so anything built on top of an OrderedDict will inherit all the speed benefits.   Also, there is some chance that PyPy folks will contribute their work making regular sets and dicts ordered by default; in which case, having a separate OrderedSet would have been a waste and the question would be moot.

* Also, I wouldn't read too much into the point score on the StackOverflow question.  First, the question is very old and predates simpler solutions now available using OrderedDict and MutableSet.  Second, StackOverflow tends to award high scores to the simplest questions and answers (such as http://stackoverflow.com/a/8538295/1001643 ).  A high score indicates interest but not a need to change the language.  A much better indicator would be the frequent appearance of ordered sets in real-world code or high download statistics from PyPI or ActiveState's ASPN cookbook.

* One thing I've used to test the waters is to include a link in the documentation to an OrderedSet recipe (it's in the "See also" section at the very bottom of the page at https://docs.python.org/2/library/collections.html ).  As far as I can tell, there has been nearly zero uptake based on this link.  That gives us a good indication that there is very little real-world need for an OrderedSet.

What I suggest is adding the recipe (like the one shown below) directly to the docs in this section:
https://docs.python.org/2/library/collections.html#ordereddict-examples-and-recipes
There we already have code for a LastUpdatedOrderedDict() and an OrderedCounter().

In the past, I've used these recipes as a way to gauge interest.  When the interest was sufficient, it got promoted from a recipe in the docs to a full implementation living in a standard library module.  For example, itertools.combinations_with_replacement() and itertools.accumulate() started out as recipes in the docs.

In general, there shouldn't be a rush to push more things into the standard library.  We get many suggestions in the form, "it might be nice to have XXX in the standard library"; however, in many cases there has been so little uptake in real-world code that it isn't worth the maintenance burden, or the burden on the working programmer of having too many choices in the standard library (I encounter very few people who actually know most of what is available in the standard library).

PyPI is a successful secondary source of code and has become part of the working programmer's life.  We're gradually making it easier and easier to pull that code on-demand.  IMO, that is where most of the collections variants should live. Otherwise, it would be easy to fill the collections module with dozens of rarely used dict/set variants, red-black trees, pairing heaps, tries, graph implementations, bitarrays, bitsets, persistent collections, etc.

my-two-cents-ly,


Raymond



------- Rolling your own from existing parts --------------

from collections import OrderedDict
from collections.abc import MutableSet  # importing from collections itself is deprecated

class OrderedSet(MutableSet):

    def __init__(self, iterable=()):
        # An OrderedDict with all-None values does the bookkeeping.
        self._data = OrderedDict.fromkeys(iterable)

    def add(self, key):
        self._data[key] = None

    def discard(self, key):
        self._data.pop(key, None)

    def __len__(self):
        return len(self._data)

    def __contains__(self, key):
        return key in self._data

    def __iter__(self):
        return iter(self._data)

    def __repr__(self):
        # list() rather than set() so the repr shows the order.
        return '%s(%r)' % (self.__class__.__name__, list(self))
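Exercising the recipe (the class is repeated, trimmed to its essentials, so this snippet runs standalone): MutableSet derives the set operators from the five abstract methods, and its default `_from_iterable` calls `cls(iterable)`, so even `|` comes out order-preserving for free.

```python
from collections import OrderedDict
from collections.abc import MutableSet

class OrderedSet(MutableSet):
    def __init__(self, iterable=()):
        self._data = OrderedDict.fromkeys(iterable)
    def add(self, key):
        self._data[key] = None
    def discard(self, key):
        self._data.pop(key, None)
    def __len__(self):
        return len(self._data)
    def __contains__(self, key):
        return key in self._data
    def __iter__(self):
        return iter(self._data)

s = OrderedSet('abracadabra')
assert list(s) == ['a', 'b', 'r', 'c', 'd']
# Union chains the operands left to right, then deduplicates:
assert list(s | OrderedSet('simsalabim')) == ['a', 'b', 'r', 'c', 'd', 's', 'i', 'm', 'l']
```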


From jim.baker at python.org  Sun Feb  8 05:02:37 2015
From: jim.baker at python.org (Jim Baker)
Date: Sat, 7 Feb 2015 21:02:37 -0700
Subject: [Python-ideas] Add orderedset as set(iterable, *,
 ordered=False) and similarly for frozenset.
In-Reply-To: <CAGmFidZsd9JfQb3W9cRw-cob4pWT8y-HBMu_z0zwknxEQsCwDw@mail.gmail.com>
References: <cbe9ff04-1f93-45ae-8236-9cea7ca02473@googlegroups.com>
 <CALGmxEKJje9qLiN+jP-iK0c-G-Q3n-rRGCvdqriDqVX-rEWcgw@mail.gmail.com>
 <CAGmFidZsd9JfQb3W9cRw-cob4pWT8y-HBMu_z0zwknxEQsCwDw@mail.gmail.com>
Message-ID: <CAOhO=aOazZf_9EK3p3FQ+wOxVUcEvy3nJVEpXb9=752A_010Dw@mail.gmail.com>

Interesting. It's more than a couple of lines of code to change for Jython
(I tried), but now that we require Java 7, we should be able to implement
identical ordered behavior to PyPy for dict, __dict__, set, and frozenset
using ConcurrentSkipListMap instead of our current usage of
ConcurrentHashMap. (Other collections like defaultdict and weakref
collections use Google Guava's MapMaker, which doesn't have an ordered
option.)

I think this is a great idea from a user experience perspective.

On Thu, Feb 5, 2015 at 3:04 PM, Amaury Forgeot d'Arc <amauryfa at gmail.com>
wrote:

> 2015-02-05 23:02 GMT+01:00 Chris Barker <chris.barker at noaa.gov>:
>
>> Interesting -- somehow this message came through on the google group
>> mirror of  python-ideas only -- so replying to it barfed.
>>
>> Here's my reply again, with the proper address this time:
>>
>> On Wed, Feb 4, 2015 at 1:14 PM, Neil Girdhar <mistersheik at gmail.com>
>>  wrote:
>>
>>> Hundreds of people want an orderedset (
>>> http://stackoverflow.com/questions/1653970/does-python-have-an-ordered-set)
>>> and yet the ordered sets that are in PyPI (oset, ordered-set) are
>>> incomplete.  Please consider adding an ordered set that has *all* of the
>>> same functionality as set.
>>>
>>
>>
>>> (If anyone wants to add a complete ordered set to PyPI that would also
>>> be very useful for me!)
>>>
>>
>> It seems like a good way to go here is to contribute to one of the
>> projects already on PyPI -- and once you like it, propose that it be added
>> to the standard library.
>>
>> And can you leverage the OrderedDict implementation already in the std
>> lib? Or maybe better, the new one in PyPy?
>>
>
> PyPy dicts and sets (the built-in ones) are already ordered by default.
>
> --
> Amaury Forgeot d'Arc
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>



-- 
- Jim

jim.baker@{colorado.edu|python.org|rackspace.com|zyasoft.com}
twitter.com/jimbaker
github.com/jimbaker
bitbucket.com/jimbaker
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150207/315e96ce/attachment.html>

From abarnert at yahoo.com  Sun Feb  8 08:31:00 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Sat, 7 Feb 2015 23:31:00 -0800
Subject: [Python-ideas] Add orderedset as set(iterable, *,
	ordered=False) and similarly for frozenset.
In-Reply-To: <CAOhO=aOazZf_9EK3p3FQ+wOxVUcEvy3nJVEpXb9=752A_010Dw@mail.gmail.com>
References: <cbe9ff04-1f93-45ae-8236-9cea7ca02473@googlegroups.com>
 <CALGmxEKJje9qLiN+jP-iK0c-G-Q3n-rRGCvdqriDqVX-rEWcgw@mail.gmail.com>
 <CAGmFidZsd9JfQb3W9cRw-cob4pWT8y-HBMu_z0zwknxEQsCwDw@mail.gmail.com>
 <CAOhO=aOazZf_9EK3p3FQ+wOxVUcEvy3nJVEpXb9=752A_010Dw@mail.gmail.com>
Message-ID: <E51FEFF1-CB1A-4AEE-A3C6-55933F82B3B9@yahoo.com>

On Feb 7, 2015, at 20:02, Jim Baker <jim.baker at python.org> wrote:

> Interesting. It's more than a couple of lines of code to change for Jython (I tried), but now that we require Java 7, we should be able to implement identical ordered behavior to PyPy for dict, __dict__, set, and frozenset using ConcurrentSkipListMap instead of our current usage of ConcurrentHashMap.

How? A skip list is a sorted collection--kept in the natural order of the keys, not the insertion order. Of course you can transform each key into a (next_index(), key) pair at insert time to keep insertion order instead of natural sorting order, but then you lose the logarithmic lookup time when searching just by key.

And even if I'm missing something stupid and there is a way to make this work, you're still only going to get logarithmic performance, not constant. And you're probably going to change the requirement on dict keys from hashable to fully ordered (which, on top of being different, is substantially harder to detect at runtime, and also doesn't map cleanly to immutable among Python's standard data types). So, I don't think it would be a good idea even if it were possible.

(There's a reason that Java, as well as C++ and other languages, provides sets and maps based on both hash tables and sorted logarithmic structures--because they're not interchangeable.)
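The distinction can be sketched with nothing but the stdlib (illustrative only; PyPy's insertion-ordered dict is stood in for here by OrderedDict, and the (counter, key) workaround shows what keyed lookup loses):

```python
from bisect import insort
from collections import OrderedDict

# A sorted structure (what a skip list gives you) iterates in key order,
# regardless of insertion order -- the insertion history is simply gone.
sorted_keys = []
for k in [3, 1, 2]:
    insort(sorted_keys, k)
print(sorted_keys)            # [1, 2, 3]

# An insertion-ordered mapping keeps arrival order instead.
od = OrderedDict.fromkeys([3, 1, 2])
print(list(od))               # [3, 1, 2]

# The (counter, key) trick recovers insertion order inside a sorted
# structure, but a lookup by key alone can no longer exploit the sort
# order -- it degrades to a scan.
counter = iter(range(10**6))
pairs = []
for k in [3, 1, 2]:
    insort(pairs, (next(counter), k))
print([k for _, k in pairs])  # [3, 1, 2]
```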

> (Other collections like defaultdict and weakref collections use Google Guava's MapMaker, which doesn't have an ordered option.)
> 
> I think this is a great idea from a user experience perspective.
> 
> On Thu, Feb 5, 2015 at 3:04 PM, Amaury Forgeot d'Arc <amauryfa at gmail.com> wrote:
>> 2015-02-05 23:02 GMT+01:00 Chris Barker <chris.barker at noaa.gov>:
>>> Interesting -- somehow this message came through on the google group mirror of  python-ideas only -- so replying to it barfed.
>>> 
>>> Here's my reply again, with the proper address this time:
>>> 
>>> On Wed, Feb 4, 2015 at 1:14 PM, Neil Girdhar <mistersheik at gmail.com> wrote:
>>>> Hundreds of people want an orderedset (http://stackoverflow.com/questions/1653970/does-python-have-an-ordered-set) and yet the ordered sets that are in PyPI (oset, ordered-set) are incomplete.  Please consider adding an ordered set that has *all* of the same functionality as set.
>>>  
>>>> (If anyone wants to add a complete ordered set to PyPI that would also be very useful for me!)
>>> 
>>> It seems like a good way to go here is to contribute to one of the projects already on PyPI -- and once you like it, propose that it be added to the standard library.
>>> 
>>> And can you leverage the OrderedDict implementation already in the std lib? Or maybe better, the new one in PyPy?
>> 
>> PyPy dicts and sets (the built-in ones) are already ordered by default. 
>> 
>> -- 
>> Amaury Forgeot d'Arc
>> 
> 
> 
> 
> -- 
> - Jim
> 
> jim.baker@{colorado.edu|python.org|rackspace.com|zyasoft.com}
> twitter.com/jimbaker
> github.com/jimbaker
> bitbucket.com/jimbaker

From sturla.molden at gmail.com  Sun Feb  8 16:56:23 2015
From: sturla.molden at gmail.com (Sturla Molden)
Date: Sun, 8 Feb 2015 15:56:23 +0000 (UTC)
Subject: [Python-ideas] Add orderedset as set(iterable, *,
	ordered=False) and similarly for frozenset.
References: <D0474C4F-F088-4DFE-B5F3-5EABC0B430B0@gmail.com>
Message-ID: <1620663918445102898.488587sturla.molden-gmail.com@news.gmane.org>

Raymond Hettinger
<raymond.hettinger at gmail.com> wrote:

> I thought ordered sets were interesting enough to write an implementation
> six years ago, but frankly there are many other other things that have a
> much better case for being added to collections (for example, some sort
> of graph structure, some kind of binary tree, some kind of trie, or
> perhaps a bloom filter).

We could use a SortedDict and a SortedSet implemented as binary search
trees. They should have the same interface as set and dict, but also allow
slicing and query methods for extracting ranges of keys.
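A toy version of such a SortedSet -- backed by a sorted list via bisect rather than a real tree, so insertion is O(n), but enough to show the range-query interface a hash-based set cannot offer -- might look like:

```python
from bisect import bisect_left, bisect_right

class SortedSet:
    """Toy sorted set kept as a sorted list (sketch, not a tree)."""

    def __init__(self, iterable=()):
        self._items = []
        for x in iterable:
            self.add(x)

    def add(self, x):
        i = bisect_left(self._items, x)
        if i == len(self._items) or self._items[i] != x:
            self._items.insert(i, x)

    def __contains__(self, x):
        i = bisect_left(self._items, x)
        return i < len(self._items) and self._items[i] == x

    def __iter__(self):
        return iter(self._items)

    def range(self, lo, hi):
        # All keys in [lo, hi] -- the kind of query that motivates
        # tree-backed sets in the first place.
        lo_i = bisect_left(self._items, lo)
        hi_i = bisect_right(self._items, hi)
        return self._items[lo_i:hi_i]

s = SortedSet([5, 1, 9, 3, 7, 5])
print(list(s))        # [1, 3, 5, 7, 9]
print(s.range(2, 7))  # [3, 5, 7]
```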

Personally I doubt the usefulness of OrderedDict and OrderedSet.

Sturla


From stephen at xemacs.org  Sun Feb  8 17:05:42 2015
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Mon, 09 Feb 2015 01:05:42 +0900
Subject: [Python-ideas] Add orderedset as set(iterable, *,
 ordered=False) and similarly for frozenset.
In-Reply-To: <CAPTjJmp6n7mbMx6opciyhgsMMDn3TLMedRofR8yR57aZDMUSqQ@mail.gmail.com>
References: <CABmzr0idfHk1Ae6X1FT3pUpXN3Y+-128ea2JE8mJ34Wdc_E3ag@mail.gmail.com>
 <CAPTjJmppB=JY02YA41J1+4DpxuG6wf70Mb4QJg1z5-rGxsAkAA@mail.gmail.com>
 <CABmzr0ge7dTjDrbAdp0SF=g6Kg=7pStUkR9+aXFRY_m9kOOEzA@mail.gmail.com>
 <CAA68w_=0tsDXhqN6sjf4RMjzABqjC4zytO4agrY3Kc25y0qXpg@mail.gmail.com>
 <CABmzr0jWSSk0n8VYsZzTG57x0osKs70RSJJ+jBGUh2Y6ZP-3Hw@mail.gmail.com>
 <CAA68w_mGex4wBic44y9HobSbTjVouRxPd4diVY-YH2PWuO+1UA@mail.gmail.com>
 <CAPTjJmp6n7mbMx6opciyhgsMMDn3TLMedRofR8yR57aZDMUSqQ@mail.gmail.com>
Message-ID: <87386gpew9.fsf@uwakimon.sk.tsukuba.ac.jp>

Chris Angelico writes:

 > ISTM ordered set addition/union should be like list concatenation with
 > duplicates dropped.

You can't have that, because you lose the order of common elements
that are ordered differently in some of the operands.  I have no idea
why you would want a more or less "symmetric" combination of ordered
sets to ignore the order of elements in any of the operands.  (I write
"symmetric" because I expect the collections in addition or union,
especially union, to be treated "similarly" even if the operations
aren't strictly commutative.)

I could see union defined as "forget the orders and return the union
as sets" but that would surely surprise many users of ordered sets.

Set addition has a mathematical definition, as the "disjoint union"
(coproduct, sort of a multiset that remembers that duplicates came
from different places).  "Practicality before purity" suggests that
most Python programmers won't know what a disjoint union is (and
probably have never heard of "coproduct"), so I'm only -0.5 on using
the "+" operator for this operation, but I still don't see a
"canonical" use case.

I don't have a problem with defining update any way that a reasonably
large plurality of use cases find useful (Andrew's "unsorted but
ordered union" seems reasonable), but it should be called "update" or
"merge" (by analogy with dictionaries or sequences, respectively).

... but don't ask me to come up with the canonical use case, because I
can't think of any at all.
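A dict-based sketch of that "unsorted but ordered union" (assuming the first operand's order wins, which is one choice among several) makes both the order loss and the non-commutativity concrete:

```python
from collections import OrderedDict

def ordered_union(a, b):
    # First operand's order, then unseen elements of the second in
    # their own order. Updating an existing key does not move it.
    d = OrderedDict.fromkeys(a)
    d.update(OrderedDict.fromkeys(b))
    return list(d)

# The common elements 'y' and 'z' appear in different orders in the
# two operands, so any single result must discard one operand's order:
a = ['x', 'y', 'z']
b = ['z', 'y', 'w']
print(ordered_union(a, b))  # ['x', 'y', 'z', 'w'] -- b's z-before-y is lost
print(ordered_union(b, a))  # ['z', 'y', 'w', 'x'] -- not commutative
```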


From sturla.molden at gmail.com  Sun Feb  8 17:05:49 2015
From: sturla.molden at gmail.com (Sturla Molden)
Date: Sun, 8 Feb 2015 16:05:49 +0000 (UTC)
Subject: [Python-ideas] Add orderedset as set(iterable, *,
	ordered=False) and similarly for frozenset.
References: <cbe9ff04-1f93-45ae-8236-9cea7ca02473@googlegroups.com>
 <CALGmxEKJje9qLiN+jP-iK0c-G-Q3n-rRGCvdqriDqVX-rEWcgw@mail.gmail.com>
 <CAGmFidZsd9JfQb3W9cRw-cob4pWT8y-HBMu_z0zwknxEQsCwDw@mail.gmail.com>
 <CAOhO=aOazZf_9EK3p3FQ+wOxVUcEvy3nJVEpXb9=752A_010Dw@mail.gmail.com>
 <E51FEFF1-CB1A-4AEE-A3C6-55933F82B3B9@yahoo.com>
Message-ID: <510619570445103971.682709sturla.molden-gmail.com@news.gmane.org>

Andrew Barnert <abarnert at yahoo.com.dmarc.invalid>
wrote:
 
> (There's a reason that Java, as well as C++ and other languages, provides
> sets and maps based on both hash tables and sorted logarithmic
> structures--because they're not interchangeable.)

Yes. And Python unfortunately only provides them as hash tables.

Sturla


From jim.baker at python.org  Sun Feb  8 17:30:37 2015
From: jim.baker at python.org (Jim Baker)
Date: Sun, 8 Feb 2015 09:30:37 -0700
Subject: [Python-ideas] Add orderedset as set(iterable, *,
 ordered=False) and similarly for frozenset.
In-Reply-To: <E51FEFF1-CB1A-4AEE-A3C6-55933F82B3B9@yahoo.com>
References: <cbe9ff04-1f93-45ae-8236-9cea7ca02473@googlegroups.com>
 <CALGmxEKJje9qLiN+jP-iK0c-G-Q3n-rRGCvdqriDqVX-rEWcgw@mail.gmail.com>
 <CAGmFidZsd9JfQb3W9cRw-cob4pWT8y-HBMu_z0zwknxEQsCwDw@mail.gmail.com>
 <CAOhO=aOazZf_9EK3p3FQ+wOxVUcEvy3nJVEpXb9=752A_010Dw@mail.gmail.com>
 <E51FEFF1-CB1A-4AEE-A3C6-55933F82B3B9@yahoo.com>
Message-ID: <CAOhO=aOn3eTQVy9OEmspZjrM_SQrJGGPjJ=q-qbJDV60R6f15A@mail.gmail.com>

Andrew,

You're absolutely right - the semantics matter, as they always do. There
are significant choices in the Java ecosystem because of this. If we are
preserving insertion order, vs. a comparator-based ordering, skip lists and
the specific ConcurrentSkipListMap implementation will not work.

As often the case, it's good to try things out. I looked at PyPy 2.5.0 this
morning, and both set and dict are insertion order preserving.
Interestingly, when creating a set:

>>> x = {1,2,3,4,5,6,7}

the resulting set is the reverse of its input, which I find surprising:

>>>> x
set([7, 6, 5, 4, 3, 2, 1])

Note that this is not the case if initializing from a sequence:

>>>> set([1,2,3,4,5,6,7])
set([1, 2, 3, 4, 5, 6, 7])

Using LinkedHashSet from Jython also does what is expected:

>>> from java.util import LinkedHashSet as LHS
>>> LHS([1,2,3,4,5,6,7])
[1, 2, 3, 4, 5, 6, 7]

(Iterators, etc, are consistent with the representations that are printed
in these cases from PyPy and Jython.)

But there's one more important thing we have to consider. Jython needs
ConcurrentMap semantics, so that we can support our interpretation of the
Python memory model (sequential consistency, plus weakly consistent
iteration is also relied on; see also
http://www.jython.org/jythonbook/en/1.0/Concurrency.html#python-memory-model).
There's some interesting work out there to support a
ConcurrentLinkedHashMap implementation (
https://code.google.com/p/concurrentlinkedhashmap/), the design of which
was used as the basis of the design of the LRU cache in Google Guava, but
it would require some further investigation to see how this would impact
Jython.

Trying out an alternative implementation in Jython would still be on the
order of a few lines of code changes, however :), which gives us a nice low
cost for such experiments.

- Jim

On Sun, Feb 8, 2015 at 12:31 AM, Andrew Barnert <
abarnert at yahoo.com.dmarc.invalid> wrote:

> On Feb 7, 2015, at 20:02, Jim Baker <jim.baker at python.org> wrote:
>
> Interesting. It's more than a couple of lines of code to change for Jython
> (I tried), but now that we require Java 7, we should be able to implement
> identical ordered behavior to PyPy for dict, __dict__, set, and frozenset
> using ConcurrentSkipListMap instead of our current usage of
> ConcurrentHashMap.
>
>
> How? A skip list is a sorted collection--kept in the natural order of the
> keys, not the insertion order. Of course you can transform each key into a
> (next_index(), key) pair at insert time to keep insertion order instead of
> natural sorting order, but then you lose the logarithmic lookup time when
> searching just by key.
>
> And even if I'm missing something stupid and there is a way to make this
> work, you're still only going to get logarithmic performance, not
> constant. And you're probably going to change the requirement on dict keys
> from hashable to fully ordered (which, on top of being different, is
> substantially harder to detect at runtime, and also doesn't map cleanly to
> immutable among Python's standard data types). So, I don't think it would
> be a good idea even if it were possible.
>
> (There's a reason that Java, as well as C++ and other languages, provides
> sets and maps based on both hash tables and sorted logarithmic
> structures--because they're not interchangeable.)
>
> (Other collections like defaultdict and weakref collections use Google
> Guava's MapMaker, which doesn't have an ordered option.)
>
> I think this is a great idea from a user experience perspective.
>
> On Thu, Feb 5, 2015 at 3:04 PM, Amaury Forgeot d'Arc <amauryfa at gmail.com>
> wrote:
>
>> 2015-02-05 23:02 GMT+01:00 Chris Barker <chris.barker at noaa.gov>:
>>
>>> Interesting -- somehow this message came through on the google group
>>> mirror of  python-ideas only -- so replying to it barfed.
>>>
>>> Here's my reply again, with the proper address this time:
>>>
>>> On Wed, Feb 4, 2015 at 1:14 PM, Neil Girdhar <mistersheik at gmail.com>
>>>  wrote:
>>>
>>>> Hundreds of people want an orderedset (
>>>> http://stackoverflow.com/questions/1653970/does-python-have-an-ordered-set)
>>>> and yet the ordered sets that are in PyPI (oset, ordered-set) are
>>>> incomplete.  Please consider adding an ordered set that has *all* of the
>>>> same functionality as set.
>>>>
>>>
>>>
>>>> (If anyone wants to add a complete ordered set to PyPI that would also
>>>> be very useful for me!)
>>>>
>>>
>>> It seems like a good way to go here is to contribute to one of the
>>> projects already on PyPI -- and once you like it, propose that it be added
>>> to the standard library.
>>>
>>> And can you leverage the OrderedDict implementation already in the std
>>> lib? Or maybe better, the new one in PyPy?
>>>
>>
>> PyPy dicts and sets (the built-in ones) are already ordered by default.
>>
>> --
>> Amaury Forgeot d'Arc
>>
>>
>
>
>
> --
> - Jim
>
> jim.baker@{colorado.edu|python.org|rackspace.com|zyasoft.com}
> twitter.com/jimbaker
> github.com/jimbaker
> bitbucket.com/jimbaker
>
>
>


-- 
- Jim

jim.baker@{colorado.edu|python.org|rackspace.com|zyasoft.com}
twitter.com/jimbaker
github.com/jimbaker
bitbucket.com/jimbaker

From mistersheik at gmail.com  Sun Feb  8 17:41:16 2015
From: mistersheik at gmail.com (Neil Girdhar)
Date: Sun, 8 Feb 2015 11:41:16 -0500
Subject: [Python-ideas] Add orderedset as set(iterable, *,
 ordered=False) and similarly for frozenset.
In-Reply-To: <D0474C4F-F088-4DFE-B5F3-5EABC0B430B0@gmail.com>
References: <cbe9ff04-1f93-45ae-8236-9cea7ca02473@googlegroups.com>
 <B01C16BD-FF3F-4668-9A45-ADBC8AF2512C@gmail.com>
 <CAA68w_m5kUGaaxd8n7Gt33gjee92-0XnX2WxKY+UR=BZZ5jEYg@mail.gmail.com>
 <D0474C4F-F088-4DFE-B5F3-5EABC0B430B0@gmail.com>
Message-ID: <CAA68w_n69DTOPsQVE2243PcVZSKJJ=6Pf6RZ8YqwSUUPZ9jCaQ@mail.gmail.com>

Thanks, Raymond.

My only question is when you say:

Perhaps, I wasn't clear.  The PyPy folks have offered to port their work
back into Python.  They're offering a nice combination of faster
performance (especially for iteration), better memory utilization, and as a
by-product retaining insertion order.  If their offer comes to fruition,
then this whole discussion is moot.

Do you mean it will be moot because :

a) OrderedSet will be declared as a synonym for "set"

b) The Python language specification will be changed so that set is always
ordered, or

c) something else?

Thanks

Neil

On Sat, Feb 7, 2015 at 10:21 PM, Raymond Hettinger <
raymond.hettinger at gmail.com> wrote:

>
> > * It isn't entirely clear what the set-to-set operations should do to
> maintain order, whether equality tests should require matching order,
> whether we care about loss of commutativity, and whether we're willing to
> forgo the usual optimizations on set-to-set operations (i.e. the
> intersection algorithm loops over the smaller of the two input sets and
> checks membership in the larger set -- that is fast, but it would lose
> order if the second set were the smaller of the two).
> >
> >> I agree it's not obvious, but I think that with a little debate we
> could come together on a reasonable choice.
>
> "In the face of ambiguity, refuse the temptation to guess."   Some of the
> participants on python-ideas need to develop a much stronger aversion to
> opening a can of worms ;-)
>
>
> >  You could even block the set-to-set operations for now.  The main use
> of OrderedSet just requires add, update, and iteration as your recipe
> illustrates.
>
> Since you don't seem to have any specific requirements for the order created
> by set-to-set operations, then just use the existing set operations on
> OrderedDict.keys():
>
>     >>> a = OrderedDict.fromkeys(['raymond', 'rachel', 'matthew'])
>     >>> b = OrderedDict.fromkeys(['matthew', 'calvin', 'rachel'])
>     >>> a.keys() & b.keys()
>     {'rachel', 'matthew'}
>     >>> a.keys() | b.keys()
>     {'rachel', 'calvin', 'matthew', 'raymond'}
>     >>> a.keys() - b.keys()
>     {'raymond'}
>     >>> a.keys() ^ b.keys()
>     {'calvin', 'raymond'}
>     >>> list(a.keys())
>     ['raymond', 'rachel', 'matthew']
>
>
> >
> > * The standard library should provide fast, clean fundamental building
> blocks and defer all of the myriad of variants and derivations to PyPI.
>  We already provide OrderedDict and MutableSet so that rolling your own
> OrderedSet isn't hard (see the recipe below).
> >
> > I think that OrderedSet is a fundamental building block :)
>
> The rule for "fundamental" that I've used for itertools is whether the
> thing you want can easily be built out of what you've already got.   For
> example, you can't make accumulate() out of the other itertools, but you
> can make tabulate(func) from map(func, count()).  Likewise, you can easily
> make an OrderedSet from an OrderedDict and MutableSet, or if your needs
> are less, just use the existing set operations on dict views as shown above.
>
>
> > * We're likely to soon have an OrderedDict written in C (I believe Eric
> Snow has been working on that for some time), so anything build on top of
> an OrderedDict will inherit all the speed benefits.
> >
> > Sure.  It's also easy to have an OrderedSet written in C (especially
> once OrderedDict is done.)
>
> You're profoundly under-estimating how much work it is to build and
> maintain a separate C implementation.  I've maintained the code for
> collections and sets for over a decade and can assure you that the
> maintenance burden has been enormous.
>
>
> > Also, there is some chance that PyPy folks will contribute their work
> making regular sets and dicts ordered by default; in which case, having a
> separate OrderedSet would have been a waste and the question would be moot.
> >
> > I think that using dicts and sets and assuming that they're ordered
> because you plan on using PyPy is a terrible error.
>
> Perhaps, I wasn't clear.  The PyPy folks have offered to port their work
> back into Python.  They're offering a nice combination of faster
> performance (especially for iteration), better memory utilization, and as a
> by-product retaining insertion order.  If their offer comes to fruition,
> then this whole discussion is moot.
>
> >
> > To be honest, my main issue is my personal need.
>
> The usual solution for personal needs is to have a personal utils module
> (it sounds like you've been using my recipe for over a year).  Decisions
> about what to put in the standard library have far reaching impact and we
> need to consider what is best for most of the users most of the time (and
> to consider our own maintenance burden).
>
> I thought ordered sets were interesting enough to write an implementation
> six years ago, but frankly there are many other other things that have a
> much better case for being added to collections (for example, some sort of
> graph structure, some kind of binary tree, some kind of trie, or perhaps a
> bloom filter).
>
>
> > If there was a complete PyPI recipe, I would be very happy.
>
> Then, that is what we should do.
>
>
> > I don't think I'm the only one, but maybe you're right and I am.
> >
> >
> > * One thing I've used to test the waters is to include a link in the
> documentation to an OrderedSet recipe (it's in the "See also" section at
> the very bottom of the page at
> https://docs.python.org/2/library/collections.html ).  As far as I can
> tell, there has been nearly zero uptake based on this link.  That gives us
> a good indication that there is very little real-world need for an
> OrderedSet.
> >
> > I've been using your recipe for at least a year (thank you).  How are
> you measuring uptake?
>
>
> That is a very difficult question to answer (there is no precise tool for
> saying X people care about thing Y).  The inputs I do have are imprecise
> and incomplete but never-the-less still informative.  When google's code
> search was still alive, there was a direct way of seeing what people used
> in published code, and I frequently searched to see what people were using
> in the python world and how they were using it (for example, that is how I
> was able to determine that str.swapcase() was nearly useless and propose
> that it be removed for Python 3).
>
> Since then, I rely on a number of other sources such as reading the python
> bug tracker every day for the last 15 years to know what people are
> requesting.  Likewise, as frequent contributor to stack overflow you can
> see what people are asking about.  As a person who does python code reviews
> and teaches python courses for a living, I get to see into the code base
> for many companies.  Also, I spend time reading the source for popular
> python tools to see what their requirements are. Another source is
> reviewing what is included in toolsets from Continuum and Enthought to see
> what their client's have asked for.  None of this is precise or
> sufficiently inclusive, but it does give me some basis for finding out what
> people care about.
>
>
> >>
> >> What I suggest is adding the recipe (like the one shown below) directly
> to the docs in this section:
> >>
> https://docs.python.org/2/library/collections.html#ordereddict-examples-and-recipes
> >> There we already have code for a LastUpdatedOrderedDict() and an
> OrderedCounter().
> >>
> >> In the past, I've used these recipes as a way to gauge interest.  When
> the interest was sufficient, it got promoted from a recipe in the docs to a
> full implementation living in a standard library module.  For example,
> itertools.combinations_with_replacement() and itertools.accumulate()
> started out as recipes in the docs.
> >>
> >> In general, there shouldn't be a rush to push more things in the
> standard library.  We get many suggestions in the form, "it might be nice
> to have XXX in the standard library", however, in many cases there has been
> so little uptake in real-world code that it isn't worth the maintenance
> burden, or the burden on the working programmer of having too many
> choices in the standard library (I encounter very few people who actually
> know most of what is available in the standard library).
> >
> > I agree.
>
> Sounds good.  I think we can put an OrderedSet() recipe in the docs and
> another out on PyPI, and be done with it.
>
>
> >
> >
> > PyPI is a successful secondary source of code and has become part of the
> working programmer's life.  We're gradually making it easier and easier to
> pull that code on-demand.  IMO, that is where most of the collections
> variants should live. Otherwise, it would be easy to fill the collections
> module with dozens of rarely used dict/set variants, red-black trees,
> pairing heaps, tries, graph implementations, bitarrays, bitsets, persistent
> collections, etc.)
> >
> > my-two-cents-ly,
> >
> >
> >
> > Agreed.
> >
> > It would be nice to complete one of the recipes (oset or ordered-set) ?
> at least for me.
>
> Completing a recipe is more reasonable than pushing for this to be in the
> standard library.
>
>
> Raymond
>
>
>
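For reference, the kind of OrderedSet Raymond describes -- built from OrderedDict plus the MutableSet mixin -- can be sketched in a few lines. This is a minimal illustration, deliberately not his published recipe:

```python
from collections import OrderedDict
from collections.abc import MutableSet

class OrderedSet(MutableSet):
    """Minimal insertion-ordered set backed by an OrderedDict (sketch)."""

    def __init__(self, iterable=()):
        self._data = OrderedDict.fromkeys(iterable)

    def __contains__(self, value):
        return value in self._data

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)

    def add(self, value):
        self._data[value] = None

    def discard(self, value):
        self._data.pop(value, None)

s = OrderedSet([3, 1, 2, 1])
print(list(s))                       # [3, 1, 2] -- duplicates dropped
# The MutableSet mixin supplies &, |, -, ^ for free:
print(list(s & OrderedSet([2, 3])))  # [2, 3]
```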

From abarnert at yahoo.com  Sun Feb  8 18:51:48 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Sun, 8 Feb 2015 09:51:48 -0800
Subject: [Python-ideas] Add orderedset as set(iterable, *,
	ordered=False) and similarly for frozenset.
In-Reply-To: <CAOhO=aOn3eTQVy9OEmspZjrM_SQrJGGPjJ=q-qbJDV60R6f15A@mail.gmail.com>
References: <cbe9ff04-1f93-45ae-8236-9cea7ca02473@googlegroups.com>
 <CALGmxEKJje9qLiN+jP-iK0c-G-Q3n-rRGCvdqriDqVX-rEWcgw@mail.gmail.com>
 <CAGmFidZsd9JfQb3W9cRw-cob4pWT8y-HBMu_z0zwknxEQsCwDw@mail.gmail.com>
 <CAOhO=aOazZf_9EK3p3FQ+wOxVUcEvy3nJVEpXb9=752A_010Dw@mail.gmail.com>
 <E51FEFF1-CB1A-4AEE-A3C6-55933F82B3B9@yahoo.com>
 <CAOhO=aOn3eTQVy9OEmspZjrM_SQrJGGPjJ=q-qbJDV60R6f15A@mail.gmail.com>
Message-ID: <1076670C-5266-4880-91AD-8FA75C1C6CA7@yahoo.com>

On Feb 8, 2015, at 8:30, Jim Baker <jim.baker at python.org> wrote:

> But there's one more important thing we have to consider. Jython needs ConcurrentMap semantics, so that we can support our interpretation of the Python memory model (sequential consistency, plus weakly consistent iteration is also depended on, see also http://www.jython.org/jythonbook/en/1.0/Concurrency.html#python-memory-model). There's some interesting work out there to support a ConcurrentLinkedHashMap implementation (https://code.google.com/p/concurrentlinkedhashmap/), the design of which was used as the basis of the design of the LRU cache in Google Guava, but it would require some further investigation to see how this would impact Jython.

Correct me if I'm wrong, but doesn't a synchronized mapping provide the required semantics? Obviously in some cases the nonblocking implementation is much more scalable (and better parallelization is one of the reasons people sometimes use Jython), and in a few cases that actually matters, but I'm pretty sure that most people who want OrderedSet and OrderedDict will be fine with a synchronized implementation (which will still parallelize better than CPython and PyPy, just not as well as Jython's regular set and dict), and most people who really need a nonblocking ordered set collection but aren't picky enough to want a specific implementation don't exist.

So, if the other implementations do add an OrderedSet in 3.5, and Jython has a 3.5 coming soon, and nonblocking linked hash containers are still in the research stage, can't you just use a synchronized linked hash container for now?

Also, what does Jython do for collections.OrderedDict today? Surely if that's good enough, you can do the exact same thing for OrderedSet, and if it's not good enough, you can put a good OrderedSet on the same backlog as a good OrderedDict.
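In CPython terms, the coarse-grained "synchronized" approach is just one lock around an ordered mapping -- the analogue of wrapping a LinkedHashMap in Collections.synchronizedMap. This sketch is illustrative only, not Jython internals:

```python
import threading
from collections import OrderedDict

class SynchronizedOrderedSet:
    """Insertion-ordered set where every operation takes one lock
    (coarse-grained sketch of 'synchronized' semantics)."""

    def __init__(self, iterable=()):
        self._lock = threading.Lock()
        self._data = OrderedDict.fromkeys(iterable)

    def add(self, value):
        with self._lock:
            self._data[value] = None

    def __contains__(self, value):
        with self._lock:
            return value in self._data

    def snapshot(self):
        # Copy under the lock; callers then iterate the copy, which
        # sidesteps the weakly-consistent-iteration question entirely.
        with self._lock:
            return list(self._data)

s = SynchronizedOrderedSet([3, 1])
s.add(2)
print(s.snapshot())   # [3, 1, 2]
```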

> Trying out an alternative implementation in Jython would still be on the order of a few lines of code changes, however :), which gives us a nice low cost for such experiments.
> 
> - Jim
> 
> On Sun, Feb 8, 2015 at 12:31 AM, Andrew Barnert <abarnert at yahoo.com.dmarc.invalid> wrote:
>> On Feb 7, 2015, at 20:02, Jim Baker <jim.baker at python.org> wrote:
>> 
>>> Interesting. It's more than a couple of lines of code to change for Jython (I tried), but now that we require Java 7, we should be able to implement identical ordered behavior to PyPy for dict, __dict__, set, and frozenset using ConcurrentSkipListMap instead of our current usage of ConcurrentHashMap.
>> 
>> How? A skip list is a sorted collection--kept in the natural order of the keys, not the insertion order. Of course you can transform each key into a (next_index(), key) pair at insert time to keep insertion order instead of natural sorting order, but then you lose the logarithmic lookup time when searching just by key.
>> 
>> And even if I'm missing something stupid and there is a way to make this work, you're still only going to get logarithmic performance, not constant. And you're probably going to change the requirement on dict keys from hashable to fully ordered (which, on top of being different, is substantially harder to detect at runtime, and also doesn't map cleanly to immutable among Python's standard data types). So, I don't think it would be a good idea even if it were possible.
>> 
>> (There's a reason that Java, as well as C++ and other languages, provides sets and maps based on both hash tables and sorted logarithmic structures--because they're not interchangeable.)
>> 
>>> (Other collections like defaultdict and weakref collections use Google Guava's MapMaker, which doesn't have an ordered option.)
>>> 
>>> I think this is a great idea from a user experience perspective.
>>> 
>>> On Thu, Feb 5, 2015 at 3:04 PM, Amaury Forgeot d'Arc <amauryfa at gmail.com> wrote:
>>>> 2015-02-05 23:02 GMT+01:00 Chris Barker <chris.barker at noaa.gov>:
>>>>> Interesting -- somehow this message came through on the google group mirror of  python-ideas only -- so replying to it barfed.
>>>>> 
>>>>> Here's my reply again, with the proper address this time:
>>>>> 
>>>>> On Wed, Feb 4, 2015 at 1:14 PM, Neil Girdhar <mistersheik at gmail.com> wrote:
>>>>>> Hundreds of people want an orderedset (http://stackoverflow.com/questions/1653970/does-python-have-an-ordered-set) and yet the ordered sets that are in PyPI (oset, ordered-set) are incomplete.  Please consider adding an ordered set that has *all* of the same functionality as set.
>>>>>  
>>>>>> (If anyone wants to add a complete ordered set to PyPI that would also be very useful for me!)
>>>>> 
>>>>> It seems like a good way to go here is to contribute to one of the projects already on PyPI -- and once you like it, propose that it be added to the standard library.
>>>>> 
>>>>> And can you leverage the OrderedDict implementation already in the std lib? Or maybe better, the new one in PyPy?
>>>> 
>>>> PyPy dicts and sets (the built-in ones) are already ordered by default. 
>>>> 
>>>> -- 
>>>> Amaury Forgeot d'Arc
>>>> 
>>> 
>>> 
>>> 
>>> -- 
>>> - Jim
>>> 
>>> jim.baker@{colorado.edu|python.org|rackspace.com|zyasoft.com}
>>> twitter.com/jimbaker
>>> github.com/jimbaker
>>> bitbucket.com/jimbaker
> 
> 
> 
> -- 
> - Jim
> 
> jim.baker@{colorado.edu|python.org|rackspace.com|zyasoft.com}
> twitter.com/jimbaker
> github.com/jimbaker
> bitbucket.com/jimbaker

From victor.stinner at gmail.com  Tue Feb 10 14:42:21 2015
From: victor.stinner at gmail.com (Victor Stinner)
Date: Tue, 10 Feb 2015 14:42:21 +0100
Subject: [Python-ideas] Hooks into the IO system to intercept raw file
	reads/writes
In-Reply-To: <54D27F33.3050204@canterbury.ac.nz>
References: <CACac1F8Nm9Sp+Hs1O+TgOJfsDDTRttomUB8DoZWHr7UvEb3E2g@mail.gmail.com>
 <CAP7+vJJYS6GW1SEf3Y=ESi9e3zW5usXyWzoX6j1e2ACjbZNghw@mail.gmail.com>
 <CACac1F9SXRT88-WxpiF5j6QsPOzCbu2KGAAWCQSZn3J70BF0xg@mail.gmail.com>
 <CAP7+vJLaCDQUgN9u+wkFrkC3zfRrE=WG-78PsL5R4Bw__u3Mtg@mail.gmail.com>
 <CACac1F-ZZzBK4aO=qmD=o4zyNAZhy2bHNcOTuifTdaL7jdvKeQ@mail.gmail.com>
 <54D1D92E.7050102@canterbury.ac.nz>
 <CACac1F_-nm95YaGUrccK+mnpndGvzw6Wfsb4p=teZoFvObUG=w@mail.gmail.com>
 <CACac1F8AO2NHrDmLqa=c1165KZ3HaxTWd5_NqT=dehxJ5cRD2g@mail.gmail.com>
 <54D2034B.2000205@canterbury.ac.nz>
 <CACac1F_3TpHnbcFaf6TLHtpN1AOYaHvfdUPKEVJuhY+7wpEHhw@mail.gmail.com>
 <54D27F33.3050204@canterbury.ac.nz>
Message-ID: <CAMpsgwa3yoe2+EhnDidF3zFWgShMErkmt8ECEiYh_76G-KYvnw@mail.gmail.com>

Hi,

2015-02-04 21:21 GMT+01:00 Greg Ewing <greg.ewing at canterbury.ac.nz>:
> Paul Moore wrote:
>>
>> ... found it. You need loop.close() at the end. Maybe the loop object
>> should close itself in the __del__ method, like file objects?

In the latest version of asyncio, event loops and transports now have
destructors on Python 3.4+. The destructor closes the event
loop/transport, but also emits a ResourceWarning, because it's not
safe to rely on destructors: they may be called too late. For example,
closing a transport may require a running event loop. Subprocesses are
a good example of complex transports.
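A minimal sketch of the behaviour Victor describes, assuming a
recent asyncio where the event loop destructor emits the warning (the
exact warning text is an implementation detail):

```python
import asyncio
import gc
import warnings

loop = asyncio.new_event_loop()
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")  # ResourceWarning is ignored by default
    del loop      # drop the last reference without calling loop.close()
    gc.collect()  # make sure the destructor has actually run
unclosed = [w for w in caught if issubclass(w.category, ResourceWarning)]
print(len(unclosed) >= 1)  # True: the destructor warned about the unclosed loop
```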

> Yeah, this looks like a bug -- I didn't notice anything
> in the docs about it being mandatory to close() a loop
> when you're finished with it, and such a requirement
> seems rather unpythonic.

I recently added this to the asyncio documentation: "ResourceWarning
warnings are emitted when transports and event loops are not closed
explicitly."
https://docs.python.org/dev/library/asyncio-dev.html#develop-with-asyncio

I also added a section dedicated to closing event loops and transports:
https://docs.python.org/dev/library/asyncio-dev.html#close-transports-and-event-loops

Maybe I should repeat the information somewhere else in the documentation?

Victor

From victor.stinner at gmail.com  Tue Feb 10 14:53:25 2015
From: victor.stinner at gmail.com (Victor Stinner)
Date: Tue, 10 Feb 2015 14:53:25 +0100
Subject: [Python-ideas] Hooks into the IO system to intercept raw file
	reads/writes
In-Reply-To: <CACac1F8AO2NHrDmLqa=c1165KZ3HaxTWd5_NqT=dehxJ5cRD2g@mail.gmail.com>
References: <CACac1F8Nm9Sp+Hs1O+TgOJfsDDTRttomUB8DoZWHr7UvEb3E2g@mail.gmail.com>
 <CAP7+vJJYS6GW1SEf3Y=ESi9e3zW5usXyWzoX6j1e2ACjbZNghw@mail.gmail.com>
 <CACac1F9SXRT88-WxpiF5j6QsPOzCbu2KGAAWCQSZn3J70BF0xg@mail.gmail.com>
 <CAP7+vJLaCDQUgN9u+wkFrkC3zfRrE=WG-78PsL5R4Bw__u3Mtg@mail.gmail.com>
 <CACac1F-ZZzBK4aO=qmD=o4zyNAZhy2bHNcOTuifTdaL7jdvKeQ@mail.gmail.com>
 <54D1D92E.7050102@canterbury.ac.nz>
 <CACac1F_-nm95YaGUrccK+mnpndGvzw6Wfsb4p=teZoFvObUG=w@mail.gmail.com>
 <CACac1F8AO2NHrDmLqa=c1165KZ3HaxTWd5_NqT=dehxJ5cRD2g@mail.gmail.com>
Message-ID: <CAMpsgwYAzLbq3KmWTmWHJV1+-csFhjm9Vr4B3KHgT7vqdX-r9g@mail.gmail.com>

Hi,

2015-02-04 10:32 GMT+01:00 Paul Moore <p.f.moore at gmail.com>:
> It works for me, but I get an error on termination:
>
> RuntimeError: <_overlapped.Overlapped object at 0x00000000033C56F0>
> still has pending operation at deallocation, the process may crash

I couldn't reproduce your issue. I tried with the latest development
version of asyncio, as well as older versions. Maybe we used different
versions of the code.

Anyway, you should enable all debug checks: they will reveal various
issues in your code:
https://docs.python.org/dev/library/asyncio-dev.html#debug-mode-of-asyncio

> I'm guessing that there's a missing wait on some object, but I'm not
> sure which (or how to identify what's wrong from the error). I'll keep
> digging, but on the assumption that it works on Unix, there may be a
> subtle portability issue here.

In fact, it was a bug in asyncio: Python issue #23242. Good news: it
was already fixed, 3 weeks ago.
http://bugs.python.org/issue23242

The high-level subprocess API (create_subprocess_exec/shell) doesn't
give access to the transport, so it is responsible for handling it. I
forgot to close the transport explicitly when the process exited. It's
now fixed.
https://hg.python.org/cpython/rev/df493e9c6821

Recently I fixed a lot of similar issues, and as Guido wrote, I also
fixed major IOCP issues (many subtle race conditions). Good news: all
fixes will be part of Python 3.4.3!

Victor

From p.f.moore at gmail.com  Tue Feb 10 14:56:04 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Tue, 10 Feb 2015 13:56:04 +0000
Subject: [Python-ideas] Hooks into the IO system to intercept raw file
	reads/writes
In-Reply-To: <CAMpsgwa3yoe2+EhnDidF3zFWgShMErkmt8ECEiYh_76G-KYvnw@mail.gmail.com>
References: <CACac1F8Nm9Sp+Hs1O+TgOJfsDDTRttomUB8DoZWHr7UvEb3E2g@mail.gmail.com>
 <CAP7+vJJYS6GW1SEf3Y=ESi9e3zW5usXyWzoX6j1e2ACjbZNghw@mail.gmail.com>
 <CACac1F9SXRT88-WxpiF5j6QsPOzCbu2KGAAWCQSZn3J70BF0xg@mail.gmail.com>
 <CAP7+vJLaCDQUgN9u+wkFrkC3zfRrE=WG-78PsL5R4Bw__u3Mtg@mail.gmail.com>
 <CACac1F-ZZzBK4aO=qmD=o4zyNAZhy2bHNcOTuifTdaL7jdvKeQ@mail.gmail.com>
 <54D1D92E.7050102@canterbury.ac.nz>
 <CACac1F_-nm95YaGUrccK+mnpndGvzw6Wfsb4p=teZoFvObUG=w@mail.gmail.com>
 <CACac1F8AO2NHrDmLqa=c1165KZ3HaxTWd5_NqT=dehxJ5cRD2g@mail.gmail.com>
 <54D2034B.2000205@canterbury.ac.nz>
 <CACac1F_3TpHnbcFaf6TLHtpN1AOYaHvfdUPKEVJuhY+7wpEHhw@mail.gmail.com>
 <54D27F33.3050204@canterbury.ac.nz>
 <CAMpsgwa3yoe2+EhnDidF3zFWgShMErkmt8ECEiYh_76G-KYvnw@mail.gmail.com>
Message-ID: <CACac1F8neUCGL7fx6LTVM5zhBQOD=LeymO_ZtDukWVs8jTUiOA@mail.gmail.com>

On 10 February 2015 at 13:42, Victor Stinner <victor.stinner at gmail.com> wrote:
> Maybe I should repeat the information somewhere else in the documentation?

Not many of the code samples seem to include loop.close(). Maybe they
should? For example, if I have a loop.run_forever() call, should it be
enclosed in try: ... finally: loop.close() to ensure the loop is
closed? Typically, I rely on doing what the examples do for this sort
of detail, even if the detailed documentation is present somewhere.

Paul

From p.f.moore at gmail.com  Tue Feb 10 14:56:36 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Tue, 10 Feb 2015 13:56:36 +0000
Subject: [Python-ideas] Hooks into the IO system to intercept raw file
	reads/writes
In-Reply-To: <CAMpsgwYAzLbq3KmWTmWHJV1+-csFhjm9Vr4B3KHgT7vqdX-r9g@mail.gmail.com>
References: <CACac1F8Nm9Sp+Hs1O+TgOJfsDDTRttomUB8DoZWHr7UvEb3E2g@mail.gmail.com>
 <CAP7+vJJYS6GW1SEf3Y=ESi9e3zW5usXyWzoX6j1e2ACjbZNghw@mail.gmail.com>
 <CACac1F9SXRT88-WxpiF5j6QsPOzCbu2KGAAWCQSZn3J70BF0xg@mail.gmail.com>
 <CAP7+vJLaCDQUgN9u+wkFrkC3zfRrE=WG-78PsL5R4Bw__u3Mtg@mail.gmail.com>
 <CACac1F-ZZzBK4aO=qmD=o4zyNAZhy2bHNcOTuifTdaL7jdvKeQ@mail.gmail.com>
 <54D1D92E.7050102@canterbury.ac.nz>
 <CACac1F_-nm95YaGUrccK+mnpndGvzw6Wfsb4p=teZoFvObUG=w@mail.gmail.com>
 <CACac1F8AO2NHrDmLqa=c1165KZ3HaxTWd5_NqT=dehxJ5cRD2g@mail.gmail.com>
 <CAMpsgwYAzLbq3KmWTmWHJV1+-csFhjm9Vr4B3KHgT7vqdX-r9g@mail.gmail.com>
Message-ID: <CACac1F_YY5yeC9ir+EcNr-J4hAMad+hk7GN1azrP5CHmtxoX+w@mail.gmail.com>

On 10 February 2015 at 13:53, Victor Stinner <victor.stinner at gmail.com> wrote:
> Recently I fixed a lot of similar issues, and as Guido wrote, I also
> fixed major IOCP issues (many subtle race conditions). Good news: all
> fixes will be part of Python 3.4.3!

Cool, thanks :-)
Paul

From victor.stinner at gmail.com  Tue Feb 10 15:04:22 2015
From: victor.stinner at gmail.com (Victor Stinner)
Date: Tue, 10 Feb 2015 15:04:22 +0100
Subject: [Python-ideas] Hooks into the IO system to intercept raw file
	reads/writes
In-Reply-To: <CACac1F8neUCGL7fx6LTVM5zhBQOD=LeymO_ZtDukWVs8jTUiOA@mail.gmail.com>
References: <CACac1F8Nm9Sp+Hs1O+TgOJfsDDTRttomUB8DoZWHr7UvEb3E2g@mail.gmail.com>
 <CAP7+vJJYS6GW1SEf3Y=ESi9e3zW5usXyWzoX6j1e2ACjbZNghw@mail.gmail.com>
 <CACac1F9SXRT88-WxpiF5j6QsPOzCbu2KGAAWCQSZn3J70BF0xg@mail.gmail.com>
 <CAP7+vJLaCDQUgN9u+wkFrkC3zfRrE=WG-78PsL5R4Bw__u3Mtg@mail.gmail.com>
 <CACac1F-ZZzBK4aO=qmD=o4zyNAZhy2bHNcOTuifTdaL7jdvKeQ@mail.gmail.com>
 <54D1D92E.7050102@canterbury.ac.nz>
 <CACac1F_-nm95YaGUrccK+mnpndGvzw6Wfsb4p=teZoFvObUG=w@mail.gmail.com>
 <CACac1F8AO2NHrDmLqa=c1165KZ3HaxTWd5_NqT=dehxJ5cRD2g@mail.gmail.com>
 <54D2034B.2000205@canterbury.ac.nz>
 <CACac1F_3TpHnbcFaf6TLHtpN1AOYaHvfdUPKEVJuhY+7wpEHhw@mail.gmail.com>
 <54D27F33.3050204@canterbury.ac.nz>
 <CAMpsgwa3yoe2+EhnDidF3zFWgShMErkmt8ECEiYh_76G-KYvnw@mail.gmail.com>
 <CACac1F8neUCGL7fx6LTVM5zhBQOD=LeymO_ZtDukWVs8jTUiOA@mail.gmail.com>
Message-ID: <CAMpsgwYf9kfT5HcdjMXyiVRUuK1W-6iwimrj00kSPKwsVvnEdg@mail.gmail.com>

2015-02-10 14:56 GMT+01:00 Paul Moore <p.f.moore at gmail.com>:
> On 10 February 2015 at 13:42, Victor Stinner <victor.stinner at gmail.com> wrote:
>> Maybe I should repeat the information somewhere else in the documentation?
>
> Not many of the code samples seem to include loop.close().

Which code samples? I tried to ensure that all code snippets in the
asyncio doc and all examples in the Tulip repository call loop.close().

> Maybe they should?

Yes.

> For example, if I have a loop.run_forever() call, should it be
> enclosed in try: ... finally: loop.close() to ensure the loop is
> closed?

"loop.run_forever(); loop.close()" should be fine. You may use
try/finally if you don't want to register signal handlers for
SIGINT/SIGTERM (which is not supported on Windows yet...).
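For reference, a minimal sketch of the try/finally pattern under
discussion; the loop.call_soon(loop.stop) call is a stand-in for real
work, just so run_forever() returns immediately:

```python
import asyncio

loop = asyncio.new_event_loop()
try:
    loop.call_soon(loop.stop)  # placeholder work: stop the loop right away
    loop.run_forever()
finally:
    loop.close()               # the loop is closed even if run_forever raises
print(loop.is_closed())  # True
```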

Victor

From p.f.moore at gmail.com  Tue Feb 10 15:49:09 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Tue, 10 Feb 2015 14:49:09 +0000
Subject: [Python-ideas] Hooks into the IO system to intercept raw file
	reads/writes
In-Reply-To: <CAMpsgwYf9kfT5HcdjMXyiVRUuK1W-6iwimrj00kSPKwsVvnEdg@mail.gmail.com>
References: <CACac1F8Nm9Sp+Hs1O+TgOJfsDDTRttomUB8DoZWHr7UvEb3E2g@mail.gmail.com>
 <CAP7+vJJYS6GW1SEf3Y=ESi9e3zW5usXyWzoX6j1e2ACjbZNghw@mail.gmail.com>
 <CACac1F9SXRT88-WxpiF5j6QsPOzCbu2KGAAWCQSZn3J70BF0xg@mail.gmail.com>
 <CAP7+vJLaCDQUgN9u+wkFrkC3zfRrE=WG-78PsL5R4Bw__u3Mtg@mail.gmail.com>
 <CACac1F-ZZzBK4aO=qmD=o4zyNAZhy2bHNcOTuifTdaL7jdvKeQ@mail.gmail.com>
 <54D1D92E.7050102@canterbury.ac.nz>
 <CACac1F_-nm95YaGUrccK+mnpndGvzw6Wfsb4p=teZoFvObUG=w@mail.gmail.com>
 <CACac1F8AO2NHrDmLqa=c1165KZ3HaxTWd5_NqT=dehxJ5cRD2g@mail.gmail.com>
 <54D2034B.2000205@canterbury.ac.nz>
 <CACac1F_3TpHnbcFaf6TLHtpN1AOYaHvfdUPKEVJuhY+7wpEHhw@mail.gmail.com>
 <54D27F33.3050204@canterbury.ac.nz>
 <CAMpsgwa3yoe2+EhnDidF3zFWgShMErkmt8ECEiYh_76G-KYvnw@mail.gmail.com>
 <CACac1F8neUCGL7fx6LTVM5zhBQOD=LeymO_ZtDukWVs8jTUiOA@mail.gmail.com>
 <CAMpsgwYf9kfT5HcdjMXyiVRUuK1W-6iwimrj00kSPKwsVvnEdg@mail.gmail.com>
Message-ID: <CACac1F-=uuck9YkzSW2=pA-6DsZfwut_7p-2T1PsDbdF4S5Myw@mail.gmail.com>

On 10 February 2015 at 14:04, Victor Stinner <victor.stinner at gmail.com> wrote:
>>
>> Not many of the code samples seem to include loop.close().
>
> Which code samples? I tried to ensure that all code snippets in
> asyncio doc and all examples in Tulip repository call loop.close().

For example, in
https://docs.python.org/dev/library/asyncio-dev.html#detect-exceptions-never-consumed
the first example doesn't close the loop. Neither of the fixes given
closes the loop either.

>> Maybe they should?
>
> Yes.
>
>> For example, if I have a loop.run_forever() call, should it be
>> enclosed in try: ... finally: loop.close() to ensure the loop is
>> closed?
>
> "loop.run_forever(); loop.close()" should be fine. You may use
> try/finally if you don't want to register signal handlers for
> SIGINT/SIGTERM (which is not supported on Windows yet...).

Hmm, so if I write a server and hit Ctrl-C to exit it, what happens?
That's how non-asyncio things like http.server typically let the user
quit.
Paul

From eduardbpy at gmail.com  Tue Feb 10 17:04:47 2015
From: eduardbpy at gmail.com (Eduard Bondarenko)
Date: Tue, 10 Feb 2015 18:04:47 +0200
Subject: [Python-ideas] Add 'use warnings' directive, like in Perl
Message-ID: <CAGJTDyus2Nua4U2NANjaoW_gfvHyChR7aDSs_7gZoUP=Mr0ffQ@mail.gmail.com>

Hello group,

my name is Eduard. I came to the Python community with an idea to add
(implicitly or explicitly) a 'use warnings' directive to Python
programs.

I think everyone who has worked with Perl understands what I am talking
about. For everyone else, I will explain the idea.

Actually, this is my first experience writing to a community like this,
so excuse me if you find mistakes or oddities. Also, I do not know
whether you have already discussed this topic; in any case, sorry.

So, imagine that you have a program:

#!/usr/bin/python

word = raw_input("Enter line : ")

if word == "hello":
  print ("You wrote \'hello\'")
else:
  if world == "buy": #Error! should be word not world
    print "Buy"
else:
  iamnotfunction #Also error

This script contains two errors, and in both cases we will only learn
about them at runtime. Worst of all, you will not know about these
errors until someone enters anything other than the word "hello".

Try and except blocks do not solve this problem; with that approach we
still hit the problem at runtime.

What do I propose? I propose adding a 'use warnings' directive. This
directive would enable deeper verification. Like this:

#!/usr/bin/python

use warnings

word = raw_input("Enter line : ")

if word == "hello":
  print ("You wrote \'hello\'")
else:
  if world == "buy": #Error! should be word not world
    print "Buy"
else:
  iamnotfunction #Also error

Output:
Use of uninitialized value world in  eq (==) at test.py line ..
Useless use of a constant (iamnotfunction) in void context at test.py line
..

The user would see output like this, and the program would not start.


To my mind, it is a good idea to set this directive explicitly. If a
developer does not want to spend time on this checking, he can omit the
'use warnings' directive. It also will not break existing programs, and
developers will have a chance to add this line gradually.


Many thanks!

- Eduard
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150210/b2666e90/attachment.html>

From me at the-compiler.org  Tue Feb 10 17:12:41 2015
From: me at the-compiler.org (Florian Bruhin)
Date: Tue, 10 Feb 2015 17:12:41 +0100
Subject: [Python-ideas] Add 'use warnings' directive, like in Perl
In-Reply-To: <CAGJTDyus2Nua4U2NANjaoW_gfvHyChR7aDSs_7gZoUP=Mr0ffQ@mail.gmail.com>
References: <CAGJTDyus2Nua4U2NANjaoW_gfvHyChR7aDSs_7gZoUP=Mr0ffQ@mail.gmail.com>
Message-ID: <20150210161240.GB24753@tonks>

Hi,

* Eduard Bondarenko <eduardbpy at gmail.com> [2015-02-10 18:04:47 +0200]:
> What I propose ? I propose to add 'use warnings' directive. This directive
> will provide deeply verification. Like this:

It probably doesn't make sense to add something like this to CPython
itself - but there are many third-party tools that do this kind of
check (and which I can recommend using!): static analyzers/linters:

https://pypi.python.org/pypi/pyflakes
https://pypi.python.org/pypi/pep8
https://pypi.python.org/pypi/flake8 (a wrapper around pyflakes/pep8)
http://www.pylint.org/

Florian

-- 
http://www.the-compiler.org | me at the-compiler.org (Mail/XMPP)
   GPG: 916E B0C8 FD55 A072 | http://the-compiler.org/pubkey.asc
         I love long mails! | http://email.is-not-s.ms/
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150210/38f544d7/attachment.sig>

From graffatcolmingov at gmail.com  Tue Feb 10 17:26:41 2015
From: graffatcolmingov at gmail.com (Ian Cordasco)
Date: Tue, 10 Feb 2015 10:26:41 -0600
Subject: [Python-ideas] Add 'use warnings' directive, like in Perl
In-Reply-To: <20150210161240.GB24753@tonks>
References: <CAGJTDyus2Nua4U2NANjaoW_gfvHyChR7aDSs_7gZoUP=Mr0ffQ@mail.gmail.com>
 <20150210161240.GB24753@tonks>
Message-ID: <CAN-Kwu3hDLBi4zF4MJ0BNKOmy4JeMKfm6CC01XDXnD29SjRO2g@mail.gmail.com>

Hi Eduard,

Welcome to Python!

On Tue, Feb 10, 2015 at 10:12 AM, Florian Bruhin <me at the-compiler.org> wrote:
> Hi,
>
> * Eduard Bondarenko <eduardbpy at gmail.com> [2015-02-10 18:04:47 +0200]:
>> What I propose ? I propose to add 'use warnings' directive. This directive
>> will provide deeply verification. Like this:
>
> It probably doesn't make sense to add something like this to CPython
> itself - but there are many third-party tools which do such kind of
> checks (and which I can recommend to use!), static analyzers/linters:
>
> https://pypi.python.org/pypi/pyflakes
> https://pypi.python.org/pypi/pep8
> https://pypi.python.org/pypi/flake8 (a wrapper around pyflakes/pep8)
> http://www.pylint.org/

So as a maintainer of/contributor to three of those, I agree. But
they're external dependencies, which I sense is something you don't
want.

The reality is that you can compile this code with some standard
library modules and they'll give you the feedback you want. If you
were to run this code, you'd get a NameError for the first line where
you expect a warning, and you would get a SyntaxError for the second
else statement after the first. pyflakes and flake8 turn those into
error codes for you. So really, Python already comes with the warnings
on, because doing

$ python test.py

Would first result in a SyntaxError (because of your else following
the first else) and then once that's been fixed, you'd get a
NameError.

Disclaimer: I didn't actually run the code so I may have the order of
errors wrong, but I think this is roughly correct.
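Ian's compile-time half of this can be checked with the stdlib
compile() built-in; a hedged sketch in Python 3 syntax, adapted from
the Python 2 snippet in the thread:

```python
# The stray second 'else' is rejected before any code runs:
bad_syntax = (
    "if word == 'hello':\n"
    "    pass\n"
    "else:\n"
    "    pass\n"
    "else:\n"
    "    pass\n"
)
try:
    compile(bad_syntax, "<test>", "exec")
    compiled = True
except SyntaxError:
    compiled = False
print(compiled)  # False: caught at compile time, as Ian describes
```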

Cheers,
Ian

From geoffspear at gmail.com  Tue Feb 10 17:40:22 2015
From: geoffspear at gmail.com (Geoffrey Spear)
Date: Tue, 10 Feb 2015 11:40:22 -0500
Subject: [Python-ideas] Add 'use warnings' directive, like in Perl
In-Reply-To: <CAN-Kwu3hDLBi4zF4MJ0BNKOmy4JeMKfm6CC01XDXnD29SjRO2g@mail.gmail.com>
References: <CAGJTDyus2Nua4U2NANjaoW_gfvHyChR7aDSs_7gZoUP=Mr0ffQ@mail.gmail.com>
 <20150210161240.GB24753@tonks>
 <CAN-Kwu3hDLBi4zF4MJ0BNKOmy4JeMKfm6CC01XDXnD29SjRO2g@mail.gmail.com>
Message-ID: <CAGifb9H+sF2Vzn_3c8jUNcLWLReswhBb_4RVKhg=wOapicTYuw@mail.gmail.com>

On Tue, Feb 10, 2015 at 11:26 AM, Ian Cordasco <graffatcolmingov at gmail.com>
wrote:
>
>
> $ python test.py
>
> Would first result in a SyntaxError (because of your else following
> the first else) and then once that's been fixed, you'd get a
> NameError.
>
> Disclaimer: I didn't actually run the code so I may have the order of
> errors wrong, but I think this is roughly correct.
>
>
The syntax error can be corrected with elif, but the OP is correct that the
NameError will only happen when that branch is actually run, not at
compile-time.
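A small sketch of that point, with a hypothetical name missing_name:
the undefined name compiles fine and only raises once its branch
actually executes:

```python
code = compile("if cond:\n    missing_name\n", "<demo>", "exec")

exec(code, {"cond": False})      # untaken branch: no error at all
try:
    exec(code, {"cond": True})   # branch runs: lookup of missing_name fails
    raised = False
except NameError:
    raised = True
print(raised)  # True
```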

Of course, this is probably a better argument for a test suite that
covers all of your branches, plus static analysis, than for adding
Perl-style pragmas to the language. Perl needed use warnings and use
strict because the language allows some horrible things; it didn't want
to make good programming mandatory at the expense of breaking existing
code, so instead it "suggested" pragmas that enforce some good habits.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150210/06c8058f/attachment.html>

From ryan at ryanhiebert.com  Tue Feb 10 17:14:28 2015
From: ryan at ryanhiebert.com (Ryan Hiebert)
Date: Tue, 10 Feb 2015 10:14:28 -0600
Subject: [Python-ideas] Add 'use warnings' directive, like in Perl
In-Reply-To: <CAGJTDyus2Nua4U2NANjaoW_gfvHyChR7aDSs_7gZoUP=Mr0ffQ@mail.gmail.com>
References: <CAGJTDyus2Nua4U2NANjaoW_gfvHyChR7aDSs_7gZoUP=Mr0ffQ@mail.gmail.com>
Message-ID: <9F4EF062-D00C-4548-A19C-0F7D3CD9CD1C@ryanhiebert.com>

Linters are actually quite good at finding this kind of mistake, and are the accepted way to deal with such checks.

Flake8 is a popular one you may wish to check out:
https://pypi.python.org/pypi/flake8 <https://pypi.python.org/pypi/flake8>

Ryan

> On Feb 10, 2015, at 10:04 AM, Eduard Bondarenko <eduardbpy at gmail.com> wrote:
> 
> Hello group,
> 
> my name is Eduard. I came into the Python community with an idea to add (implicitly or explicitly) 'use warnings' directive into the python's programs.
> 
> I think that everyone who worked with Perl understand what I am talking about. For the rest I will explain the idea.
> 
> Actually, this is my first experience for writing into the community like this, so excuse me if you found some mistakes or oddities. Also I do not know whether you are already talk about this topic..in any case - sorry.
> 
> So, imagine that you have a program:
> 
> #!/usr/bin/python
> 
> word = raw_input("Enter line : ")
> 
> if word == "hello":
>   print ("You wrote \'hello\'")
> else:
>   if world == "buy": #Error! should be word not world
>     print "Buy"
> else:
>   iamnotfunction #Also error
> 
> This script contains two errors. And in both cases we will know about it at runtime. And the most worst thing is that you will not know about these errors until someone enters anything other than the "hello" word..
> 
> Try and except blocks do not solve this problem. Within this approach we also receive problem at runtime.
> 
> What I propose ? I propose to add 'use warnings' directive. This directive will provide deeply verification. Like this:
> 
> #!/usr/bin/python
> 
> use warnings
> 
> word = raw_input("Enter line : ")
> 
> if word == "hello":
>   print ("You wrote \'hello\'")
> else:
>   if world == "buy": #Error! should be word not world
>     print "Buy"
> else:
>   iamnotfunction #Also error
> 
> Output:
> Use of uninitialized value world in  eq (==) at test.py line ..
> Useless use of a constant (iamnotfunction) in void context at test.py line ..
> 
> The user will see the output like this and the program will not start.
> 
> 
> To my mind the good idea is to explicitly set this directive. If developer does not want spend time for this checking, he can omit 'use warning' directive. Also it will not corrupt the existing programs. And developers will have a chance to add this line gradually.
> 
> 
> Many thanks! 
> 
> - Eduard
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150210/ae945cbe/attachment-0001.html>

From eduardbpy at gmail.com  Tue Feb 10 18:24:47 2015
From: eduardbpy at gmail.com (Eduard Bondarenko)
Date: Tue, 10 Feb 2015 19:24:47 +0200
Subject: [Python-ideas] Add 'use warnings' directive, like in Perl
Message-ID: <CAGJTDys7m2dBTGYgGuhxiC_3mpr7hNgcRmCeiVrSLk9+Vzigiw@mail.gmail.com>

Dear Ian Cordasco,



> So as a maintainer/contributor to three of those I agree. But they're
> external dependencies which I sense is something you don't want.
>
> The reality is that you can compile this code with some standard
> library modules and they'll give you the feedback you want. If you
> were to run this code you'd get a NameErrors for the first line where
> you expect a warning and you would get a SyntaxError for the second
> else statement after the first. pyflakes and flake8 turn those into
> error codes for you. So really, python already comes with the warnings
> on because doing
>
> $ python test.py
>
> Would first result in a SyntaxError (because of your else following
> the first else) and then once that's been fixed, you'd get a
> NameError.
>
> Disclaimer: I didn't actually run the code so I may have the order of
> errors wrong, but I think this is roughly correct.


I am sorry, I meant:

#!/usr/bin/python

word = raw_input("Enter line : ")

if word == "hello":
  print ("You enter \'hello\'")
else:
  if world == "buy": #Error! should be word not world
    print "Buy"
  else:
    dasdas

With this version there is no problem with the else.

Okay, we can split the example in two.

The first part is:

#!/usr/bin/python

word = raw_input("Enter line : ")

if word == "hello":
  print ("You enter \'hello\'")
else:
  if world == "buy": #Error! should be word not world
    print "Buy"

"NameError: name 'world' is not defined" - but only if word !=
'hello'!

The second one:

#!/usr/bin/python

word = raw_input("Enter line : ")

if word == "hello":
  print ("You enter \'hello\'")
else:
  if word == "buy": #Now correct!
    print "Buy"
  else:
    iamnotfunction #Error

"NameError: name 'iamnotfunction' is not defined" - but only if word !=
'hello' and word != "buy"!

So I can type whatever I want in the else block, and someone may only
detect it after several years..

And the first thing I have to do when programming in Python is download
an analyser tool.. I understand the importance of static analysis
tools, but in this case (checking simple things like variable and
function names) it is like using a sledgehammer to crack a nut.


Many thanks!

- Eduard

PS. With C++ or other languages I would use static analysis tools to
check genuinely bug-prone situations, but in new, modern Python I have
to use third-party tools to catch really simple errata.

Maybe I am missing something..
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150210/beff0031/attachment.html>

From thomas at kluyver.me.uk  Tue Feb 10 18:31:05 2015
From: thomas at kluyver.me.uk (Thomas Kluyver)
Date: Tue, 10 Feb 2015 09:31:05 -0800
Subject: [Python-ideas] Add 'use warnings' directive, like in Perl
In-Reply-To: <CAGJTDys7m2dBTGYgGuhxiC_3mpr7hNgcRmCeiVrSLk9+Vzigiw@mail.gmail.com>
References: <CAGJTDys7m2dBTGYgGuhxiC_3mpr7hNgcRmCeiVrSLk9+Vzigiw@mail.gmail.com>
Message-ID: <CAOvn4qjZppYpF=SZ+Ba9Qu3T094OiqWkXb2N194dLi7EeQc87Q@mail.gmail.com>

On 10 February 2015 at 09:24, Eduard Bondarenko <eduardbpy at gmail.com> wrote:

> And the first thing that I have to do programming in Python is to download
> analyser tool.
>

Most IDEs have some static analysis tool built in which will highlight
these kinds of mistakes. E.g. I use Pycharm, and that's more than capable
of pointing out undefined names for me.

Maybe we should consider including functionality like that in IDLE, but
that would require giving one static analysis tool the official blessing of
inclusion in the standard library, which would probably be controversial.
And I'm not sure many people use IDLE anyway.

Thomas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150210/326c1d53/attachment.html>

From chris.barker at noaa.gov  Tue Feb 10 19:07:26 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Tue, 10 Feb 2015 10:07:26 -0800
Subject: [Python-ideas] Add 'use warnings' directive, like in Perl
In-Reply-To: <CAGJTDys7m2dBTGYgGuhxiC_3mpr7hNgcRmCeiVrSLk9+Vzigiw@mail.gmail.com>
References: <CAGJTDys7m2dBTGYgGuhxiC_3mpr7hNgcRmCeiVrSLk9+Vzigiw@mail.gmail.com>
Message-ID: <CALGmxELjzWmNeUSO+yJ9wQ6OyC5FvMzJTbAs=K0KUTG1TRtRDg@mail.gmail.com>

On Tue, Feb 10, 2015 at 9:24 AM, Eduard Bondarenko <eduardbpy at gmail.com>
wrote:

> So I can type whatever I want in else block and someone can detect it
> after the several years..
>
> And the first thing that I have to do programming in Python is to download
> analyser tool..I understand the importance of static analyser tools, but
> in this case (to check simple things like a variables and functions names)
> it is like "To use a sledge-hammer to crack a nut.".
>

I  think what you are missing is that Python is a dynamic language -- there
are any number of things that are simply not known at run-time. Sure, you
can come up with simple examples that it seems obvious are an error, but
there is no clear place to draw a line.

So the "right" thing to do is use linters and unit tests and test coverage
tools.

_maybe_ there should be some of these in the stdlib, but so far, it's
seemed better to have an active ecosystem of competing systems.

-Chris


-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150210/a0453b09/attachment.html>

From chris.barker at noaa.gov  Tue Feb 10 19:55:36 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Tue, 10 Feb 2015 10:55:36 -0800
Subject: [Python-ideas] Add 'use warnings' directive, like in Perl
In-Reply-To: <CANc-5UwbueT-rGwjweUKV3Et-s-rFv5-Pb8m2scxc7X7g8xttA@mail.gmail.com>
References: <CAGJTDys7m2dBTGYgGuhxiC_3mpr7hNgcRmCeiVrSLk9+Vzigiw@mail.gmail.com>
 <CALGmxELjzWmNeUSO+yJ9wQ6OyC5FvMzJTbAs=K0KUTG1TRtRDg@mail.gmail.com>
 <CANc-5UwbueT-rGwjweUKV3Et-s-rFv5-Pb8m2scxc7X7g8xttA@mail.gmail.com>
Message-ID: <CALGmxEJf36BY=ozT3ZqMApBcYSCy_HT+PCrqMigm0UmwbHcrQQ@mail.gmail.com>

On Tue, Feb 10, 2015 at 10:51 AM, Skip Montanaro <skip.montanaro at gmail.com>
wrote:

>
> On Tue, Feb 10, 2015 at 12:07 PM, Chris Barker <chris.barker at noaa.gov>
> wrote:
>
>> ... simply not known *at* run-time.
>
>
> Do you mean "simply not known *until* run-time"?
>

indeed -- the stuff we don't even know at run time is a whole different
matter ;-)

-Chris




> Cheers,
>
> Skip
>
>
>


-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150210/e5433893/attachment-0001.html>

From skip.montanaro at gmail.com  Tue Feb 10 19:51:04 2015
From: skip.montanaro at gmail.com (Skip Montanaro)
Date: Tue, 10 Feb 2015 12:51:04 -0600
Subject: [Python-ideas] Add 'use warnings' directive, like in Perl
In-Reply-To: <CALGmxELjzWmNeUSO+yJ9wQ6OyC5FvMzJTbAs=K0KUTG1TRtRDg@mail.gmail.com>
References: <CAGJTDys7m2dBTGYgGuhxiC_3mpr7hNgcRmCeiVrSLk9+Vzigiw@mail.gmail.com>
 <CALGmxELjzWmNeUSO+yJ9wQ6OyC5FvMzJTbAs=K0KUTG1TRtRDg@mail.gmail.com>
Message-ID: <CANc-5UwbueT-rGwjweUKV3Et-s-rFv5-Pb8m2scxc7X7g8xttA@mail.gmail.com>

On Tue, Feb 10, 2015 at 12:07 PM, Chris Barker <chris.barker at noaa.gov>
wrote:

> ... simply not known *at* run-time.


Do you mean "simply not known *until* run-time"?

Cheers,

Skip
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150210/6127be42/attachment.html>

From eduardbpy at gmail.com  Tue Feb 10 20:53:17 2015
From: eduardbpy at gmail.com (Eduard Bondarenko)
Date: Tue, 10 Feb 2015 21:53:17 +0200
Subject: [Python-ideas] Add 'use warnings' directive, like in Perl
In-Reply-To: <CALGmxEJf36BY=ozT3ZqMApBcYSCy_HT+PCrqMigm0UmwbHcrQQ@mail.gmail.com>
References: <CAGJTDys7m2dBTGYgGuhxiC_3mpr7hNgcRmCeiVrSLk9+Vzigiw@mail.gmail.com>
 <CALGmxELjzWmNeUSO+yJ9wQ6OyC5FvMzJTbAs=K0KUTG1TRtRDg@mail.gmail.com>
 <CANc-5UwbueT-rGwjweUKV3Et-s-rFv5-Pb8m2scxc7X7g8xttA@mail.gmail.com>
 <CALGmxEJf36BY=ozT3ZqMApBcYSCy_HT+PCrqMigm0UmwbHcrQQ@mail.gmail.com>
Message-ID: <CAGJTDyt67S2-GUHU6-a=q7d6P2KHzj8AK0ZJSMbT7505K8hXvw@mail.gmail.com>

Well, Perl is also a dynamic language, but besides the use of
additional analyser tools it provides a 'use warnings' directive, and
many Perl developers use this feature.

Why do you not want to add such a feature to Python?

If a developer does not want to use this feature and prefers static
analysis tools - okay. But what if someone wants to check errata (not
even errors) without the s.a. tools?

To my mind, some people simply do not know that even a dynamic language
can do some checks..

As a developer who came from the Perl programming language, I really do
not understand how you can accept this behaviour, and why you really do
not understand me..

Where is it written that a dynamic language should not check such
things?

It seems very strange that Python checks ordinary blocks like 'if',
'else' and 'def' statements - by that logic, we could just have a pile
of tools to check this too.

I do not want to turn Python into C++. I just want to correct this
situation, where you cannot say to the interpreter "please check at
least something!!"

Many thanks!

- Eduard

2015-02-10 20:55 GMT+02:00 Chris Barker <chris.barker at noaa.gov>:

> On Tue, Feb 10, 2015 at 10:51 AM, Skip Montanaro <skip.montanaro at gmail.com
> > wrote:
>
>>
>> On Tue, Feb 10, 2015 at 12:07 PM, Chris Barker <chris.barker at noaa.gov>
>> wrote:
>>
>>> ... simply not known *at* run-time.
>>
>>
>> Do you mean "simply not known *until* run-time"?
>>
>
> indeed -- the stuff we don't even know at run time is a whole different
> matter ;-)
>
> -Chris
>
>
>
>
>> Cheers,
>>
>> Skip
>>
>>
>>
>
>
> --
>
> Christopher Barker, Ph.D.
> Oceanographer
>
> Emergency Response Division
> NOAA/NOS/OR&R            (206) 526-6959   voice
> 7600 Sand Point Way NE   (206) 526-6329   fax
> Seattle, WA  98115       (206) 526-6317   main reception
>
> Chris.Barker at noaa.gov
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150210/aa617db1/attachment.html>

From graffatcolmingov at gmail.com  Tue Feb 10 21:27:17 2015
From: graffatcolmingov at gmail.com (Ian Cordasco)
Date: Tue, 10 Feb 2015 14:27:17 -0600
Subject: [Python-ideas] Add 'use warnings' directive, like in Perl
In-Reply-To: <CAGJTDyt67S2-GUHU6-a=q7d6P2KHzj8AK0ZJSMbT7505K8hXvw@mail.gmail.com>
References: <CAGJTDys7m2dBTGYgGuhxiC_3mpr7hNgcRmCeiVrSLk9+Vzigiw@mail.gmail.com>
 <CALGmxELjzWmNeUSO+yJ9wQ6OyC5FvMzJTbAs=K0KUTG1TRtRDg@mail.gmail.com>
 <CANc-5UwbueT-rGwjweUKV3Et-s-rFv5-Pb8m2scxc7X7g8xttA@mail.gmail.com>
 <CALGmxEJf36BY=ozT3ZqMApBcYSCy_HT+PCrqMigm0UmwbHcrQQ@mail.gmail.com>
 <CAGJTDyt67S2-GUHU6-a=q7d6P2KHzj8AK0ZJSMbT7505K8hXvw@mail.gmail.com>
Message-ID: <CAN-Kwu23YbY-LhdUpTpxvm4CfM4KstQEw+v5UENC2MbGvL-zjQ@mail.gmail.com>

On Tue, Feb 10, 2015 at 1:53 PM, Eduard Bondarenko <eduardbpy at gmail.com> wrote:
> Well, Perl also dynamic language, but except the usage of the additional
> analyser tools he provides 'use warning' directive and many Perl's
> developers use this feature.

First of all, Python is not a "he". Python is an "it".

> Why you do not want to add such feature in Python ?
>
> If developer do not want to use this feature and want to use static analyser
> tools - okay. But what if someone wants to checks errata (not even an
> errors) without the s.a tools ?

You don't seem to understand that this functionality already exists.
Perl added it in the beginning because dependencies were the
antithesis of the Perl development ideology. Dependencies and tooling
are encouraged and accepted in the Python community meaning that the
interpreter itself doesn't need to be responsible for every possible
function a developer could possibly ever desire.

> To my mind some people simply do not know that even dynamic language can do
> some checks..

Except we use a dynamic language to build the checks that we have...
so we clearly already know this.

> I am, as a developer which came from Perl programming language really do not
> understand how you can agree with this behaviour and why you are really do
> not understand me..

We understand you. Some of us have come from the Perl community as
well. It doesn't mean we don't appreciate "use warnings;" when writing
Perl, it means we as a community understand that this doesn't belong
in the Python interpreter.

> Where it is written that dynamic language should not checks such things ?

It isn't. But perl doesn't do it by default either. It's "good style"
for a developer to turn that on, but it isn't required, and perl (as a
dynamic language) clearly thinks it shouldn't be checking those things
without a user asking it to. That same philosophy is at work here.
Python doesn't check these things unless you use tools to check them
for you. "use warnings" is a tool, it just isn't a tool you install
separately.
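
For what it's worth, a minimal sketch (my illustration, not anything from
the proposal) of the closest built-in analogue: Python's stdlib warnings
machinery, which governs runtime warnings rather than static checks, and
can even be told to escalate a warning category into an error:

```python
# Sketch: the warnings module is a built-in "tool" for controlling
# warnings at run time, loosely analogous to Perl's pragma.
import warnings

# Escalate DeprecationWarning into an exception for this interpreter run.
warnings.simplefilter("error", DeprecationWarning)

caught = False
try:
    warnings.warn("old API", DeprecationWarning)
except DeprecationWarning:
    caught = True
assert caught
```

The same filters can be set from the command line with `python -W error`,
without touching the code at all.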

> Seems very strange that Python checks ordinary blocks like 'if', 'else',
> 'def' statement and others..We could just have a big amount of tools to
> check this.

So now you're arguing for Python to stop checking the syntax of the
program for errors? I'm confused.

> I do not want to create C++ from Python. I just want to correct this
> situation, when you can not say to interpreter "please check at least
> something!!"

It does check something. It checks that you use valid syntax. It
checks that it can import the code you use in your code. Depending on
what else happens, it checks numerous other things too. So it's doing
exactly what you want, it's checking more than one thing, and you
asked for at least one, so your request is satisfied.

> Many thanks!
>
> - Eduard
>
>
> 2015-02-10 20:55 GMT+02:00 Chris Barker <chris.barker at noaa.gov>:
>>
>> On Tue, Feb 10, 2015 at 10:51 AM, Skip Montanaro
>> <skip.montanaro at gmail.com> wrote:
>>>
>>>
>>> On Tue, Feb 10, 2015 at 12:07 PM, Chris Barker <chris.barker at noaa.gov>
>>> wrote:
>>>>
>>>> ... simply not known at run-time.
>>>
>>>
>>> Do you mean "simply not known until run-time"?
>>
>>
>> indeed -- the stuff we don't even know at run time is a whole different
>> matter ;-)
>>
>> -Chris
>>
>>
>>
>>>
>>> Cheers,
>>>
>>> Skip
>>>
>>>
>>
>>
>>
>> --
>>
>> Christopher Barker, Ph.D.
>> Oceanographer
>>
>> Emergency Response Division
>> NOAA/NOS/OR&R            (206) 526-6959   voice
>> 7600 Sand Point Way NE   (206) 526-6329   fax
>> Seattle, WA  98115       (206) 526-6317   main reception
>>
>> Chris.Barker at noaa.gov
>
>
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/

From ethan at stoneleaf.us  Tue Feb 10 21:35:54 2015
From: ethan at stoneleaf.us (Ethan Furman)
Date: Tue, 10 Feb 2015 12:35:54 -0800
Subject: [Python-ideas] Add 'use warnings' directive, like in Perl
In-Reply-To: <CAGJTDyt67S2-GUHU6-a=q7d6P2KHzj8AK0ZJSMbT7505K8hXvw@mail.gmail.com>
References: <CAGJTDys7m2dBTGYgGuhxiC_3mpr7hNgcRmCeiVrSLk9+Vzigiw@mail.gmail.com>
 <CALGmxELjzWmNeUSO+yJ9wQ6OyC5FvMzJTbAs=K0KUTG1TRtRDg@mail.gmail.com>
 <CANc-5UwbueT-rGwjweUKV3Et-s-rFv5-Pb8m2scxc7X7g8xttA@mail.gmail.com>
 <CALGmxEJf36BY=ozT3ZqMApBcYSCy_HT+PCrqMigm0UmwbHcrQQ@mail.gmail.com>
 <CAGJTDyt67S2-GUHU6-a=q7d6P2KHzj8AK0ZJSMbT7505K8hXvw@mail.gmail.com>
Message-ID: <54DA6BAA.9080102@stoneleaf.us>

On 02/10/2015 11:53 AM, Eduard Bondarenko wrote:
>
> Well, Perl also dynamic language, but except the usage of the additional analyser tools he provides 'use warning'
> directive and many Perl's developers use this feature.

As has been mentioned, part of the reason for 'use warning' in Perl is because without it, Perl will keep running but
with wrong data; in Python, you get a crash, which (more) clearly shows the source of the problem.

> Why you do not want to add such feature in Python ?

Because it is not necessary.  Writing correct programs involves a lot of steps, which includes proper unit tests.  Those
tests should uncover such things as unused or misspelled names; many use linters, which can also do that.

> If developer do not want to use this feature and want to use static analyser tools - okay. But what if someone wants to
> checks errata (not even an errors) without the s.a tools ?

What do you mean by errata?  Attempting to use an undefined name is definitely an error.

> To my mind some people simply do not know that even dynamic language can do some checks..

That's part of learning the new language.

> I am, as a developer which came from Perl programming language really do not understand how you can agree with this
> behaviour and why you are really do not understand me.

Just because we do not agree with you does not mean we do not understand you.

> Where it is written that dynamic language should not checks such things ?

The language doesn't check anything -- tools do; sometimes, as in Perl, those tools are built in to the primary
executable; sometimes, as in Python, they are separate.

> Seems very strange that Python checks ordinary blocks like 'if', 'else', 'def' statement and others..We could just have
> a big amount of tools to check this.

Python checks what it needs to.  It doesn't need to check that a name is valid until it tries to use that name.  It is
entirely possible that a name will not exist unless certain things happen; a correct program will not attempt to use a
name that doesn't yet exist; Python cannot tell before executing the program whether that precondition has happened.

> I do not want to create C++ from Python. I just want to correct this situation, when you can not say to interpreter
> "please check at least something!!"

The situation is not wrong.  Listen, learn, and grow.

--
~Ethan~

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: OpenPGP digital signature
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150210/5ab427ef/attachment-0001.sig>

From spencerb21 at live.com  Tue Feb 10 22:01:12 2015
From: spencerb21 at live.com (Spencer Brown)
Date: Wed, 11 Feb 2015 07:01:12 +1000
Subject: [Python-ideas] Allow "assigning" to Ellipse to discard values.
Message-ID: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>

Basically, in any location where a variable name is provided for assignment (tuple unpacking, for loops, function parameters, etc) a variable name can be replaced by '...' to indicate that that value will not be used. This would be syntactically equivalent to del-ing the variable immediately after, except that the assignment never needs to be done.

Examples:
>>> tup =  ('val1', 'val2', (123, 456, 789))
>>> name, ..., (..., value, ...) = tup
instead of:
>>> name = tup[0]
>>> value = tup[2][1]
This shows slightly more clearly what format the tuple will be in, and provides some additional error-checking - the unpack will fail if the tuple is a different size or format.

>>> for ... in range(100):
>>>    pass
This particular case could be optimised by checking to see if the iterator has a len() method, and if so directly looping that many times in C instead of executing the __next__() method repeatedly.

>>> for i, ... in enumerate(object):
is equivalent to:
>>> for i in range(len(object)): 
The Ellipsis form is slightly clearer and works for objects without len(). 

>>> first, second, *... = iterator
In the normal case, this would extract the first two values, and exhaust the rest of the iterator. Perhaps this could be special-cased so the iterator is left alone, and the 3rd+ values are still available.

>>> def function(param1, ..., param3, key1=True, key2=False, **...):
>>>     pass
This could be useful when overriding functions in a subclass, or writing callback functions. Here we don't actually need the 2nd argument. The '**...' would be the only allowable form of Ellipsis in keyword arguments, and just allows any other keywords to be given to the function.

- Spencer
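
[For comparison, a sketch of how the examples above can be written today
with the conventional throwaway name "_" and itertools.islice standing in
for the proposed non-exhausting "*..." - my illustration, not part of the
proposal:]

```python
# Sketch: today's idioms for discarding values, no new syntax required.
import itertools

tup = ('val1', 'val2', (123, 456, 789))
name, _, (_, value, _) = tup  # "_" may be reused within one unpacking
assert (name, value) == ('val1', 456)

for _ in range(3):  # loop a fixed number of times, index unused
    pass

it = iter([1, 2, 3, 4])
first, second = itertools.islice(it, 2)  # take two without exhausting it
assert (first, second) == (1, 2)
assert list(it) == [3, 4]  # the remaining values are still available
```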

From chris.barker at noaa.gov  Tue Feb 10 22:12:25 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Tue, 10 Feb 2015 13:12:25 -0800
Subject: [Python-ideas] Add 'use warnings' directive, like in Perl
In-Reply-To: <CAGJTDyt67S2-GUHU6-a=q7d6P2KHzj8AK0ZJSMbT7505K8hXvw@mail.gmail.com>
References: <CAGJTDys7m2dBTGYgGuhxiC_3mpr7hNgcRmCeiVrSLk9+Vzigiw@mail.gmail.com>
 <CALGmxELjzWmNeUSO+yJ9wQ6OyC5FvMzJTbAs=K0KUTG1TRtRDg@mail.gmail.com>
 <CANc-5UwbueT-rGwjweUKV3Et-s-rFv5-Pb8m2scxc7X7g8xttA@mail.gmail.com>
 <CALGmxEJf36BY=ozT3ZqMApBcYSCy_HT+PCrqMigm0UmwbHcrQQ@mail.gmail.com>
 <CAGJTDyt67S2-GUHU6-a=q7d6P2KHzj8AK0ZJSMbT7505K8hXvw@mail.gmail.com>
Message-ID: <CALGmxE+yTWLD2CfJR4iUiqTSJi=2d98VKpb1zghn_adrxo_aZg@mail.gmail.com>

On Tue, Feb 10, 2015 at 11:53 AM, Eduard Bondarenko <eduardbpy at gmail.com>
wrote:

> Well, Perl also dynamic language, but except the usage of the additional
> analyser tools he provides 'use warning' directive and many Perl's
> developers use this feature.
>
> Why you do not want to add such feature in Python ?
>

I don't think anyone is saying that pre-run-time static analysis isn't
useful.

And I'd bet that most of the folks on this list use some or all of the
tools mentioned.

So the question is -- should some small subset of such analysis be built
into the interpreter? -- and I, for one, don't think it should, at least
at this point.

Seems very strange that Python checks ordinary blocks like 'if', 'else',
> 'def' statement and others..We could just have a big amount of tools to
> check this.
>

Python checks syntax before it compiles (or while it compiles), because,
well, it literally can't compile the code without correct syntax.  It
doesn't check things like names existing because those names could  be
created at run time.

If a line of code never runs during testing, then it can only be assumed to
be incorrect.

If it does run during tests, then the really quick and easy stuff like
mis-typed names will be caught right away. As there is no long compile-link
step, there isn't much to be gained by the interpreter catching these
things (that may not actually be errors) in the compile step.

-Chris


-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150210/a9f4d95a/attachment.html>

From eduardbpy at gmail.com  Tue Feb 10 22:23:22 2015
From: eduardbpy at gmail.com (Eduard Bondarenko)
Date: Tue, 10 Feb 2015 23:23:22 +0200
Subject: [Python-ideas] Add 'use warnings' directive, like in Perl
In-Reply-To: <CALGmxE+yTWLD2CfJR4iUiqTSJi=2d98VKpb1zghn_adrxo_aZg@mail.gmail.com>
References: <CAGJTDys7m2dBTGYgGuhxiC_3mpr7hNgcRmCeiVrSLk9+Vzigiw@mail.gmail.com>
 <CALGmxELjzWmNeUSO+yJ9wQ6OyC5FvMzJTbAs=K0KUTG1TRtRDg@mail.gmail.com>
 <CANc-5UwbueT-rGwjweUKV3Et-s-rFv5-Pb8m2scxc7X7g8xttA@mail.gmail.com>
 <CALGmxEJf36BY=ozT3ZqMApBcYSCy_HT+PCrqMigm0UmwbHcrQQ@mail.gmail.com>
 <CAGJTDyt67S2-GUHU6-a=q7d6P2KHzj8AK0ZJSMbT7505K8hXvw@mail.gmail.com>
 <CALGmxE+yTWLD2CfJR4iUiqTSJi=2d98VKpb1zghn_adrxo_aZg@mail.gmail.com>
Message-ID: <CAGJTDyvSOCN+DYB54Ucg=1VXbHWsV7b_EOW-O=ZvV70fHa0GXA@mail.gmail.com>

> First of all, Python is not a "he". Python is an "it".

Thank you, I will remember.

> Seems very strange that Python checks ordinary blocks like 'if', 'else',
> 'def' statement and others..We could just have a big amount of tools to
> check this.

> So now you're arguing for Python to stop checking the Syntax of the
> program for errors? I'm confused.

I am just kidding.

Well, I understood.

Thank you all for your participation.


2015-02-10 23:12 GMT+02:00 Chris Barker <chris.barker at noaa.gov>:

> On Tue, Feb 10, 2015 at 11:53 AM, Eduard Bondarenko <eduardbpy at gmail.com>
> wrote:
>
>> Well, Perl also dynamic language, but except the usage of the additional
>> analyser tools he provides 'use warning' directive and many Perl's
>> developers use this feature.
>>
>> Why you do not want to add such feature in Python ?
>>
>
> I don't think anyone is saying that pre-run-time static analysis isn't
> useful.
>
> And I'd bet that most of the folks on this list use some or all of the
> tools mentioned.
>
> So the question is -- should some small subset of such analysis be
> built-in to the interpreter? -- and I. for one, don't think it should, at
> least at this point.
>
> Seems very strange that Python checks ordinary blocks like 'if', 'else',
>> 'def' statement and others..We could just have a big amount of tools to
>> check this.
>>
>
> Python checks syntax before it compiles (or while it compiles), because,
> well, it literally can't compile the code without correct syntax.  It
> doesn't check things like names existing because those names could  be
> created at run time.
>
> If a line of code never runs during testing, then it can only be assumed
> to be incorrect.
>
> If it does run during tests, then the really quick and easy stuff like
> mis-typed names will be caught right away. As there is no long compile-link
> step, there isn't much to be gained by the interpreter catching these
> things (that may not actually be errors) in the compile step.
>
> -Chris
>
>
> --
>
> Christopher Barker, Ph.D.
> Oceanographer
>
> Emergency Response Division
> NOAA/NOS/OR&R            (206) 526-6959   voice
> 7600 Sand Point Way NE   (206) 526-6329   fax
> Seattle, WA  98115       (206) 526-6317   main reception
>
> Chris.Barker at noaa.gov
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150210/db377593/attachment.html>

From chris.barker at noaa.gov  Tue Feb 10 22:23:58 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Tue, 10 Feb 2015 13:23:58 -0800
Subject: [Python-ideas] Allow "assigning" to Ellipse to discard values.
In-Reply-To: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
Message-ID: <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>

On Tue, Feb 10, 2015 at 1:01 PM, Spencer Brown <spencerb21 at live.com> wrote:

> Basically, in any location where a variable name is provided for
> assignment (tuple unpacking, for loops, function parameters, etc) a
> variable name can be replaced by '...' to indicate that that value will not
> be used. This would be syntactically equivalent to del-ing the variable
> immediately after, except that the assignment never needs to be done.
>
> Examples:
> >>> tup =  ('val1', 'val2', (123, 456, 789))
> >>> name, ..., (..., value, ...) = tup
>

I generally handle this by using a really simple no-meaning name, like "_":

name, _, (_, value, _) = tup

granted, the _ sticks around and keeps the object alive, which may cause
issues. But is it often a big deal?

-Chris



-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150210/6d3e60c0/attachment-0001.html>

From shoyer at gmail.com  Tue Feb 10 22:24:51 2015
From: shoyer at gmail.com (Stephan Hoyer)
Date: Tue, 10 Feb 2015 13:24:51 -0800
Subject: [Python-ideas] Allow "assigning" to Ellipse to discard values.
In-Reply-To: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
Message-ID: <CAEQ_TveAXZnYrP+NFO-3ssUOdLdj7xwbhqcu=r3yLi15WucCFg@mail.gmail.com>

The usual convention for unused results is to assign to the variable "_".
That's actually two less characters than "...", and has the advantage of
requiring no new syntax or explicit support. Is there any particular reason
why that would not be suitable for these use cases? It looks like _ could
be a drop in replacement for ... in your examples.

Stephan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150210/790f1693/attachment.html>

From ckaynor at zindagigames.com  Tue Feb 10 22:26:11 2015
From: ckaynor at zindagigames.com (Chris Kaynor)
Date: Tue, 10 Feb 2015 13:26:11 -0800
Subject: [Python-ideas] Allow "assigning" to Ellipse to discard values.
In-Reply-To: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
Message-ID: <CALvWhxvsR=uMTyxGBdTZc=K8=qQF8rqDjPxWB3ZGdingLE-CkA@mail.gmail.com>

On Tue, Feb 10, 2015 at 1:01 PM, Spencer Brown <spencerb21 at live.com> wrote:
> Basically, in any location where a variable name is provided for assignment (tuple unpacking, for loops, function parameters, etc) a variable name can be replaced by '...' to indicate that that value will not be used. This would be syntactically equivalent to del-ing the variable immediately after, except that the assignment never needs to be done.

The canonical way to do this is to use the variable "_" (some guides
recommend "__" for various reasons) for those names. Note that it is
perfectly valid to assign to the same name multiple times in a
statement.

>
> Examples:
>>>> tup =  ('val1', 'val2', (123, 456, 789))
>>>> name, ..., (..., value, ...) = tup
> instead of:
>>>> name = tup[0]
>>>> value = tup[2][1]
> This shows slightly more clearly what format the tuple will be in, and provides some additional error-checking - the unpack will fail if the tuple is a different size or format.

Or, as it already works: "name, _, (_, value, _) = tup". There is not
really any benefit to the proposed syntax.

>
>>>> for ... in range(100):
>>>>    pass
> This particular case could be optimised by checking to see if the iterator has a len() method, and if so directly looping that many times in C instead of executing the __next__() method repeatedly.

Same here. "for _ in range(100)". Note, however, this does not use an
optimized form (_ could be referenced in the loop). I'm not sure how
much gain you would have in many cases from the optimization, and
attempting such an optimization could, in fact, slow stuff down in
some cases (extra look-ups and branches). Ultimately, such would
require profiling of the change, and also considerations as to whether
it could result in unexpected behavior.
>
>>>> for i, ... in enumerate(object):
> is equivalent to:
>>>> for i in range(len(object)):
> The Ellipse form is slightly more clear and works for objects without len().

As above: "for i, _ in enumerate(object):" already works. Again, there
is not really any benefit to the proposed syntax.

>
>>>> first, second, *... = iterator
> In the normal case, this would extract the first two values, and exhaust the rest of the iterator. Perhaps this could be special-cased so the iterator is left alone, and the 3rd+ values are still available.

I would be much more tempted to use:
    first = next(iterator)
    second = next(iterator)
for the case where I don't want to exhaust the iterator. I feel it is
much clearer as to the meaning.

>
>>>> def function(param1, ..., param3, key1=True, key2=False, **...):
>>>>     pass
> This could be useful when overriding functions in a subclass, or writing callback functions. Here we don't actually need the 2nd argument. The '**...' would be the only allowable form of Ellipse in keyword arguments, and just allows any other keywords to be given to the function.

Here is the first case where the "_" trick on its own does not work, as
you cannot use the same parameter name multiple times in a function
definition. I would imagine this has to do with named argument
passing. Similar tricks could still be used, however, by merely
appending numbers: "def function(param1, _0, param3, key1=True,
key2=False, **_1)".

In the case of matching signatures, I would be extremely cautious
about renaming any arguments or eliminating keyword arguments due to
the ability to pass any arguments by name.

For example, in the example provided (presuming logic in the naming of
the parameters in the base class and eliminating the ** part), a valid
call could look like: function(param3=a, param1=b, param2=c), which
would result in errors if the parameters got renamed.
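
[A sketch of the safer alternative implied here, using hypothetical class
and parameter names: accept-and-ignore via *args/**kwargs keeps keyword
calls working even though the override only needs one parameter:]

```python
# Sketch: the override accepts and ignores the arguments it doesn't use,
# so callers passing by name (param3=..., param1=...) still work.
class Base:
    def handler(self, param1, param2, param3):
        return param1 + param2 + param3

class Child(Base):
    def handler(self, param1, *_args, **_kwargs):
        return param1  # param2/param3 are accepted but unused

assert Child().handler(1, 2, 3) == 1
assert Child().handler(param3=3, param1=1, param2=2) == 1
```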

From random832 at fastmail.us  Tue Feb 10 22:33:20 2015
From: random832 at fastmail.us (random832 at fastmail.us)
Date: Tue, 10 Feb 2015 16:33:20 -0500
Subject: [Python-ideas] Allow "assigning" to Ellipse to discard values.
In-Reply-To: <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
Message-ID: <1423604000.371468.225837961.394D3535@webmail.messagingengine.com>

On Tue, Feb 10, 2015, at 16:23, Chris Barker wrote:
> granted, the _ sticks around and keeps the object alive, which may cause
> issues. But is it often a big deal?

Is cpython capable of noticing that the variable is never referenced (or
is only assigned) after this point and dropping it early? Might be a
good improvement to add if not.

From abarnert at yahoo.com  Tue Feb 10 22:35:37 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Tue, 10 Feb 2015 13:35:37 -0800
Subject: [Python-ideas] Allow "assigning" to Ellipse to discard values.
In-Reply-To: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
Message-ID: <80D4BACE-875E-40B6-AED9-0A51A7FB161D@yahoo.com>

On Feb 10, 2015, at 13:01, Spencer Brown <spencerb21 at live.com> wrote:

> Basically, in any location where a variable name is provided for assignment (tuple unpacking, for loops, function parameters, etc) a variable name can be replaced by '...' to indicate that that value will not be used.

The idiomatic way to do this has always been to assign to _. All of your examples work today with _, and I don't think they look significantly better with .... (Of course _ has the problem that when it's being used for gettext in an app, it can't be used for the disposable variable in the same app, but people have dealt with that for decades without any major complaints.)

So the only real substance to your proposal is that it opens up the possibility for compiler optimizations. But most of these seem like pretty narrow possibilities. For example, you suggest that "for ... in range(100):" could be optimized to a simple C loop. But "for i in range(100):" could just as easily be optimized to a simple C loop. If you can prove that range is still the builtin, and therefore that this yields exactly 100 times so you can skip the __next__ calls, surely you can also prove that it yields exactly the first 100 ints, so you can still skip the __next__ calls even if you do want those ints.

But it's pretty hard for a static optimizer to prove that range is still range, so you really can't skip the __next__ even in the ellipsis case. 

Either way, the only thing you're saving is an assignment--an extra addref/decref pair and a pointer copy. That's not _nothing_, but it's not that big a deal compared to, say, a __next__ call, or a typical loop body. And if you really need to remove those, you could pretty easily remove dead local assignments, which would optimize everyone's existing code automatically, instead of requiring them to write new code that's incompatible with 3.4 to get the benefit. Presumably the reason CPython doesn't do this is that the small benefit has never been worth the time it would take for anyone to implement and maintain the optimization.


> This would be syntactically equivalent to del-ing the variable immediately after, except that the assignment never needs to be done.
> 
> Examples:
>>>> tup =  ('val1', 'val2', (123, 456, 789))
>>>> name, ..., (..., value, ...) = tup
> instead of:
>>>> name = tup[0]
>>>> value = tup[2][1]
> This shows slightly more clearly what format the tuple will be in, and provides some additional error-checking - the unpack will fail if the tuple is a different size or format.
> 
>>>> for ... in range(100):
>>>>   pass
> This particular case could be optimised by checking to see if the iterator has a len() method, and if so directly looping that many times in C instead of executing the __next__() method repeatedly.
> 
>>>> for i, ... in enumerate(object):
> is equivalent to:
>>>> for i in range(len(object)):
> The Ellipse form is slightly more clear and works for objects without len(). 
> 
>>>> first, second, *... = iterator
> In the normal case, this would extract the first two values, and exhaust the rest of the iterator. Perhaps this could be special-cased so the iterator is left alone, and the 3rd+ values are still available.
> 
>>>> def function(param1, ..., param3, key1=True, key2=False, **...):
>>>>    pass
> This could be useful when overriding functions in a subclass, or writing callback functions. Here we don't actually need the 2nd argument. The '**...' would be the only allowable form of Ellipse in keyword arguments, and just allows any other keywords to be given to the function.
> 
> - Spencer
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/

From abarnert at yahoo.com  Tue Feb 10 22:45:02 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Tue, 10 Feb 2015 13:45:02 -0800
Subject: [Python-ideas] Allow "assigning" to Ellipse to discard values.
In-Reply-To: <1423604000.371468.225837961.394D3535@webmail.messagingengine.com>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <1423604000.371468.225837961.394D3535@webmail.messagingengine.com>
Message-ID: <156864E6-0C0E-430B-9390-5DD7623E6571@yahoo.com>

On Feb 10, 2015, at 13:33, random832 at fastmail.us wrote:

> On Tue, Feb 10, 2015, at 16:23, Chris Barker wrote:
>> granted, the _ sticks around and keeps the object alive, which may cause
>> issues. But is it often a big deal?
> 
> Is cpython capable of noticing that the variable is never referenced (or
> is only assigned) after this point and dropping it early? Might be a
> good improvement to add if not.

This seems pretty simple in concept, but a lot of code. Currently CPython only runs a peephole optimizer step; you'd need a whole-function optimizer step that runs after each function definition is complete. But then all it has to do is look for any locals that are never referenced and remove all assignments to them, skipping any functions that use eval of course. (This wouldn't get rid of unnecessary repeated assignments to global, closure, item, or attr, but that wouldn't matter for this use case.)

To play with this and see how much benefit you'd get, someone could write this pretty simply from Python with byteplay (especially since you don't have to write anything fully general and safe for the test) and run some timeits. (Of course to measure how much it slows down the compiler, and how much of your coding time it takes, and how risky the code looks, you'd have to actually do the work, but knowing how much benefit you're talking about may be enough information to guess that the cost isn't worth it.)
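
[One can already probe this from pure Python with the stdlib dis module -
a quick check (my sketch), showing that the dead store to "_" currently
survives compilation:]

```python
# Probe: CPython's compiler keeps the store to the unused "_" local;
# no dead-store elimination happens at this level.
import dis

def f(tup):
    name, _, rest = tup
    return name

ops = [ins.opname for ins in dis.get_instructions(f)]
assert "_" in f.__code__.co_varnames       # the throwaway local exists
assert any(op.startswith("STORE_FAST") for op in ops)  # and is stored to
```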

From Nikolaus at rath.org  Tue Feb 10 22:51:06 2015
From: Nikolaus at rath.org (Nikolaus Rath)
Date: Tue, 10 Feb 2015 13:51:06 -0800
Subject: [Python-ideas] Add 'use warnings' directive, like in Perl
In-Reply-To: <CAGJTDyt67S2-GUHU6-a=q7d6P2KHzj8AK0ZJSMbT7505K8hXvw@mail.gmail.com>
 (Eduard Bondarenko's message of "Tue, 10 Feb 2015 21:53:17 +0200")
References: <CAGJTDys7m2dBTGYgGuhxiC_3mpr7hNgcRmCeiVrSLk9+Vzigiw@mail.gmail.com>
 <CALGmxELjzWmNeUSO+yJ9wQ6OyC5FvMzJTbAs=K0KUTG1TRtRDg@mail.gmail.com>
 <CANc-5UwbueT-rGwjweUKV3Et-s-rFv5-Pb8m2scxc7X7g8xttA@mail.gmail.com>
 <CALGmxEJf36BY=ozT3ZqMApBcYSCy_HT+PCrqMigm0UmwbHcrQQ@mail.gmail.com>
 <CAGJTDyt67S2-GUHU6-a=q7d6P2KHzj8AK0ZJSMbT7505K8hXvw@mail.gmail.com>
Message-ID: <87fvadsaet.fsf@thinkpad.rath.org>

Eduard Bondarenko <eduardbpy-Re5JQEeQqe8AvxtiuMwx3w at public.gmane.org> writes:
> Well, Perl also dynamic language, but except the usage of the additional
> analyser tools he provides 'use warning' directive and many Perl's
> developers use this feature.
>
> Why you do not want to add such feature in Python ?

One reason would be because it doesn't exist in Perl either:

$ cat test.pl 
use warnings;
$foo = <STDIN>;
if($foo eq "bar") {
    print("hello");
} else {
   print(undefined_variable)
}

$ perl test.pl 
Name "main::undefined_variable" used only once: possible typo at test.pl line 7.
foo
print() on unopened filehandle undefined_variable at test.pl line 6, <STDIN> line 1.


Note that you only get the warning about the undefined variable when the
code actually enters that branch.
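For comparison, CPython behaves the same way here: the typo only surfaces as a NameError when that branch actually executes. A minimal sketch:

```python
def check(word):
    if word == "bar":
        return "hello"
    else:
        return undefined_variable  # NameError, but only when reached

print(check("bar"))            # fine: the bad branch never runs
try:
    check("baz")
except NameError as exc:
    print("runtime error:", exc)
```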

You are getting a warning at compile time because you used the variable
only once, but I don't think that's what you'd like to add to Python. Or
is it?

Best,
-Nikolaus


-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

             "Time flies like an arrow, fruit flies like a Banana."

From ckaynor at zindagigames.com  Tue Feb 10 22:51:04 2015
From: ckaynor at zindagigames.com (Chris Kaynor)
Date: Tue, 10 Feb 2015 13:51:04 -0800
Subject: [Python-ideas] Allow "assigning" to Ellipse to discard values.
In-Reply-To: <1423604000.371468.225837961.394D3535@webmail.messagingengine.com>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <1423604000.371468.225837961.394D3535@webmail.messagingengine.com>
Message-ID: <CALvWhxtyPU7SWRoC2zs1eQNWy6oisvy2Rc2+BVaXzZvU_5sU6w@mail.gmail.com>

On Tue, Feb 10, 2015 at 1:33 PM,  <random832 at fastmail.us> wrote:
> Is cpython capable of noticing that the variable is never referenced (or
> is only assigned) after this point and dropping it early? Might be a
> good improvement to add if not.

I'm not sure that such an optimization would be viable in the general
case, as dropping an instance early can have side effects. Consider the
following code (untested):

class MyClass:
    def __init__(self):
        global a
        a = 1
    def __del__(self):
        global a
        a = 2

def myFunction():
    b = MyClass()  # Note that b is never referenced again - it is
                   # local and not used in this scope or any inner scope.
    print(a)

myFunction()

Without the optimization (existing behavior), this will print 1. With
the optimization, the MyClass instance could be garbage collected early
(as it can no longer be referenced), and thus the code would print 2 (in
CPython; in other implementations it could be a race condition or
otherwise undetermined whether 1 or 2 is printed).

I'm not saying that it is a good idea to write such a class, but a
change to how variables are assigned would be backwards-incompatible,
and could result in very odd behavior in somebody's code.

Chris

From abarnert at yahoo.com  Tue Feb 10 23:09:11 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Tue, 10 Feb 2015 14:09:11 -0800
Subject: [Python-ideas] Add 'use warnings' directive, like in Perl
In-Reply-To: <CAN-Kwu23YbY-LhdUpTpxvm4CfM4KstQEw+v5UENC2MbGvL-zjQ@mail.gmail.com>
References: <CAGJTDys7m2dBTGYgGuhxiC_3mpr7hNgcRmCeiVrSLk9+Vzigiw@mail.gmail.com>
 <CALGmxELjzWmNeUSO+yJ9wQ6OyC5FvMzJTbAs=K0KUTG1TRtRDg@mail.gmail.com>
 <CANc-5UwbueT-rGwjweUKV3Et-s-rFv5-Pb8m2scxc7X7g8xttA@mail.gmail.com>
 <CALGmxEJf36BY=ozT3ZqMApBcYSCy_HT+PCrqMigm0UmwbHcrQQ@mail.gmail.com>
 <CAGJTDyt67S2-GUHU6-a=q7d6P2KHzj8AK0ZJSMbT7505K8hXvw@mail.gmail.com>
 <CAN-Kwu23YbY-LhdUpTpxvm4CfM4KstQEw+v5UENC2MbGvL-zjQ@mail.gmail.com>
Message-ID: <C85A1679-DC25-42FC-8514-74E3BB744402@yahoo.com>

Eduard, if you're going to suggest changes to Python, you really ought to first learn the current version, 3.4, and use it in your examples, rather than 2.7. (And use PEP8 style, too.)

In simple cases like this, it doesn't make any real difference, and people can understand what you're suggesting--but it may still make people subconsciously less receptive (as in, "if he's not even willing to keep up with Python and see what we've improved in the last decade, what are the chances he'll have any good insight that's worth paying attention to that we haven't already thought of?").

Also, you keep comparing Python to, along with perl, C++. But in C++, many people _do_ use linters and static analyzers. Whether things like the cpplint, clang SA, the Xcode/Visual Studio/Eclipse/etc. IDE features, etc. count as "third party" is kind of a tricky question (for many developers, "comes with VS" or "comes with Xcode" probably means "not third party"), but I don't think it's really an important question; they're features that aren't part of the language, or of every implementation, that many people rely on--just like pylint, flakes8, the PyCharm IDE features, etc. in Python. They can catch things that are usually but not always wrong, or legal but stylistically bad, or illegal but a conforming compiler isn't allowed to issue an error, or expensive to check for, etc.

On Feb 10, 2015, at 12:27, Ian Cordasco <graffatcolmingov at gmail.com> wrote:

> On Tue, Feb 10, 2015 at 1:53 PM, Eduard Bondarenko <eduardbpy at gmail.com> wrote:
>> Well, Perl also dynamic language, but except the usage of the additional
>> analyser tools he provides 'use warning' directive and many Perl's
>> developers use this feature.
> 
> First of all, Python is not a "he". Python is an "it".

I don't know; as the mother of other languages like Boo, maybe Python is a "she". Like the mighty Crimson Permanent Assurance, she sails, striking terror into the hearts of all those born from the C, especially off the coast of Java.

From rosuav at gmail.com  Tue Feb 10 23:15:30 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Wed, 11 Feb 2015 09:15:30 +1100
Subject: [Python-ideas] Add 'use warnings' directive, like in Perl
In-Reply-To: <C85A1679-DC25-42FC-8514-74E3BB744402@yahoo.com>
References: <CAGJTDys7m2dBTGYgGuhxiC_3mpr7hNgcRmCeiVrSLk9+Vzigiw@mail.gmail.com>
 <CALGmxELjzWmNeUSO+yJ9wQ6OyC5FvMzJTbAs=K0KUTG1TRtRDg@mail.gmail.com>
 <CANc-5UwbueT-rGwjweUKV3Et-s-rFv5-Pb8m2scxc7X7g8xttA@mail.gmail.com>
 <CALGmxEJf36BY=ozT3ZqMApBcYSCy_HT+PCrqMigm0UmwbHcrQQ@mail.gmail.com>
 <CAGJTDyt67S2-GUHU6-a=q7d6P2KHzj8AK0ZJSMbT7505K8hXvw@mail.gmail.com>
 <CAN-Kwu23YbY-LhdUpTpxvm4CfM4KstQEw+v5UENC2MbGvL-zjQ@mail.gmail.com>
 <C85A1679-DC25-42FC-8514-74E3BB744402@yahoo.com>
Message-ID: <CAPTjJmqUY_xdoP2paSfSz2swPdvO5MNNGcJrn_MzUPf6wYW0Dg@mail.gmail.com>

On Wed, Feb 11, 2015 at 9:09 AM, Andrew Barnert
<abarnert at yahoo.com.dmarc.invalid> wrote:
> (for many developers, "comes with VS" or "comes with Xcode" probably means "not third party")

For many, "comes with PyCharm" probably has the same feeling as
regards Python code.

ChrisA

From rosuav at gmail.com  Tue Feb 10 23:28:46 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Wed, 11 Feb 2015 09:28:46 +1100
Subject: [Python-ideas] Allow "assigning" to Ellipse to discard values.
In-Reply-To: <1423604000.371468.225837961.394D3535@webmail.messagingengine.com>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <1423604000.371468.225837961.394D3535@webmail.messagingengine.com>
Message-ID: <CAPTjJmoBQu-9co-fTUN6PWU0LQRamAs7SgRzkZiXgmG_Oe5kPQ@mail.gmail.com>

On Wed, Feb 11, 2015 at 8:33 AM,  <random832 at fastmail.us> wrote:
> On Tue, Feb 10, 2015, at 16:23, Chris Barker wrote:
>> granted, the _ sticks around and keeps the object alive, which may cause
>> issues. But is it often a big deal?
>
> Is cpython capable of noticing that the variable is never referenced (or
> is only assigned) after this point and dropping it early? Might be a
> good improvement to add if not.

CPython? Seems pretty unlikely. Other Pythons, like PyPy? Maybe... but
only for types without __del__. Consider:

class ReportDestruction:
    def __init__(self, msg): self.msg = msg
    def __del__(self): print("DESTROY:", self.msg)

def f():
    x = ReportDestruction("on leaving f()")
    ReportDestruction("Immediate")
    # ... other code ...

In current CPython, the second reporter will speak immediately, and
the first one will speak when the function terminates, unless the
"other code" creates a refloop, in which case it'll speak when the
loop's broken (eg by gc.collect()). It's perfectly possible for object
lifetimes to be _extended_. However, I don't know of any way that you
could - or would ever want to - _shorten_ the lifetime of an object.
There'll be a lot of code out there that assumes that objects stick
around until the end of the function.

For something like PyPy, which already has special cases for different
data types, it's reasonable that some might be disposed of early
(especially simple strings and integers, etc). But I doubt CPython
will get anything like this in the near future. It's a dangerous
optimization.

ChrisA

From abarnert at yahoo.com  Tue Feb 10 23:44:34 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Tue, 10 Feb 2015 14:44:34 -0800
Subject: [Python-ideas] Allow "assigning" to Ellipse to discard values.
In-Reply-To: <CALvWhxtyPU7SWRoC2zs1eQNWy6oisvy2Rc2+BVaXzZvU_5sU6w@mail.gmail.com>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <1423604000.371468.225837961.394D3535@webmail.messagingengine.com>
 <CALvWhxtyPU7SWRoC2zs1eQNWy6oisvy2Rc2+BVaXzZvU_5sU6w@mail.gmail.com>
Message-ID: <0A1B1A21-5AD5-44B5-BC59-3104F1AC0B36@yahoo.com>

On Feb 10, 2015, at 13:51, Chris Kaynor <ckaynor at zindagigames.com> wrote:

> On Tue, Feb 10, 2015 at 1:33 PM,  <random832 at fastmail.us> wrote:
>> Is cpython capable of noticing that the variable is never referenced (or
>> is only assigned) after this point and dropping it early? Might be a
>> good improvement to add if not.
> 
> I'm not sure that such would be viable in the general case, as
> assignment of classes could have side-effects. Consider the following
> code (untested):
> 
> class MyClass:
>    def __init__(self):
>        global a
>        a = 1
>    def __del__(self):
>        global a
>        a = 2
> 
> def myFunction():
>    b = MyClass() # Note that b is not referenced anywhere - it is
> local and not used in the scope or a internal scope.
>    print(a)
> 
> myFunction()
> 
> Without the optimization (existing behavior), this will print 1. With
> the optimization, MyClass could be garbage collected early (as it
> cannot be referenced), and thus the code would print 2 (in CPython,
> but it could also be a race condition or otherwise undetermined
> whether 1 or 2 will be printed).

Good point.

But does Python actually guarantee that a variable will be alive until the end of the scope even if never referenced?

That seems like the kind of thing Jython and PyPy would ensure for compatibility reasons even if it's not required (the same way, e.g., Jython ensures GIL-like sequential consistency on __setattr__ calls to normal dict-based classes), so it might be a bad idea to change it even if it's legal. But then it's more a practical question of how much working code would break, what the benefit is, etc.

Of course this same consideration also affects the original proposal. I can very easily write a class that does something that affects the global environment on __next__ but that's still a valid Sequence, so skipping the __next__ calls is even more dangerous than skipping the assignment...
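A minimal illustration of that last point (hypothetical class, relying on the old-style __getitem__ iteration protocol): the class is a perfectly valid sequence, yet skipping the per-item calls would also skip observable side effects.

```python
events = []

class NoisySeq:
    """A valid sequence whose iteration has side effects
    visible outside the loop."""
    def __init__(self, data):
        self.data = list(data)
    def __len__(self):
        return len(self.data)
    def __getitem__(self, i):
        events.append(i)      # observable side effect on every access
        return self.data[i]   # IndexError here ends old-style iteration

for _ in NoisySeq("abc"):
    pass

print(events)  # the loop touched indices 0 through 3, side effects and all
```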

From abarnert at yahoo.com  Tue Feb 10 23:54:11 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Tue, 10 Feb 2015 14:54:11 -0800
Subject: [Python-ideas] Add 'use warnings' directive, like in Perl
In-Reply-To: <CAPTjJmqUY_xdoP2paSfSz2swPdvO5MNNGcJrn_MzUPf6wYW0Dg@mail.gmail.com>
References: <CAGJTDys7m2dBTGYgGuhxiC_3mpr7hNgcRmCeiVrSLk9+Vzigiw@mail.gmail.com>
 <CALGmxELjzWmNeUSO+yJ9wQ6OyC5FvMzJTbAs=K0KUTG1TRtRDg@mail.gmail.com>
 <CANc-5UwbueT-rGwjweUKV3Et-s-rFv5-Pb8m2scxc7X7g8xttA@mail.gmail.com>
 <CALGmxEJf36BY=ozT3ZqMApBcYSCy_HT+PCrqMigm0UmwbHcrQQ@mail.gmail.com>
 <CAGJTDyt67S2-GUHU6-a=q7d6P2KHzj8AK0ZJSMbT7505K8hXvw@mail.gmail.com>
 <CAN-Kwu23YbY-LhdUpTpxvm4CfM4KstQEw+v5UENC2MbGvL-zjQ@mail.gmail.com>
 <C85A1679-DC25-42FC-8514-74E3BB744402@yahoo.com>
 <CAPTjJmqUY_xdoP2paSfSz2swPdvO5MNNGcJrn_MzUPf6wYW0Dg@mail.gmail.com>
Message-ID: <48F578E9-009F-4217-8005-9C019925B5B6@yahoo.com>

On Feb 10, 2015, at 14:15, Chris Angelico <rosuav at gmail.com> wrote:

> On Wed, Feb 11, 2015 at 9:09 AM, Andrew Barnert
> <abarnert at yahoo.com.dmarc.invalid> wrote:
>> (for many developers, "comes with VS" or "comes with Xcode" probably means "not third party")
> 
> For many, "comes with PyCharm" probably has the same feeling as
> regards Python code.

Sure, and there are probably others for whom "comes with Anaconda" does (one user on Stack Overflow refused to accept that IPython wasn't part of standard Python, although he knew the qtconsole mode wasn't). So I guess you're right, it's an even trickier question than I said. :)

From tjreedy at udel.edu  Wed Feb 11 00:40:41 2015
From: tjreedy at udel.edu (Terry Reedy)
Date: Tue, 10 Feb 2015 18:40:41 -0500
Subject: [Python-ideas] Add 'use warnings' directive, like in Perl
In-Reply-To: <CAGJTDyus2Nua4U2NANjaoW_gfvHyChR7aDSs_7gZoUP=Mr0ffQ@mail.gmail.com>
References: <CAGJTDyus2Nua4U2NANjaoW_gfvHyChR7aDSs_7gZoUP=Mr0ffQ@mail.gmail.com>
Message-ID: <mbe4tu$mjb$1@ger.gmane.org>

On 2/10/2015 11:04 AM, Eduard Bondarenko wrote:

> Actually, this is my first experience for writing into the community
> like this, so excuse me if you found some mistakes or oddities.

Code should be posted as plain text, without markup such as *s around 
identifiers.

> #!/usr/bin/python
>
> *word* = raw_input("Enter line : ")
>
> if *word* == "hello":
>    print ("You wrote \'hello\'")
> else:
>    if *world* == "buy": #Error! should be word not world
>      print "Buy"

elif world == "buy": #Error! should be word not world

> else:
> *iamnotfunction* #Also error
>
> This script contains two errors. And in both cases we will know about it
> at runtime. And the most worst thing is that you will not know about
> these errors until someone enters anything other than the "hello" word..

The test suite should test each branch each way.
https://pypi.python.org/pypi/coverage/3.7.1
makes this easy.

-- 
Terry Jan Reedy


From tjreedy at udel.edu  Wed Feb 11 01:27:38 2015
From: tjreedy at udel.edu (Terry Reedy)
Date: Tue, 10 Feb 2015 19:27:38 -0500
Subject: [Python-ideas] Add 'use warnings' directive, like in Perl
In-Reply-To: <CAOvn4qjZppYpF=SZ+Ba9Qu3T094OiqWkXb2N194dLi7EeQc87Q@mail.gmail.com>
References: <CAGJTDys7m2dBTGYgGuhxiC_3mpr7hNgcRmCeiVrSLk9+Vzigiw@mail.gmail.com>
 <CAOvn4qjZppYpF=SZ+Ba9Qu3T094OiqWkXb2N194dLi7EeQc87Q@mail.gmail.com>
Message-ID: <mbe7lu$1l7$1@ger.gmane.org>

On 2/10/2015 12:31 PM, Thomas Kluyver wrote:

> Maybe we should consider including functionality like that in IDLE, but
> that would require giving one static analysis tool the official blessing
> of inclusion in the standard library, which would probably be
> controversial.

There was a tracker proposal to include one particular checker in Idle. 
  I rejected that for multiple reasons and suggested instead a framework 
for letting a user run *any* checker on editor contents.  This expanded 
proposal was part of last summer's GSOC project but needs a bit more work.

The inclusion of pip makes downloading 3rd party modules easier than it 
used to be.

-- 
Terry Jan Reedy


From stephen at xemacs.org  Wed Feb 11 03:30:04 2015
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Wed, 11 Feb 2015 11:30:04 +0900
Subject: [Python-ideas] Add 'use warnings' directive, like in Perl
In-Reply-To: <CAGJTDys7m2dBTGYgGuhxiC_3mpr7hNgcRmCeiVrSLk9+Vzigiw@mail.gmail.com>
References: <CAGJTDys7m2dBTGYgGuhxiC_3mpr7hNgcRmCeiVrSLk9+Vzigiw@mail.gmail.com>
Message-ID: <87pp9hnpsj.fsf@uwakimon.sk.tsukuba.ac.jp>

Eduard Bondarenko writes:

 > So I can type whatever I want in else block and someone can detect
 > it after the several years..

That's a shame.  However, such errors are not the only way for severe
errors to occur in rarely used branches.  As long as your static tool
can't handle *all* bugs, there's a cost/benefit question of whether
the (1) programmer effort to add and maintain the functionality
(including updating it for new features, especially keywords, and
formerly illegal syntax), and (2) the additional time to run it are
justified by the likely benefit in reduced bugginess.

The cost/benefit issue is of course always present, but it becomes
discouraging here because the only way to detect non-syntax bugs
before runtime is careful code review, and that is likely to catch
most syntax bugs -- the benefit to additional compile-time checking is
small.

Note that I'm not sure that the Python compiler currently knows enough
to provide the warnings you want.  Consider:

    def doit():
        print(a)
    a = "It works!"
    doit()

Because Python is a dynamic language, the names 'a' and 'doit' (and
'print', for that matter) must be looked up at runtime, returning
objects.  This means that compiling that program does not require that
the compiler know whether 'a', 'doit', and 'print' are defined!  So
adding those warnings might require major changes to the compiler.

 > And the first thing that I have to do programming in Python is to
 > download analyser tool..

The SEI would say "No no no!  The first thing you need to do is
*review your code*!"  IDE developers would say "why aren't you using
identifier completion?", etc, etc.  This "need" for warnings from the
python interpreter is very much a personal preference on your part.
There's nothing wrong with having such preferences, and nothing wrong
with expressing them here.  You might come up with something with wide
applicability to new users or to less frequent users that the heavy
users who are the bulk of developers would miss.

However, I think that for both typical new users (at least those whose
job title is marketer or sociologist or lawyer) and less frequent
users the non-syntax G'IGO problem (where G' = good and G = garbage)
is much larger than the occasional uncaught NameError.  Such users
don't branch as heavily as library module authors, typically branch
only for easily conceptualized cases or cases observed early in
development, and so often are quite well-served by "all errors are
fatal at runtime".  And of course this is all personal opinion, too,
but I suspect similar ones are shared by many core Python developers
(and I'm not one of those! so nothing I say is authoritative <wink />).

 > I understand the importance of static analyser tools, but in this
 > case (to check simple things like a variables and functions names)
 > it is like "To use a sledge-hammer to crack a nut.".

Again, this is a personal preference (and again, there's nothing wrong
with having one or expressing it).  The CPython community has
generally agreed that the virtual machine (including the compiler and
interpreter) should be kept as simple and clean of extra features as
possible.  A change to the compiler merely to warn about suspicious
constructs is considered the "sledgehammer" here.  On the other hand,
because the constructs used by the compiler are exposed as ordinary
classes, writing separate tools is fairly easy.
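To make that concrete, here is a toy sketch of such a separate tool built on the stdlib ast module, applied to the word/world typo from Eduard's example. It uses a single flat scope; real linters like pyflakes track scopes properly.

```python
import ast
import builtins

class NameCollector(ast.NodeVisitor):
    """Toy checker: record which names are assigned and which are read."""
    def __init__(self):
        self.assigned, self.loaded = set(), set()
    def visit_Name(self, node):
        if isinstance(node.ctx, ast.Store):
            self.assigned.add(node.id)
        elif isinstance(node.ctx, ast.Load):
            self.loaded.add(node.id)
        self.generic_visit(node)

src = (
    "word = input()\n"
    "if word == 'hello':\n"
    "    print('hi')\n"
    "else:\n"
    "    print(world)\n"   # typo: should be 'word'
)
collector = NameCollector()
collector.visit(ast.parse(src))
undefined = collector.loaded - collector.assigned - set(dir(builtins))
print(undefined)  # {'world'} -- caught without ever running the bad branch
```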

Also, I don't really understand your sledgehammer analogy.  It's true
that catching undefined names could be done separately, but running a
static analysis tool will catch many other kinds of bugs that any
given warning filter would not.  There are many simple (one-pass,
line-by-line) "linters" that only look within a single logical line
for suspicious constructs, others that will check things like
assignment before use.  You can pick the level of analysis you like.

 > PS. Within C++
                 ^ *** Warning *** Every line of this program contains
                   obscure and DANGEROUS constructs!

<wink />

 > or other languages I should use static analyser tools to check real
 > bug-prone situation, but in new, modern Python I should use third-
 > party tools to check the real simple errata.

I suppose from your mention of "third-party" tools in the case of C++
you mean "vendor-provided" static analyzer tools, though you didn't
write that.  The answer to the implied question is "yes and no".  C++
compiler optimizations are notoriously buggy and backward-
incompatible, and implemented differently by each vendor.  And there's
a tradition (maintained for "political" reasons in the case of GCC,
leading to RMS's resistance to a plugin architecture and exporting the
AST) of packing all functionality into a single command, even a single
program.  Vendor-specific, vendor-implemented checkers make a heck of
a lot of sense.  By contrast with GCC, LLVM's C and C++ tools look a
lot like Python, with competing implementations of quite a few modules
in and out of the LLVM project proper.

Now, in Python as I mentioned, it's relatively easy to write checkers,
and there are many of them.  There is some core agreement on the
required features, but the degree of agreement is considered
insufficient to justify adding a module to the standard library,
especially as many of the various checkers are in active development
getting new features.  This isn't a blanket opposition; years ago
support for "function annotations" was added, and just recently a
specific syntax for type hinting has been decided (pretty much) and
the PEP will probably be approved shortly, addressing both improved
diagnostics and optimization needs.

Finally (and I think nobody has mentioned this yet), downloading third-
party tools has always been fairly easy in Python due to the package
repository PyPI, and continuous effort has been expended over the
years on making it easier.  In fact, for many practical purposes the
"batteries included" philosophy has become impractical to support.  So
before you can begin to write a program, you need to download a module
(and its documentation) to support functionality needed by your
program.  This is true for almost everybody nowadays.  In response,
the download and installation tool "pip" was recently added to the
core distribution of both Python 2 and Python 3.  That kind of
generally useful effort is valued more in Python than simple addition
of features that most likely would not be used by most Python
programmers, not even implicitly in the standard library.  And the
community deliberately aims at such improvements.

Your mileage may vary, of course, but having observed the community
for over a decade now I believe the above to be a fairly accurate
expression of some of the core values of this community.

Regards,

Steve

From storchaka at gmail.com  Wed Feb 11 06:48:29 2015
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Wed, 11 Feb 2015 07:48:29 +0200
Subject: [Python-ideas] Allow "assigning" to Ellipse to discard values.
In-Reply-To: <80D4BACE-875E-40B6-AED9-0A51A7FB161D@yahoo.com>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <80D4BACE-875E-40B6-AED9-0A51A7FB161D@yahoo.com>
Message-ID: <mbeqfd$rub$1@ger.gmane.org>

On 10.02.15 23:35, Andrew Barnert wrote:
>All of your examples work today with _, and I don't think they look 
significantly better with ....

Except unpacking of an infinite sequence.



From ncoghlan at gmail.com  Wed Feb 11 08:00:56 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 11 Feb 2015 17:00:56 +1000
Subject: [Python-ideas] Allow "assigning" to Ellipse to discard values.
In-Reply-To: <0A1B1A21-5AD5-44B5-BC59-3104F1AC0B36@yahoo.com>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <1423604000.371468.225837961.394D3535@webmail.messagingengine.com>
 <CALvWhxtyPU7SWRoC2zs1eQNWy6oisvy2Rc2+BVaXzZvU_5sU6w@mail.gmail.com>
 <0A1B1A21-5AD5-44B5-BC59-3104F1AC0B36@yahoo.com>
Message-ID: <CADiSq7c-q87s-s3-h4kYcTmVbHHsRyq_uXBfLJJHfW_6AM7UcQ@mail.gmail.com>

On 11 February 2015 at 08:44, Andrew Barnert
<abarnert at yahoo.com.dmarc.invalid> wrote:
> But does Python actually guarantee that a variable will be alive until the end of the scope even if never referenced?

Yes, due to the way namespace semantics are defined. Bound names are
always referenced from the local namespace (so locals() can return
them), even if nothing visible to the compiler looks them up. That
reference doesn't go away until the namespace does.
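A quick CPython demonstration (the timing of __del__ here relies on CPython's reference counting):

```python
class Tracker:
    alive = True
    def __del__(self):
        Tracker.alive = False

def f():
    t = Tracker()   # t is never referenced again below
    # The binding still keeps the object alive until the frame goes away,
    # and it remains visible through the local namespace:
    assert 't' in locals()
    assert Tracker.alive

f()
print(Tracker.alive)  # False: destroyed only once f() returned
```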

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ianlee1521 at gmail.com  Wed Feb 11 08:21:20 2015
From: ianlee1521 at gmail.com (Ian Lee)
Date: Tue, 10 Feb 2015 23:21:20 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
Message-ID: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>

I mentioned this on the python-dev list [1] originally as a +1 to someone
else suggesting the idea [2]. It also came up in a response to my post that
I can't seem to find in the archives, so I've quoted it below [3].

As the subject says, the idea would be to add a "+" and "+=" operator to
dict that would provide the following behavior:

>>> {'x': 1, 'y': 2} + {'z': 3}
{'x': 1, 'y': 2, 'z': 3}

The only potentially non-obvious case I can see is when there are
duplicate keys, in which case the semantics could just be defined so that
the last setter wins, e.g.:

>>> {'x': 1, 'y': 2} + {'x': 3}
{'x': 3, 'y': 2}

Which is analogous to the example:

>>> new_dict = dict1.copy()
>>> new_dict.update(dict2)

"+=" would then essentially end up being an alias for
``dict.update(...)``.
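Those semantics can be sketched today as a dict subclass (hypothetical name AddableDict; the actual proposal would implement this on dict itself):

```python
class AddableDict(dict):
    def __add__(self, other):
        new = AddableDict(self)   # new object, like list + list
        new.update(other)         # on duplicate keys, the last setter wins
        return new
    def __iadd__(self, other):
        self.update(other)        # in place: an alias for dict.update
        return self

d = AddableDict({'x': 1, 'y': 2}) + {'x': 3}
print(d)   # {'x': 3, 'y': 2}
```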

I'd be happy to champion this as a PEP if the feedback / public opinion
heads in that direction.


[1] https://mail.python.org/pipermail/python-dev/2015-February/138150.html
[2] https://mail.python.org/pipermail/python-dev/2015-February/138116.html
[3] John Wong --

> Well looking at just list
> a + b yields new list
> a += b yields modified a
> then there is also .extend in list. etc.
> so do we want to follow list's footstep? I like + because + is more
> natural to read. Maybe this needs to be a separate thread. I am actually
> amazed to remember dict + dict is not possible... there must be a reason
> (performance??) for this...


Cheers,

~ Ian Lee

From eduardbpy at gmail.com  Wed Feb 11 09:33:53 2015
From: eduardbpy at gmail.com (Eduard Bondarenko)
Date: Wed, 11 Feb 2015 10:33:53 +0200
Subject: [Python-ideas] Add 'use warnings' directive, like in Perl
In-Reply-To: <87pp9hnpsj.fsf@uwakimon.sk.tsukuba.ac.jp>
References: <CAGJTDys7m2dBTGYgGuhxiC_3mpr7hNgcRmCeiVrSLk9+Vzigiw@mail.gmail.com>
 <87pp9hnpsj.fsf@uwakimon.sk.tsukuba.ac.jp>
Message-ID: <CAGJTDyudznEzF6YEbR5EZjfK7-SvHNJjQ_ZupuPsuuyBb7HDJA@mail.gmail.com>

Andrew,

Eduard, if you're going to suggest changes to Python, you really ought to
> first learn the current version, 3.4, and use it in your examples, rather
> than 2.7. (And use PEP8 style, too.)
>
> In simple cases like this, it doesn't make any real difference, and people
> can understand what you're suggesting--but it may still make people
> subconsciously less receptive (as in, "if he's not even willing to keep up
> with Python and see what we've improved in the last decade, what are the
> chances he'll have any good insight that's worth paying attention to that
> we haven't already thought of?").


I agree with you. It was my fault.


> expression of some of the core values of this community.
>
> Regards,
>
> Steve
>

From dennis at kaarsemaker.net  Wed Feb 11 10:03:11 2015
From: dennis at kaarsemaker.net (Dennis Kaarsemaker)
Date: Wed, 11 Feb 2015 10:03:11 +0100
Subject: [Python-ideas] Add 'use warnings' directive, like in Perl
In-Reply-To: <CAGJTDyus2Nua4U2NANjaoW_gfvHyChR7aDSs_7gZoUP=Mr0ffQ@mail.gmail.com>
References: <CAGJTDyus2Nua4U2NANjaoW_gfvHyChR7aDSs_7gZoUP=Mr0ffQ@mail.gmail.com>
Message-ID: <1423645391.3477.5.camel@kaarsemaker.net>

Hi Eduard,

What you describe is not at all equivalent to 'use warnings' in perl.
'use warnings' is a directive to enable/disable all or certain warnings
perl code may emit. The equivalent of this exists in python:

https://docs.python.org/3.1/library/warnings.html
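
For illustration, here is a minimal sketch of the enable/disable role
that module plays (the deprecated ``fetch()`` function is invented for
this example):

```python
import warnings

def fetch(url):
    # hypothetical deprecated function, invented for this example
    warnings.warn("fetch() is deprecated; use get()", DeprecationWarning)
    return "data"

# Promote a whole class of warnings to hard errors -- roughly the
# strictness a 'use warnings'-style directive would give you, for the
# warnings Python already emits at runtime:
with warnings.catch_warnings():
    warnings.simplefilter("error", DeprecationWarning)
    try:
        fetch("http://example.com")
    except DeprecationWarning as exc:
        print("blocked:", exc)
```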

On di, 2015-02-10 at 18:04 +0200, Eduard Bondarenko wrote:
> Hello group,
> 
> my name is Eduard. I came into the Python community with an idea to
> add (implicitly or explicitly) 'use warnings' directive into the
> python's programs.
> 
> I think that everyone who has worked with Perl understands what I am
> talking about. For the rest I will explain the idea.
> 
> Actually, this is my first time writing to a community like this, so
> excuse me if you find some mistakes or oddities. Also, I do not know
> whether you have already discussed this topic; in any case - sorry.
> 
> So, imagine that you have a program:
> 
> #!/usr/bin/python
> 
> word = raw_input("Enter line : ")
> 
> if word == "hello":
>   print ("You wrote \'hello\'")
> else:
>   if world == "buy": #Error! should be word not world
>     print "Buy"
> else:
>   iamnotfunction #Also error
> 
> This script contains two errors, and in both cases we will only find
> out about them at runtime. The worst part is that you will not know
> about these errors until someone enters anything other than the word
> "hello".
> 
> Try and except blocks do not solve this problem; with that approach
> we still hit the problem at runtime.
> 
> What do I propose? I propose to add a 'use warnings' directive. This
> directive would provide deep verification, like this:
> 
> #!/usr/bin/python
> 
> use warnings
> 
> word = raw_input("Enter line : ")
> 
> if word == "hello":
>   print ("You wrote \'hello\'")
> else:
>   if world == "buy": #Error! should be word not world
>     print "Buy"
> else:
>   iamnotfunction #Also error
> 
> Output:
> Use of uninitialized value world in  eq (==) at test.py line ..
> Useless use of a constant (iamnotfunction) in void context at test.py
> line ..
> 
> The user will see the output like this and the program will not start.
> 
> 
> To my mind, it is a good idea to set this directive explicitly. If a
> developer does not want to spend time on this checking, he can omit
> the 'use warnings' directive. It also will not break existing
> programs, and developers will have a chance to add this line
> gradually.
> 
> 
> 
> Many thanks! 
> 
> 
> - Eduard
> 
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/

-- 
Dennis Kaarsemaker
http://www.kaarsemaker.net


From eduardbpy at gmail.com  Wed Feb 11 11:09:10 2015
From: eduardbpy at gmail.com (Eduard Bondarenko)
Date: Wed, 11 Feb 2015 12:09:10 +0200
Subject: [Python-ideas] Add 'use warnings' directive, like in Perl
In-Reply-To: <1423645391.3477.5.camel@kaarsemaker.net>
References: <CAGJTDyus2Nua4U2NANjaoW_gfvHyChR7aDSs_7gZoUP=Mr0ffQ@mail.gmail.com>
 <1423645391.3477.5.camel@kaarsemaker.net>
Message-ID: <CAGJTDyuk7GgoFBVpYkxPMOzWyn4LU=-z2rWeVdbsdmyhQ3k1YA@mail.gmail.com>

Hello Dennis,

I know that; actually, I mean a simple version of Perl's 'use warnings'
directive. I do not mean showing warnings like "isn't numeric" for the
code below, or anything like that.

my $a = "2:" + 3;

And roughly speaking, I mentioned Perl's directive to more clearly show
what I mean.

About your reference: if I understood the provided material correctly,
those are runtime warnings, which is not at all what I mean.

2015-02-11 11:03 GMT+02:00 Dennis Kaarsemaker <dennis at kaarsemaker.net>:


From chris.barker at noaa.gov  Wed Feb 11 06:15:45 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Tue, 10 Feb 2015 21:15:45 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CAEbHw4bJOz1w82=Nt3ZU=3o4h-fx9FKzE+dfS4ng=N2-rNLoKQ@mail.gmail.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <20150206000438.1adeb478@fsol>
 <CALGmxEJXJQTGdpjttg2LNmjJVTrbuMW0bNiAC9-MFX65LJAORQ@mail.gmail.com>
 <20150206132852.2974b66c@fsol>
 <CACac1F8xBxEpSXU4EtqohJSXYKMbH_48gZo0=1XycwN8jD7ttw@mail.gmail.com>
 <CADiSq7du-M+tCTvaU0PdRCRN01x=2JMFS7Bw3Q13a4x0aSaY-w@mail.gmail.com>
 <CAEbHw4bJOz1w82=Nt3ZU=3o4h-fx9FKzE+dfS4ng=N2-rNLoKQ@mail.gmail.com>
Message-ID: <CALGmxE+E0O5y4J-NqEQomt-THEVWoPcnJSXrRXJrn-sN6XQKRA@mail.gmail.com>

OK,

If I kept track correctly, we didn't get any -1s on the whole concept --
thanks!

I went through all the comments and adjusted or clarified the PEP as best I
could. The new version is on GitHub, along with a draft implementation and
tests:

https://github.com/PythonCHB/close_pep

and should be up on the official site soon.

The draft code is in isclose.py and tests in test_isclose.py -- the other
files have more stuff in them for experimenting with alternate methods.

As far as implementation is concerned, I'm not sure what to do with
Decimal: simply cast to floats, or do type-based dispatch? (Decimal won't
auto-convert a float to Decimal, so it doesn't "just work" with the default
float tolerance.) But that's an implementation detail.

So what's the next step?

-Chris

Here's the latest version of the PEP

PEP: 485
Title: A Function for testing approximate equality
Version: $Revision$
Last-Modified: $Date$
Author: Christopher Barker <Chris.Barker at noaa.gov>
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 20-Jan-2015
Python-Version: 3.5
Post-History:


Abstract
========

This PEP proposes the addition of a function to the standard library
that determines whether one value is approximately equal or "close" to
another value.


Rationale
=========

Floating point values have limited precision, which makes them unable
to exactly represent some values and allows errors to accumulate with
repeated computation.  As a result, it is common advice to only use an
equality comparison in very specific situations.  Often an inequality
comparison fits the bill, but there are times (often in testing) where
the programmer wants to determine whether a computed value is "close"
to an expected value, without requiring them to be exactly equal. This
is common enough, particularly in testing, and not always obvious how
to do it, so it would be a useful addition to the standard library.


Existing Implementations
------------------------

The standard library includes the ``unittest.TestCase.assertAlmostEqual``
method, but it:

* Is buried in the unittest.TestCase class

* Is an assertion, so you can't easily use it as a general test at
  the command line, etc.

* Is an absolute difference test, while the appropriate measure of
  difference for floating point numbers is often a relative error,
  i.e. "are these two values within x% of each other?" -- particularly
  when the magnitude of the values is unknown a priori.

The numpy package has the ``allclose()`` and ``isclose()`` functions,
but they are only available with numpy.

The statistics package tests include an implementation, used for its
unit tests.

One can also find discussion and sample implementations on Stack
Overflow and other help sites.

Many other non-Python systems provide such a test, including the Boost C++
library and the APL language [4]_.

These existing implementations indicate that this is a common need and
not trivial to write oneself, making it a candidate for the standard
library.


Proposed Implementation
=======================

NOTE: this PEP is the result of extended discussions on the
python-ideas list [1]_.

The new function will go into the math module, and have the following
signature::

  isclose(a, b, rel_tol=1e-9, abs_tol=0.0)

``a`` and ``b``: are the two values to be tested for relative closeness

``rel_tol``: is the relative tolerance -- it is the amount of error
allowed, relative to the larger absolute value of a or b. For example,
to set a tolerance of 5%, pass rel_tol=0.05. The default tolerance is
1e-9, which assures that the two values are the same within about 9
decimal digits. ``rel_tol`` must be greater than 0.0.

``abs_tol``: is a minimum absolute tolerance level -- useful for
comparisons near zero.

Modulo error checking, etc, the function will return the result of::

  abs(a-b) <= max( rel_tol * max(abs(a), abs(b)), abs_tol )

The name, ``isclose``, is selected for consistency with the existing
``isnan`` and ``isinf``.
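
Modulo that error checking, the proposed test can be sketched directly
(this is an illustration of the formula above, not the eventual
math-module implementation):

```python
def isclose(a, b, rel_tol=1e-9, abs_tol=0.0):
    # sketch of the proposed formula; the real function would also
    # validate the tolerances and handle NaN/inf per IEEE 754
    return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)

print(isclose(0.1 + 0.2, 0.3))            # True: within 1e-9 relative
print(isclose(1.0, 1.0 + 1e-8))           # False: error is ~1e-8
print(isclose(1e-10, 0.0, abs_tol=1e-9))  # True, but only via abs_tol
```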

Handling of non-finite numbers
------------------------------

The IEEE 754 special values of NaN, inf, and -inf will be handled
according to IEEE rules. Specifically, NaN is not considered close to
any other value, including NaN. inf and -inf are only considered close
to themselves.


Non-float types
---------------

The primary use-case is expected to be floating point numbers.
However, users may want to compare other numeric types similarly. In
theory, it should work for any type that supports ``abs()``,
multiplication, comparisons, and subtraction.  The code will be written
and tested to accommodate these types:

* ``Decimal``

* ``int``

* ``Fraction``

* ``complex``: for complex, the absolute value of the complex values
  will be used for scaling and comparison. If a complex tolerance is
  passed in, the absolute value will be used as the tolerance.
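
As a sketch of why the formula ports to exact types, here it is applied
to ``Fraction`` (illustrative only; the final implementation may
dispatch differently):

```python
from fractions import Fraction

def isclose(a, b, rel_tol=Fraction(1, 10**9), abs_tol=0):
    # the proposed formula needs only abs(), *, <=, and max(),
    # all of which Fraction supports exactly
    return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)

third = Fraction(1, 3)
print(isclose(third, third + Fraction(1, 10**12)))  # True
print(isclose(third, third + Fraction(1, 10**6)))   # False
```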


Behavior near zero
------------------

Relative comparison is problematic if either value is zero. By
definition, no value is small relative to zero. And computationally,
if either value is zero, the difference is the absolute value of the
other value, and the computed absolute tolerance will be ``rel_tol``
times that value. When ``rel_tol`` is less than one, the difference will
never be less than the tolerance.

However, while mathematically correct, there are many use cases where
a user will need to know if a computed value is "close" to zero. This
calls for an absolute tolerance test. If the user needs to call this
function inside a loop or comprehension, where some, but not all, of
the expected values may be zero, it is important that both a relative
tolerance and absolute tolerance can be tested for with a single
function with a single set of parameters.

There is a similar issue if the two values to be compared straddle zero:
if a is approximately equal to -b, then a and b will never be computed
as "close".

To handle this case, an optional parameter, ``abs_tol``, can be used
to set a minimum tolerance used in the case of very small or zero
computed relative tolerance. That is, the values will always be
considered close if the difference between them is less than
``abs_tol``.
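
A quick sketch (restating the proposed formula) makes both failure
modes concrete:

```python
def isclose(a, b, rel_tol=1e-9, abs_tol=0.0):
    # the proposed formula, restated for this example
    return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)

# No value is relatively close to zero, even with a huge rel_tol:
print(isclose(1e-300, 0.0, rel_tol=0.999))   # False
# Values straddling zero fail the same way:
print(isclose(1e-12, -1e-12))                # False
# abs_tol rescues both cases:
print(isclose(1e-300, 0.0, abs_tol=1e-12))   # True
print(isclose(1e-12, -1e-12, abs_tol=1e-9))  # True
```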

The default absolute tolerance value is set to zero because there is
no value that is appropriate for the general case. It is impossible to
know an appropriate value without knowing the likely values expected
for a given use case. If all the values tested are on order of one,
then a value of about 1e-9 might be appropriate, but that would be far
too large if expected values are on order of 1e-9 or smaller.

Any non-zero default might result in users' tests passing totally
inappropriately. If, on the other hand, a test against zero fails the
first time with defaults, a user will be prompted to select an
appropriate value for the problem at hand in order to get the test to
pass.

NOTE: the author of this PEP has resolved to go back over many of his
tests that use the numpy ``allclose()`` function, which provides a
default absolute tolerance, and make sure that the default value is
appropriate.

If the user sets the rel_tol parameter to 0.0, then only the
absolute tolerance will affect the result. While not the goal of the
function, it does allow it to be used as a purely absolute tolerance
check as well.


Implementation
--------------

A sample implementation is available (as of Jan 22, 2015) on GitHub:

https://github.com/PythonCHB/close_pep/blob/master/is_close.py

This implementation has a flag that lets the user select which
relative tolerance test to apply -- this PEP does not suggest that
that flag be retained, but rather that the weak test be selected.

There are also drafts of this PEP and test code, etc. there:

https://github.com/PythonCHB/close_pep


Relative Difference
===================

There are essentially two ways to think about how close two numbers
are to each other:

Absolute difference: simply ``abs(a-b)``

Relative difference: ``abs(a-b)/scale_factor`` [2]_.

The absolute difference is trivial enough that this proposal focuses
on the relative difference.

Usually, the scale factor is some function of the values under
consideration, for instance:

1) The absolute value of one of the input values

2) The maximum absolute value of the two

3) The minimum absolute value of the two.

4) The absolute value of the arithmetic mean of the two

These lead to the following possibilities for determining if two
values, a and b, are close to each other.

1) ``abs(a-b) <= tol*abs(a)``

2) ``abs(a-b) <= tol * max( abs(a), abs(b) )``

3) ``abs(a-b) <= tol * min( abs(a), abs(b) )``

4) ``abs(a-b) <= tol * abs(a + b)/2``

NOTE: (2) and (3) can also be written as:

2) ``(abs(a-b) <= abs(tol*a)) or (abs(a-b) <= abs(tol*b))``

3) ``(abs(a-b) <= abs(tol*a)) and (abs(a-b) <= abs(tol*b))``

(Boost refers to these as the "weak" and "strong" formulations [3]_.)
These can be a tiny bit more computationally efficient, and thus are
used in the example code.
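
The weak/strong distinction only matters for large tolerances, which a
short sketch makes visible:

```python
def isclose_weak(a, b, tol):
    # Boost "weak": scale by the larger magnitude
    return abs(a - b) <= tol * max(abs(a), abs(b))

def isclose_strong(a, b, tol):
    # Boost "strong": scale by the smaller magnitude
    return abs(a - b) <= tol * min(abs(a), abs(b))

# A coarse 10% tolerance separates the two tests:
print(isclose_weak(9.0, 10.0, 0.1))    # True:  1.0 <= 0.1 * 10
print(isclose_strong(9.0, 10.0, 0.1))  # False: 1.0 >  0.1 * 9
# At tolerances near the proposed default they agree:
print(isclose_weak(1.0, 1.0 + 1e-12, 1e-9))   # True
print(isclose_strong(1.0, 1.0 + 1e-12, 1e-9)) # True
```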

Each of these formulations can lead to slightly different results.
However, if the tolerance value is small, the differences are quite
small. In fact, often less than available floating point precision.

How much difference does it make?
---------------------------------

When selecting a method to determine closeness, one might want to know
how much  of a difference it could make to use one test or the other
-- i.e. how many values are there (or what range of values) that will
pass one test, but not the other.

The largest difference is between options (2) and (3) where the
allowable absolute difference is scaled by either the larger or
smaller of the values.

Define ``delta`` to be the difference between the allowable absolute
tolerance defined by the larger value and that defined by the smaller
value. That is, the amount that the two input values need to be
different in order to get a different result from the two tests.
``tol`` is the relative tolerance value.

Assume that ``a`` is the larger value and that both ``a`` and ``b``
are positive, to make the analysis a bit easier. ``delta`` is
therefore::

  delta = tol * (a-b)


or::

  delta / tol = (a-b)


The largest absolute difference that would pass the test: ``(a-b)``,
equals the tolerance times the larger value::

  (a-b) = tol * a


Substituting into the expression for delta::

  delta / tol = tol * a


so::

  delta = tol**2 * a


For example, for ``a = 10``, ``b = 9``, ``tol = 0.1`` (10%):

maximum tolerance ``tol * a == 0.1 * 10 == 1.0``

minimum tolerance ``tol * b == 0.1 * 9.0 == 0.9``

delta = ``(1.0 - 0.9) = 0.1`` or  ``tol**2 * a = 0.1**2 * 10 = .1``
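
The worked example can be checked in a couple of lines (pure
arithmetic, allowing for float rounding):

```python
tol, a, b = 0.1, 10.0, 9.0

# a and b sit exactly at the edge of the weak test: a - b == tol * a,
# so the gap between the two allowed tolerances should equal tol**2 * a
delta = tol * a - tol * b
print(abs(delta - tol**2 * a) < 1e-15)  # True
```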

The absolute difference between the maximum and minimum tolerance
tests in this case could be substantial. However, the primary use
case for the proposed function is testing the results of computations.
In that case a relative tolerance is likely to be selected of much
smaller magnitude.

For example, a relative tolerance of ``1e-8`` is about half the
precision available in a python float. In that case, the difference
between the two tests is ``1e-8**2 * a`` or ``1e-16 * a``, which is
close to the limit of precision of a python float. If the relative
tolerance is set to the proposed default of 1e-9 (or smaller), the
difference between the two tests will be lost to the limits of
precision of floating point. That is, each of the four methods will
yield exactly the same results for all values of a and b.

In addition, in common use, tolerances are defined to 1 significant
figure -- that is, 1e-9 is specifying about 9 decimal digits of
accuracy. So the difference between the various possible tests is well
below the precision to which the tolerance is specified.


Symmetry
--------

A relative comparison can be either symmetric or non-symmetric. For a
symmetric algorithm:

``isclose(a,b)`` is always the same as ``isclose(b,a)``

If a relative closeness test uses only one of the values (such as (1)
above), then the result is asymmetric, i.e. isclose(a,b) is not
necessarily the same as isclose(b,a).

Which approach is most appropriate depends on what question is being
asked. If the question is: "are these two numbers close to each
other?", there is no obvious ordering, and a symmetric test is most
appropriate.

However, if the question is: "Is the computed value within x% of this
known value?", then it is appropriate to scale the tolerance to the
known value, and an asymmetric test is most appropriate.
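
A brief sketch of an asymmetric test (option (1) above) shows how
argument order changes the answer:

```python
def isclose_asym(a, b, tol):
    # option (1): tolerance scaled only by the first argument
    return abs(a - b) <= tol * abs(a)

print(isclose_asym(10.0, 9.05, 0.1))  # True:  0.95 <= 1.0
print(isclose_asym(9.05, 10.0, 0.1))  # False: 0.95 >  0.905
```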

From the previous section, it is clear that either approach would
yield the same or similar results in the common use cases. In that
case, the goal of this proposal is to provide a function that is least
likely to produce surprising results.

The symmetric approach provides an appealing consistency -- it
mirrors the symmetry of equality, and is less likely to confuse
people. A symmetric test also relieves the user of the need to think
about the order in which to set the arguments.  It was also pointed
out that there may be some cases where the order of evaluation may not
be well defined, for instance in the case of comparing a set of values
all against each other.

There may be cases when a user does need to know that a value is
within a particular range of a known value. In that case, it is easy
enough to simply write the test directly::

  if a-b <= tol*a:

(assuming a > b in this case). There is little need to provide a
function for this particular case.

This proposal uses a symmetric test.

Which symmetric test?
---------------------

There are three symmetric tests considered:

The case that uses the arithmetic mean of the two values requires that
the values either be added together before dividing by 2, which could
result in extra overflow to inf for very large numbers, or that each
value be divided by two before being added together, which could
result in underflow to zero for very small numbers. This effect would
only occur at the very limit of float values, but it was decided there
was no benefit to the method worth reducing the range of functionality
or adding the complexity of checking values to determine the order of
computation.

This leaves the Boost "weak" test (2), which uses the larger of the
two values to scale the tolerance, and the Boost "strong" test (3),
which uses the smaller. For small tolerances they yield the same
result, but this proposal uses the Boost "weak" test: it is symmetric
and provides a more useful result for very large tolerances.

Large tolerances
----------------

The most common use case is expected to be small tolerances -- on
order of the default 1e-9. However, there may be use cases where a
user wants to know if two fairly disparate values are within a
particular range of each other: "is a within 200% (rel_tol = 2.0) of
b?" In this case, the strong test would never indicate that two values
are within that range of each other if one of them is zero. The weak
test, however, would use the larger (non-zero) value for the test, and
thus return True if one value is zero. For example: is 0 within 200%
of 10? 200% of ten is 20, so the range within 200% of ten is -10 to
+30. Zero falls within that range, so it will return True.
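
The zero case described above, sketched with both symmetric tests:

```python
def isclose_weak(a, b, tol):
    return abs(a - b) <= tol * max(abs(a), abs(b))

def isclose_strong(a, b, tol):
    return abs(a - b) <= tol * min(abs(a), abs(b))

# "Is 0 within 200% of 10?"  (the range 200% around 10 is -10 to +30)
print(isclose_weak(0.0, 10.0, 2.0))    # True:  10 <= 2 * 10
print(isclose_strong(0.0, 10.0, 2.0))  # False: 10 >  2 * 0
```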

Defaults
========

Default values are required for the relative and absolute tolerance.

Relative Tolerance Default
--------------------------

The relative tolerance required for two values to be considered
"close" is entirely use-case dependent. Nevertheless, the relative
tolerance needs to be greater than 1e-16 (approximate precision of a
python float). The value of 1e-9 was selected because it is the
largest relative tolerance for which the various possible methods will
yield the same result, and it is also about half of the precision
available to a python float. In the general case, a good numerical
algorithm is not expected to lose more than about half of available
digits of accuracy, and if a much larger tolerance is acceptable, the
user should be considering the proper value for that case. Thus 1e-9 is
expected to "just work" for many cases.

Absolute tolerance default
--------------------------

The absolute tolerance value will be used primarily for comparing to
zero. The absolute tolerance required to determine if a value is
"close" to zero is entirely use-case dependent. There is also
essentially no bound to the useful range -- expected values would
conceivably be anywhere within the limits of a python float.  Thus a
default of 0.0 is selected.

If, for a given use case, a user needs to compare to zero, the test
will be guaranteed to fail the first time, and the user can select an
appropriate value.

It was suggested that comparing to zero is, in fact, a common use case
(evidence suggest that the numpy functions are often used with zero).
In this case, it would be desirable to have a "useful" default. Values
around 1-e8 were suggested, being about half of floating point
precision for values of around value 1.

However, to quote The Zen: "In the face of ambiguity, refuse the
temptation to guess." Guessing that users will most often be concerned
with values close to 1.0 would lead to spurious passing tests when used
with smaller values -- this is potentially more damaging than
requiring the user to thoughtfully select an appropriate value.


Expected Uses
=============

The primary expected use case is various forms of testing -- "are the
results computed near what I expect as a result?" This sort of test
may or may not be part of a formal unit testing suite. Such testing
could be used one-off at the command line, in an IPython notebook,
part of doctests, or simple asserts in an ``if __name__ == "__main__"``
block.

It would also be an appropriate function to use for the termination
criteria for a simple iterative solution to an implicit function::

    guess = something
    while True:
        new_guess = implicit_function(guess, *args)
        if isclose(new_guess, guess):
            break
        guess = new_guess


Inappropriate uses
------------------

One use case for floating point comparison is testing the accuracy of
a numerical algorithm. However, in this case, the numerical analyst
ideally would be doing careful error propagation analysis, and should
understand exactly what to test for. It is also likely that ULP (Unit
in the Last Place) comparison may be called for. While this function
may prove useful in such situations, it is not intended to be used in
that way.


Other Approaches
================

``unittest.TestCase.assertAlmostEqual``
---------------------------------------

(https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertAlmostEqual)

Tests that values are approximately (or not approximately) equal by
computing the difference, rounding to the given number of decimal
places (default 7), and comparing to zero.

This method is purely an absolute tolerance test, and does not address
the need for a relative tolerance test.

numpy ``isclose()``
-------------------

http://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.isclose.html

The numpy package provides the vectorized functions isclose() and
allclose(), for similar use cases as this proposal:

``isclose(a, b, rtol=1e-05, atol=1e-08, equal_nan=False)``

      Returns a boolean array where two arrays are element-wise equal
      within a tolerance.

      The tolerance values are positive, typically very small numbers.
      The relative difference (rtol * abs(b)) and the absolute
      difference atol are added together to compare against the
      absolute difference between a and b.

In this approach, the absolute and relative tolerances are added
together, rather than taking the larger of the two as this proposal's
``max()`` does. This is computationally simpler, and if the relative
tolerance is much larger than the absolute tolerance, the addition
will have little effect. However,
if the absolute and relative tolerances are of similar magnitude, then
the allowed difference will be about twice as large as expected.

This makes the function harder to understand, with no computational
advantage in this context.

Even more critically, if the values passed in are small compared to
the absolute  tolerance, then the relative tolerance will be
completely swamped, perhaps unexpectedly.

This is why, in this proposal, the absolute tolerance defaults to zero
-- the user will be required to choose a value appropriate for the
values at hand.
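
The difference between numpy's additive combination and this
proposal's ``max()`` can be sketched without numpy itself:

```python
def isclose_added(a, b, rel_tol, abs_tol):
    # numpy-style: absolute and relative tolerances are added
    return abs(a - b) <= rel_tol * abs(b) + abs_tol

def isclose_max(a, b, rel_tol, abs_tol):
    # this proposal: take whichever tolerance is larger
    return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)

# Similar tolerances: the additive form allows roughly twice the error:
print(isclose_added(1.0, 1.0 + 1.5e-8, 1e-8, 1e-8))  # True  (~2e-8 allowed)
print(isclose_max(1.0, 1.0 + 1.5e-8, 1e-8, 1e-8))    # False (~1e-8 allowed)
# Small values: a numpy-like default atol swamps the relative test:
print(isclose_added(1e-10, 3e-10, 1e-5, 1e-8))       # True despite ~200% error
```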


Boost floating-point comparison
-------------------------------

The Boost project ( [3]_ ) provides a floating point comparison
function. It is a symmetric approach, with both "weak" (larger of the
two relative errors) and "strong" (smaller of the two relative errors)
options. This proposal uses the Boost "weak" approach. There is no
need to complicate the API by providing the option to select different
methods when the results will be similar in most cases, and the user
is unlikely to know which to select in any case.


Alternate Proposals
-------------------


A Recipe
'''''''''

The primary alternate proposal was to not provide a standard library
function at all, but rather, provide a recipe for users to refer to.
This would have the advantage that the recipe could provide and
explain the various options, and let the user select that which is
most appropriate. However, that would require anyone needing such a
test to, at the very least, copy the function into their code base,
and select the comparison method to use.


``zero_tol``
''''''''''''

One possibility was to provide a zero tolerance parameter, rather than
the absolute tolerance parameter. This would be an absolute tolerance
that would only be applied in the case of one of the arguments being
exactly zero. This would have the advantage of retaining the full
relative tolerance behavior for all non-zero values, while allowing
tests against zero to work. However, it would also result in the
potentially surprising result that a small value could be "close" to
zero, but not "close" to an even smaller value. e.g., 1e-10 is "close"
to zero, but not "close" to 1e-11.
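The surprising behavior can be shown with a sketch of this rejected option (``is_close_zero_tol`` is hypothetical, written only to illustrate the point)::

```python
# Hypothetical sketch of the rejected zero_tol behavior: the absolute
# test applies only when one argument is exactly zero.
def is_close_zero_tol(a, b, rel_tol=1e-9, zero_tol=1e-8):
    if a == 0.0 or b == 0.0:
        return abs(a - b) <= zero_tol
    return abs(a - b) <= rel_tol * max(abs(a), abs(b))

print(is_close_zero_tol(1e-10, 0.0))    # True: 1e-10 is "close" to zero
print(is_close_zero_tol(1e-10, 1e-11))  # False: not "close" to 1e-11
```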


No absolute tolerance
'''''''''''''''''''''

Given the issues with comparing to zero, another possibility would
have been to only provide a relative tolerance, and let comparison to
zero fail. In this case, the user would need to do a simple absolute
test: ``abs(val) < zero_tol`` in the case where the comparison involved
zero.

However, this would not allow the same call to be used for a sequence
of values, such as in a loop or comprehension, making the function far
less useful. It is noted that the default ``abs_tol=0.0`` achieves the
same effect if the default is not overridden.
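For illustration, assuming the function as it later landed in Python 3.5 as ``math.isclose``: with the default ``abs_tol=0.0`` a comparison against zero simply fails, and overriding ``abs_tol`` lets one call handle a whole sequence::

```python
from math import isclose  # Python 3.5+; the function proposed by this PEP

# With the default abs_tol=0.0, a pure relative test is what you get,
# so comparisons against zero return False:
print(isclose(1e-12, 0.0))  # False

# Overriding abs_tol makes the same call usable in a comprehension over
# values that may include zero:
values = [1e-9, 0.0, 2.0]
target = 1e-12
close = [isclose(v, target, abs_tol=1e-8) for v in values]
print(close)  # [True, True, False]
```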

Other tests
''''''''''''

The other tests considered are all discussed in the Relative Error
section above.


References
==========

.. [1] Python-ideas list discussion threads

   https://mail.python.org/pipermail/python-ideas/2015-January/030947.html

   https://mail.python.org/pipermail/python-ideas/2015-January/031124.html

   https://mail.python.org/pipermail/python-ideas/2015-January/031313.html

.. [2] Wikipedia page on relative difference

   http://en.wikipedia.org/wiki/Relative_change_and_difference

.. [3] Boost project floating-point comparison algorithms

   http://www.boost.org/doc/libs/1_35_0/libs/test/doc/components/test_tools/floating_point_comparison.html

.. [4] 1976. R. H. Lathwell. APL comparison tolerance. Proceedings of
       the eighth international conference on APL, pages 255-258

   http://dl.acm.org/citation.cfm?doid=800114.803685

.. Bruce Dawson's discussion of floating point:

   https://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/


Copyright
=========

This document has been placed in the public domain.


..
   Local Variables:
   mode: indented-text
   indent-tabs-mode: nil
   sentence-end-double-space: t
   fill-column: 70
   coding: utf-8
   End:


-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150210/e6b547a6/attachment-0001.html>

From donald at stufft.io  Wed Feb 11 13:27:21 2015
From: donald at stufft.io (Donald Stufft)
Date: Wed, 11 Feb 2015 07:27:21 -0500
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
Message-ID: <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>


> On Feb 11, 2015, at 2:21 AM, Ian Lee <ianlee1521 at gmail.com> wrote:
> 
> I mentioned this on the python-dev list [1] originally as a +1 to someone else suggesting the idea [2]. It also came up in a response to my post that I can't seem to find in the archives, so I've quoted it below [3].
> 
> As the subject says, the idea would be to add a "+" and "+=" operator to dict that would provide the following behavior:
> 
> >>> {'x': 1, 'y': 2} + {'z': 3}
> {'x': 1, 'y': 2, 'z': 3}
> 
> With the only potentially non obvious case I can see then is when there are duplicate keys, in which case the syntax could just be defined that last setter wins, e.g.:
> 
> >>> {'x': 1, 'y': 2} + {'x': 3}
> {'x': 3, 'y': 2}
> 
> Which is analogous to the example:
> 
> >>> new_dict = dict1.copy()
> >>> new_dict.update(dict2)
> 
> With "+=" then essentially ending up being an alias for ``dict.update(...)``.
> 
> I'd be happy to champion this as a PEP if the feedback / public opinion heads in that direction. 
> 
> 
> [1] https://mail.python.org/pipermail/python-dev/2015-February/138150.html <https://mail.python.org/pipermail/python-dev/2015-February/138150.html>
> [2] https://mail.python.org/pipermail/python-dev/2015-February/138116.html <https://mail.python.org/pipermail/python-dev/2015-February/138116.html>
> [3] John Wong -- 
> Well looking at just list
> a + b yields new list
> a += b yields modified a
> then there is also .extend in list. etc.  
> so do we want to follow list's footstep? I like + because + is more natural to read. Maybe this needs to be a separate thread. I am actually amazed to remember dict + dict is not possible... there must be a reason (performance??) for this...
> 

I'd really like this change and I think that it makes sense. The only thing I'd change is that I think the | operator makes more sense than +. dicts are more like sets than they are like lists so a union operator makes more sense I think.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150211/41e9a350/attachment.html>

From abarnert at yahoo.com  Wed Feb 11 14:32:16 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Wed, 11 Feb 2015 05:32:16 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CALGmxE+E0O5y4J-NqEQomt-THEVWoPcnJSXrRXJrn-sN6XQKRA@mail.gmail.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <20150206000438.1adeb478@fsol>
 <CALGmxEJXJQTGdpjttg2LNmjJVTrbuMW0bNiAC9-MFX65LJAORQ@mail.gmail.com>
 <20150206132852.2974b66c@fsol>
 <CACac1F8xBxEpSXU4EtqohJSXYKMbH_48gZo0=1XycwN8jD7ttw@mail.gmail.com>
 <CADiSq7du-M+tCTvaU0PdRCRN01x=2JMFS7Bw3Q13a4x0aSaY-w@mail.gmail.com>
 <CAEbHw4bJOz1w82=Nt3ZU=3o4h-fx9FKzE+dfS4ng=N2-rNLoKQ@mail.gmail.com>
 <CALGmxE+E0O5y4J-NqEQomt-THEVWoPcnJSXrRXJrn-sN6XQKRA@mail.gmail.com>
Message-ID: <DFCFD59E-0C75-4B26-B31C-D887570D2368@yahoo.com>

On Feb 10, 2015, at 21:15, Chris Barker <chris.barker at noaa.gov> wrote:

> As far as implementation is concerned, I'm not sure what to do with Decimal: simply cast to floats? or do type-based dispatch? ( Decimal won't auto-convert a float to decimal, so it doesn't "just work" with the default float tolerance. But that's an implementation detail.

One more possibility:

Many of the functions in math don't work on Decimal. And Decimal has its own methods and functions for almost all of them. So you could add Decimal.isclose, and make math.isclose with Decimal values be a TypeError. And considering that a Decimal can handle ridiculous things like 1e-9999 as a relative tolerance, maybe that's even a good thing, not an unfortunate fallback?

(I'll read the rest of it later, but even if you didn't address a single one of my comments, I'm still probably at least +0.5.)
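A rough sketch of what a Decimal-aware version might look like (``decimal_isclose`` is hypothetical, not an actual or proposed API; it simply does the comparison in Decimal arithmetic so that extreme tolerances survive):

```python
from decimal import Decimal

# Hypothetical sketch: the "weak" symmetric test, computed entirely in
# Decimal so no intermediate is forced through binary float.
def decimal_isclose(a, b, rel_tol=Decimal("1e-9"), abs_tol=Decimal(0)):
    a, b = Decimal(a), Decimal(b)
    diff = abs(a - b)
    return diff <= max(rel_tol * max(abs(a), abs(b)), abs_tol)

print(decimal_isclose(Decimal("1.000000001"), Decimal(1)))  # True
print(decimal_isclose(Decimal("1.000000002"), Decimal(1)))  # False
```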

From abarnert at yahoo.com  Wed Feb 11 14:44:17 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Wed, 11 Feb 2015 05:44:17 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
Message-ID: <E2A96134-45B7-4384-91B6-084A287FDBD7@yahoo.com>

On Feb 10, 2015, at 23:21, Ian Lee <ianlee1521 at gmail.com> wrote:

> I mentioned this on the python-dev list [1] originally as a +1 to someone else suggesting the idea [2]. It also came up in a response to my post that I can't seem to find in the archives, so I've quoted it below [3].
> 
> As the subject says, the idea would be to add a "+" and "+=" operator to dict

There have been other suggestions in the past for a non-mutating equivalent to dict.update, whether it's spelled + or | or a method. I personally think the idea makes sense, but it's probably worth doing a bit of digging to see why those past suggestions failed. (If you're lucky, everyone likes the idea but nobody got around to implementing it, so all you have to do is write a patch and link to the previous discussion. If you're unlucky, someone made a good case that it's an attractive nuisance or a confusing API of something that you won't have an answer for--but at least then you'll know what you need to answer.)

> that would provide the following behavior:
> 
> >>> {'x': 1, 'y': 2} + {'z': 3}
> {'x': 1, 'y': 2, 'z': 3}
> 
> With the only potentially non obvious case I can see then is when there are duplicate keys, in which case the syntax could just be defined that last setter wins,

There's also the issue of what type you get when you add two Mappings of different types. And whether there should be an __radd__ that makes sure a legacy Mapping type plus a dict works. And whether the operator should be added to the Mapping ABC or just to the concrete class (and with a default implementation?). And whether it's worth writing a dict subclass that adds this method and putting it on PyPI as a backport (people writing 3.3+ code or 2.7/3.5 code can then just "from dict35 import dict35 as dict", but of course they still won't be able to add two dicts constructed from literals).
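A minimal sketch of such a backport subclass (``dict35`` is the hypothetical name from the sentence above, not an actual package):

```python
# Hypothetical dict subclass adding the proposed "+" semantics.
class dict35(dict):
    def __add__(self, other):
        if not isinstance(other, dict):
            return NotImplemented
        new = dict35(self)
        new.update(other)      # last setter wins, as proposed
        return new

    def __radd__(self, other):
        # Handles plain_dict + dict35, since plain dict has no __add__.
        if not isinstance(other, dict):
            return NotImplemented
        new = dict35(other)
        new.update(self)
        return new

print(dict35({'x': 1, 'y': 2}) + {'x': 3})  # {'x': 3, 'y': 2}
print({'x': 1} + dict35({'z': 3}))          # {'x': 1, 'z': 3}
```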

> e.g.:
> 
> >>> {'x': 1, 'y': 2} + {'x': 3}
> {'x': 3, 'y': 2}
> 
> Which is analogous to the example:
> 
> >>> new_dict = dict1.copy()
> >>> new_dict.update(dict2)
> 
> With "+=" then essentially ending up being an alias for ``dict.update(...)``.
> 
> I'd be happy to champion this as a PEP if the feedback / public opinion heads in that direction. 
> 
> 
> [1] https://mail.python.org/pipermail/python-dev/2015-February/138150.html
> [2] https://mail.python.org/pipermail/python-dev/2015-February/138116.html
> [3] John Wong -- 
>> Well looking at just list
>> a + b yields new list
>> a += b yields modified a
>> then there is also .extend in list. etc.  
>> so do we want to follow list's footstep? I like + because + is more natural to read. Maybe this needs to be a separate thread. I am actually amazed to remember dict + dict is not possible... there must be a reason (performance??) for this...
> 
> Cheers,
> 
> ~ Ian Lee
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150211/9d276da5/attachment-0001.html>

From j.wielicki at sotecware.net  Wed Feb 11 15:38:23 2015
From: j.wielicki at sotecware.net (Jonas Wielicki)
Date: Wed, 11 Feb 2015 15:38:23 +0100
Subject: [Python-ideas] Allow "assigning" to Ellipse to discard values.
In-Reply-To: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
Message-ID: <54DB695F.2080502@sotecware.net>

On 10.02.2015 22:01, Spencer Brown wrote:
> Basically, in any location where a variable name is provided for assignment (tuple unpacking, for loops, function parameters, etc) a variable name can be replaced by '...' to indicate that that value will not be used. This would be syntactically equivalent to del-ing the variable immediately after, except that the assignment never needs to be done.
> 
> Examples:
>>>> tup =  ('val1', 'val2', (123, 456, 789))
>>>> name, ..., (..., value, ...) = tup
> instead of:
>>>> name = tup[0]
>>>> value = tup[2][1]
> This shows slightly more clearly what format the tuple will be in, and provides some additional error-checking - the unpack will fail if the tuple is a different size or format.

I don't like that syntax at all. With _ it is much more readable. In
that context, ... rather looks like what *_ would do: Leaving out many
elements (even though that is not possible in python currently, at least
not in (*_, foo, *_); that only makes things worse).

This may be arbitrarily confusing. As others have said, just use

>>> tup =  ('val1', 'val2', (123, 456, 789))
>>> name, _, (_, value, _) = tup



regards,
jwi



From dholth at gmail.com  Wed Feb 11 16:20:26 2015
From: dholth at gmail.com (Daniel Holth)
Date: Wed, 11 Feb 2015 10:20:26 -0500
Subject: [Python-ideas] Allow "assigning" to Ellipse to discard values.
In-Reply-To: <54DB695F.2080502@sotecware.net>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <54DB695F.2080502@sotecware.net>
Message-ID: <CAG8k2+4o53qVbOzo4XmdchonFBTEfcWamByWfZ9bao0=XiHauw@mail.gmail.com>

How about zero characters between commas? In ES6 destructuring
assignment you'd write something like ,,c = (1,2,3)

On Wed, Feb 11, 2015 at 9:38 AM, Jonas Wielicki
<j.wielicki at sotecware.net> wrote:
> On 10.02.2015 22:01, Spencer Brown wrote:
>> Basically, in any location where a variable name is provided for assignment (tuple unpacking, for loops, function parameters, etc) a variable name can be replaced by '...' to indicate that that value will not be used. This would be syntactically equivalent to del-ing the variable immediately after, except that the assignment never needs to be done.
>>
>> Examples:
>>>>> tup =  ('val1', 'val2', (123, 456, 789))
>>>>> name, ..., (..., value, ...) = tup
>> instead of:
>>>>> name = tup[0]
>>>>> value = tup[2][1]
>> This shows slightly more clearly what format the tuple will be in, and provides some additional error-checking - the unpack will fail if the tuple is a different size or format.
>
> I don't like that syntax at all. With _ it is much more readable. In
> that context, ... rather looks like what *_ would do: Leaving out many
> elements (even though that is not possible in python currently, at least
> not in (*_, foo, *_); that only makes things worse).
>
> This may be arbitrarily confusing. As others have said, just use
>
>>>> tup =  ('val1', 'val2', (123, 456, 789))
>>>> name, _, (_, value, _) = tup
>
>
>
> regards,
> jwi
>
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/

From ron3200 at gmail.com  Wed Feb 11 16:27:11 2015
From: ron3200 at gmail.com (Ron Adam)
Date: Wed, 11 Feb 2015 09:27:11 -0600
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CALGmxE+E0O5y4J-NqEQomt-THEVWoPcnJSXrRXJrn-sN6XQKRA@mail.gmail.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <20150206000438.1adeb478@fsol>
 <CALGmxEJXJQTGdpjttg2LNmjJVTrbuMW0bNiAC9-MFX65LJAORQ@mail.gmail.com>
 <20150206132852.2974b66c@fsol>
 <CACac1F8xBxEpSXU4EtqohJSXYKMbH_48gZo0=1XycwN8jD7ttw@mail.gmail.com>
 <CADiSq7du-M+tCTvaU0PdRCRN01x=2JMFS7Bw3Q13a4x0aSaY-w@mail.gmail.com>
 <CAEbHw4bJOz1w82=Nt3ZU=3o4h-fx9FKzE+dfS4ng=N2-rNLoKQ@mail.gmail.com>
 <CALGmxE+E0O5y4J-NqEQomt-THEVWoPcnJSXrRXJrn-sN6XQKRA@mail.gmail.com>
Message-ID: <mbfscg$fdp$1@ger.gmane.org>



On 02/10/2015 11:15 PM, Chris Barker wrote:
> Large tolerances
> ----------------
>
> The most common use case is expected to be small tolerances -- on order of the
> default 1e-9. However there may be use cases where a user wants to know if two
> fairly disparate values are within a particular range of each other: "is a
> within 200% (rel_tol = 2.0) of b? In this case, the string test would never

string -> strong     "the strong test would never"

> indicate that two values are within that range of each other if one of them is
> zero. The strong case, however would use the larger (non-zero) value for the

strong -> weak     "The weak case, ..."

> test, and thus return true if one value is zero. For example: is 0 within 200%
> of 10? 200% of ten is 20, so the range within 200% of ten is -10 to +30. Zero
> falls within that range, so it will return True.


-Ron


From solipsis at pitrou.net  Wed Feb 11 16:46:15 2015
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 11 Feb 2015 16:46:15 +0100
Subject: [Python-ideas] Allow "assigning" to Ellipse to discard values.
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
Message-ID: <20150211164615.1fe33cab@fsol>

On Tue, 10 Feb 2015 13:23:58 -0800
Chris Barker <chris.barker at noaa.gov> wrote:
> On Tue, Feb 10, 2015 at 1:01 PM, Spencer Brown <spencerb21 at live.com> wrote:
> 
> > Basically, in any location where a variable name is provided for
> > assignment (tuple unpacking, for loops, function parameters, etc) a
> > variable name can be replaced by '...' to indicate that that value will not
> > be used. This would be syntactically equivalent to del-ing the variable
> > immediately after, except that the assignment never needs to be done.
> >
> > Examples:
> > >>> tup =  ('val1', 'val2', (123, 456, 789))
> > >>> name, ..., (..., value, ...) = tup
> >
> 
> I generally handle this by using a really simple no-meaning name, like"_" :
> 
> name, _, (_, value, __ ) = tup
> 
> granted, the _ sticks around and keeps the object alive, which may cause
> issues. But is it often a big deal?

The actual deal with that is when you are using gettext or a
similar I18N framework :-)

Regards

Antoine.



From chris.barker at noaa.gov  Wed Feb 11 17:27:39 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Wed, 11 Feb 2015 08:27:39 -0800
Subject: [Python-ideas] Allow "assigning" to Ellipse to discard values.
In-Reply-To: <20150211164615.1fe33cab@fsol>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <20150211164615.1fe33cab@fsol>
Message-ID: <CALGmxEKztWD0RcQzx-7qR-y5La2arxykeVm3P3h7wjWsS3zTAw@mail.gmail.com>

On Wed, Feb 11, 2015 at 7:46 AM, Antoine Pitrou <solipsis at pitrou.net> wrote:

> > name, _, (_, value, __ ) = tup
> >
> > granted, the _ sticks around and keeps the object alive, which may cause
> > issues. But is it often a big deal?
>
> The actual deal with that is when you are using gettext or a
> similar I18N framework :-)
>

then use a different simple  name.

I sometimes use tmp, also.

-Chris



-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150211/ce59a412/attachment.html>

From solipsis at pitrou.net  Wed Feb 11 17:30:06 2015
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 11 Feb 2015 17:30:06 +0100
Subject: [Python-ideas] Allow "assigning" to Ellipse to discard values.
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <20150211164615.1fe33cab@fsol>
 <CALGmxEKztWD0RcQzx-7qR-y5La2arxykeVm3P3h7wjWsS3zTAw@mail.gmail.com>
Message-ID: <20150211173006.24df9b90@fsol>

On Wed, 11 Feb 2015 08:27:39 -0800
Chris Barker <chris.barker at noaa.gov> wrote:
> On Wed, Feb 11, 2015 at 7:46 AM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> 
> > > name, _, (_, value, __ ) = tup
> > >
> > > granted, the _ sticks around and keeps the object alive, which may cause
> > > issues. But is it often a big deal?
> >
> > The actual deal with that is when you are using gettext or a
> > similar I18N framework :-)
> >
> 
> then use a different simple  name.

Well, thanks for stating the obvious :-)

Regards

Antoine.



From greg.ewing at canterbury.ac.nz  Wed Feb 11 21:06:55 2015
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Thu, 12 Feb 2015 09:06:55 +1300
Subject: [Python-ideas] Allow "assigning" to Ellipse to discard values.
In-Reply-To: <CAG8k2+4o53qVbOzo4XmdchonFBTEfcWamByWfZ9bao0=XiHauw@mail.gmail.com>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <54DB695F.2080502@sotecware.net>
 <CAG8k2+4o53qVbOzo4XmdchonFBTEfcWamByWfZ9bao0=XiHauw@mail.gmail.com>
Message-ID: <54DBB65F.1080108@canterbury.ac.nz>

Daniel Holth wrote:
> How about zero characters between commas? In ES6 destructuring
> assignment you'd write something like ,,c = (1,2,3)

Wouldn't play well with the syntax for 1-element tuples:

    c, = stuff

Is len(stuff) expected to be 1 or 2?

-- 
Greg

From ncoghlan at gmail.com  Thu Feb 12 00:01:22 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 12 Feb 2015 09:01:22 +1000
Subject: [Python-ideas] Allow "assigning" to Ellipse to discard values.
In-Reply-To: <20150211164615.1fe33cab@fsol>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <20150211164615.1fe33cab@fsol>
Message-ID: <CADiSq7cTrFzhy30zjn0Wz6wacqSQH_1iyw==UJomCL6b8xr9_Q@mail.gmail.com>

On 12 February 2015 at 01:46, Antoine Pitrou <solipsis at pitrou.net> wrote:
> On Tue, 10 Feb 2015 13:23:58 -0800
> Chris Barker <chris.barker at noaa.gov> wrote:
>> On Tue, Feb 10, 2015 at 1:01 PM, Spencer Brown <spencerb21 at live.com> wrote:
>>
>> > Basically, in any location where a variable name is provided for
>> > assignment (tuple unpacking, for loops, function parameters, etc) a
>> > variable name can be replaced by '...' to indicate that that value will not
>> > be used. This would be syntactically equivalent to del-ing the variable
>> > immediately after, except that the assignment never needs to be done.
>> >
>> > Examples:
>> > >>> tup =  ('val1', 'val2', (123, 456, 789))
>> > >>> name, ..., (..., value, ...) = tup
>> >
>>
>> I generally handle this by using a really simple no-meaning name, like"_" :
>>
>> name, _, (_, value, __ ) = tup
>>
>> granted, the _ sticks around and keeps the object alive, which may cause
>> issues. But is it often a big deal?
>
> The actual deal with that is when you are using gettext or a
> similar I18N framework :-)

I switched my own dummy variable recommendation to a double-underscore
years ago for exactly that reason. It (very) mildly irritates me that
so many sources still recommend the single-underscore that clobbers
the conventional name for the gettext translation builtin :)

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ethan at stoneleaf.us  Thu Feb 12 00:06:03 2015
From: ethan at stoneleaf.us (Ethan Furman)
Date: Wed, 11 Feb 2015 15:06:03 -0800
Subject: [Python-ideas] Allow "assigning" to Ellipse to discard values.
In-Reply-To: <CADiSq7cTrFzhy30zjn0Wz6wacqSQH_1iyw==UJomCL6b8xr9_Q@mail.gmail.com>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <20150211164615.1fe33cab@fsol>
 <CADiSq7cTrFzhy30zjn0Wz6wacqSQH_1iyw==UJomCL6b8xr9_Q@mail.gmail.com>
Message-ID: <54DBE05B.8020509@stoneleaf.us>

On 02/11/2015 03:01 PM, Nick Coghlan wrote:
> On 12 February 2015 at 01:46, Antoine Pitrou wrote:
>>
>> The actual deal with that is when you are using gettext or a
>> similar I18N framework :-)
> 
> I switched my own dummy variable recommendation to a double-underscore
> years ago for exactly that reason. It (very) mildly irritates me that
> so many sources still recommend the single-underscore that clobbers
> the conventional name for the gettext translation builtin :)

Python 3.4.3rc1+ (3.4:645f3d750be1, Feb 10 2015, 13:25:59)
[GCC 4.7.3] on linux
Type "help", "copyright", "credits" or "license" for more information.
--> _
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name '_' is not defined
--> import gettext
--> gettext._
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute '_'

What globalness are you talking about?

--
~Ethan~

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: OpenPGP digital signature
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150211/7b344e0e/attachment.sig>

From eric at trueblade.com  Thu Feb 12 00:13:15 2015
From: eric at trueblade.com (Eric V. Smith)
Date: Wed, 11 Feb 2015 18:13:15 -0500
Subject: [Python-ideas] Allow "assigning" to Ellipse to discard values.
In-Reply-To: <54DBE05B.8020509@stoneleaf.us>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <20150211164615.1fe33cab@fsol>
 <CADiSq7cTrFzhy30zjn0Wz6wacqSQH_1iyw==UJomCL6b8xr9_Q@mail.gmail.com>
 <54DBE05B.8020509@stoneleaf.us>
Message-ID: <54DBE20B.4000901@trueblade.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 2/11/2015 6:06 PM, Ethan Furman wrote:
> On 02/11/2015 03:01 PM, Nick Coghlan wrote:
>> On 12 February 2015 at 01:46, Antoine Pitrou wrote:
>>> 
>>> The actual deal with that is when you are using gettext or a 
>>> similar I18N framework :-)
>> 
>> I switched my own dummy variable recommendation to a
>> double-underscore years ago for exactly that reason. It (very)
>> mildly irritates me that so many sources still recommend the
>> single-underscore that clobbers the conventional name for the
>> gettext translation builtin :)
> 
> Python 3.4.3rc1+ (3.4:645f3d750be1, Feb 10 2015, 13:25:59) [GCC
> 4.7.3] on linux Type "help", "copyright", "credits" or "license"
> for more information. --> _ Traceback (most recent call last): File
> "<stdin>", line 1, in <module> NameError: name '_' is not defined 
> --> import gettext --> gettext._ Traceback (most recent call
> last): File "<stdin>", line 1, in <module> AttributeError: 'module'
> object has no attribute '_'
> 
> What globalness are you talking about?

https://docs.python.org/2/library/gettext.html#gettext.install

>>> import gettext gettext.install('foo') _
<bound method NullTranslations.gettext of <gettext.NullTranslations
instance at 0xffeb65cc>>
>>> 

- -- 
Eric.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQEcBAEBAgAGBQJU2+ILAAoJENxauZFcKtNxmPgIAJbnDJoxNk00zAU27Wmdod9V
RucboE4CyuIPD6VA0x7f0Tk7l93gBaOe74ccUmaW4smpH/u/CDv5CbXWxBX748n8
88piSbCIlUQBgKVIbITgtjCOcFm96xJmDyJjApRcxwkB+QU289d/00Ef5kJ5ifLH
rM17e0OqZXdeBO+4AN0BIErftBhoA+vTsq476ubrWAIQA2+qLSNalUDhkiUpHJNn
tMkznWdEaquD4Ah1ScseActdCXyUkPDqhIzYQh05IKCfR1yahIL13RtsSyWMxBo2
ocZr+vrfQZGXzkyCHOAb2lAmxRcTaNyWzwbZZHocQKhMeZUMs5xBKNhXUeHmehM=
=cNcU
-----END PGP SIGNATURE-----

From eric at trueblade.com  Thu Feb 12 00:15:23 2015
From: eric at trueblade.com (Eric V. Smith)
Date: Wed, 11 Feb 2015 18:15:23 -0500
Subject: [Python-ideas] Allow "assigning" to Ellipse to discard values.
In-Reply-To: <54DBE20B.4000901@trueblade.com>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <20150211164615.1fe33cab@fsol>
 <CADiSq7cTrFzhy30zjn0Wz6wacqSQH_1iyw==UJomCL6b8xr9_Q@mail.gmail.com>
 <54DBE05B.8020509@stoneleaf.us> <54DBE20B.4000901@trueblade.com>
Message-ID: <54DBE28B.2020406@trueblade.com>

On 2/11/2015 6:13 PM, Eric V. Smith wrote:
> On 2/11/2015 6:06 PM, Ethan Furman wrote:

>> What globalness are you talking about?
> 
> https://docs.python.org/2/library/gettext.html#gettext.install
> 
>>>> import gettext gettext.install('foo') _
> <bound method NullTranslations.gettext of <gettext.NullTranslations
> instance at 0xffeb65cc>>
>>>>

Thanks, Thunderbird, for mangling that for me. Hopefully this works better:

>>> import gettext
>>> gettext.install('foo')
>>> _
<bound method NullTranslations.gettext of <gettext.NullTranslations
instance at 0xffeb65cc>>
>>>


-- 
Eric.

From ethan at stoneleaf.us  Thu Feb 12 00:21:03 2015
From: ethan at stoneleaf.us (Ethan Furman)
Date: Wed, 11 Feb 2015 15:21:03 -0800
Subject: [Python-ideas] Allow "assigning" to Ellipse to discard values.
In-Reply-To: <54DBE28B.2020406@trueblade.com>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <20150211164615.1fe33cab@fsol>
 <CADiSq7cTrFzhy30zjn0Wz6wacqSQH_1iyw==UJomCL6b8xr9_Q@mail.gmail.com>
 <54DBE05B.8020509@stoneleaf.us> <54DBE20B.4000901@trueblade.com>
 <54DBE28B.2020406@trueblade.com>
Message-ID: <54DBE3DF.4050200@stoneleaf.us>

On 02/11/2015 03:15 PM, Eric V. Smith wrote:

> Thanks, Thunderbird, for mangling that for me. Hopefully this works better:
> 
>--> import gettext
>--> gettext.install('foo')
>--> _
> <bound method NullTranslations.gettext of gettext.NullTranslations
> instance at 0xffeb65cc>>

Oh, cool.  It is somewhat amusing to see the stdlib doing something we are usually strongly advised against -- adding
things to __builtins__ ;) . (Yes, I agree it is a good case... still amusing.)

--
~Ethan~

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: OpenPGP digital signature
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150211/1300d8b9/attachment-0001.sig>

From ethan at stoneleaf.us  Thu Feb 12 01:38:39 2015
From: ethan at stoneleaf.us (Ethan Furman)
Date: Wed, 11 Feb 2015 16:38:39 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
Message-ID: <54DBF60F.3000600@stoneleaf.us>

On 02/11/2015 04:27 AM, Donald Stufft wrote:
> 
> I?d really like this change and I think that it makes sense. The only thing I?d change is that I think the | operator
> makes more sense than +. dicts are more like sets than they are like lists so a union operator makes more sense I think.

Maybe I'm just not steeped enough in CS, but when I want to combine two things together, my first reflex is always '+'.

I suppose I could come around to '|', though -- it does ease the tension around the behavior of duplicate keys.

--
~Ethan~

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: OpenPGP digital signature
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150211/757f65d4/attachment.sig>

From skip.montanaro at gmail.com  Thu Feb 12 04:24:58 2015
From: skip.montanaro at gmail.com (Skip Montanaro)
Date: Wed, 11 Feb 2015 21:24:58 -0600
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <54DBF60F.3000600@stoneleaf.us>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
Message-ID: <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>

Addition in the usual sense of the word wouldn't be commutative for
dictionaries. In particular, it's hard to see how you could define addition
so these two expressions are equal:

{'a': 1} + {'a': 2}

{'a': 2} + {'a': 1}

'+=' is no problem.
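To spell that out with a hypothetical ``dict_add`` helper built on ``copy()``/``update()`` (since dict has no "+" today):

```python
# The proposed semantics: copy the left operand, then update with the
# right operand, so the last setter wins.
def dict_add(d1, d2):
    new = d1.copy()
    new.update(d2)
    return new

print(dict_add({'a': 1}, {'a': 2}))  # {'a': 2}
print(dict_add({'a': 2}, {'a': 1}))  # {'a': 1} -- order matters
```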

Skip
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150211/9f281152/attachment.html>

From tjreedy at udel.edu  Thu Feb 12 04:33:15 2015
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed, 11 Feb 2015 22:33:15 -0500
Subject: [Python-ideas] Gettext syntax (was Re: Allow "assigning" to ...)
In-Reply-To: <CADiSq7cTrFzhy30zjn0Wz6wacqSQH_1iyw==UJomCL6b8xr9_Q@mail.gmail.com>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <20150211164615.1fe33cab@fsol>
 <CADiSq7cTrFzhy30zjn0Wz6wacqSQH_1iyw==UJomCL6b8xr9_Q@mail.gmail.com>
Message-ID: <mbh6u0$fr7$1@ger.gmane.org>

On 2/11/2015 6:01 PM, Nick Coghlan wrote:

> I switched my own dummy variable recommendation to a double-underscore
> years ago for exactly that reason. It (very) mildly irritates me that
> so many sources still recommend the single-underscore that clobbers
> the conventional name for the gettext translation builtin :)

People are just copying python itself, which defines _ as a throwaway 
name.  I suspect it did so before gettext.

By reusing _, gettext made interactive experimentation tricky.

 >>> import gettext
 >>> gettext.install('foo')
 >>> _
<bound method NullTranslations.gettext of <gettext.NullTranslations 
object at 0x000000000361E978>>
 >>> _('abc')
'abc'
 >>> _
'abc'  # whoops

A possible solution would be to give 3.x str a __pos__ or __neg__ 
method, so +'python' or -'python' would mean the translation of 'python' 
(the snake).  This would be even *less* intrusive and easier to write 
than _('python').

Since () has equal precedence with . while + and - are lower, we would 
have the following equivalences for mixed expressions with '.':

_(snake.format()) == -snake.format()  # format, then translate
                # or -(snake.lower())
_(snake).format() == (-snake).format()  # translate, then format
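
For anyone who wants to experiment, the idea can be prototyped today on a
str subclass (the built-in str itself can't be patched); TStr and t are
illustrative names, and NullTranslations stands in for a real catalog:

```python
import gettext

# NullTranslations stands in for a real .mo catalog; it returns the
# message unchanged, which is also gettext's fallback behaviour.
t = gettext.NullTranslations()

class TStr(str):
    """Illustrative str subclass where unary minus means 'translate'."""
    def __neg__(self):
        return t.gettext(self)

label = -TStr('python')  # the translation of 'python' (the snake)
```

Note that the precedence point above matters in the prototype too:
-TStr('python').upper() negates the *result* of .upper(), which is a plain
str without a __neg__.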

-- 
Terry Jan Reedy


From ben+python at benfinney.id.au  Thu Feb 12 04:40:45 2015
From: ben+python at benfinney.id.au (Ben Finney)
Date: Thu, 12 Feb 2015 14:40:45 +1100
Subject: [Python-ideas] Gettext syntax (was Re: Allow "assigning" to ...)
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <20150211164615.1fe33cab@fsol>
 <CADiSq7cTrFzhy30zjn0Wz6wacqSQH_1iyw==UJomCL6b8xr9_Q@mail.gmail.com>
 <mbh6u0$fr7$1@ger.gmane.org>
Message-ID: <85y4o3dcg2.fsf@benfinney.id.au>

Terry Reedy <tjreedy at udel.edu> writes:

> People are just copying python itself, which defines _ as a throwaway
> name. I suspect it did so before gettext.
>
> By reusing _, gettext made interactive experimentation tricky.

Gettext has been using the name “_” since 1995, which was in Python's
infancy. The name “_” is pretty obvious, I'm sure it was in use before
Python came along.

So, rather than the point-the-finger phrasing you give above, I'd say
instead:

By choosing the name “_”, both Gettext and Python (and any of numerous
other systems which chose that name) made a name collision likely.

-- 
 \      “Every man would like to be God, if it were possible; some few |
  `\          find it difficult to admit the impossibility.” —Bertrand |
_o__)                    Russell, _Power: A New Social Analysis_, 1938 |
Ben Finney


From rosuav at gmail.com  Thu Feb 12 04:59:28 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Thu, 12 Feb 2015 14:59:28 +1100
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
Message-ID: <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>

On Thu, Feb 12, 2015 at 2:24 PM, Skip Montanaro
<skip.montanaro at gmail.com> wrote:
> Addition in the usual sense of the word wouldn't be commutative for
> dictionaries. In particular, it's hard to see how you could define addition
> so these two expressions are equal:
>
> {'a': 1} + {'a': 2}
>
> {'a': 2} + {'a': 1}
>
> '+=' is no problem.

Does it have to be? It isn't commutative for strings or tuples either.
Addition of complex objects does at times depend on order (though as
we saw in another thread, it can be very confusing if the _type_ can
change if you switch the operands), so I would have no problem with
the right-hand operand "winning" when there's a key collision.
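
A sketch of those semantics on a dict subclass (MergeDict is an
illustrative name, not a proposal):

```python
class MergeDict(dict):
    """Illustrative dict subclass: '+' merges, right-hand operand wins."""
    def __add__(self, other):
        if not isinstance(other, dict):
            return NotImplemented
        new = MergeDict(self)   # copy the left operand
        new.update(other)       # right-hand keys win on collision
        return new

a = MergeDict({'a': 1, 'b': 2})
b = MergeDict({'a': 9, 'c': 3})
assert a + b == {'a': 9, 'b': 2, 'c': 3}   # right side wins for 'a'
assert b + a == {'a': 1, 'b': 2, 'c': 3}   # so '+' is not commutative
assert (a + b) + b == a + (b + b)          # but it is associative
```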

ChrisA

From greg.ewing at canterbury.ac.nz  Thu Feb 12 05:06:33 2015
From: greg.ewing at canterbury.ac.nz (Greg)
Date: Thu, 12 Feb 2015 17:06:33 +1300
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
Message-ID: <54DC26C9.4060801@canterbury.ac.nz>

On 12/02/2015 4:59 p.m., Chris Angelico wrote:
>> Addition in the usual sense of the word wouldn't be commutative for
>> dictionaries.
>
> Does it have to be? It isn't commutative for strings or tuples either.

I think associativity is the property in question, and it
does hold for string and tuple concatenation.

Dict addition could be made associative by raising an
exception on duplicate keys.

Another way would be to define {'a':x} + {'a':y} as
{'a': x + y}, but that would probably upset a lot of
people. :-)

-- 
Greg


From schesis at gmail.com  Thu Feb 12 05:23:44 2015
From: schesis at gmail.com (Zero Piraeus)
Date: Thu, 12 Feb 2015 01:23:44 -0300
Subject: [Python-ideas] Gettext syntax (was Re: Allow "assigning" to ...)
In-Reply-To: <mbh6u0$fr7$1@ger.gmane.org>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <20150211164615.1fe33cab@fsol>
 <CADiSq7cTrFzhy30zjn0Wz6wacqSQH_1iyw==UJomCL6b8xr9_Q@mail.gmail.com>
 <mbh6u0$fr7$1@ger.gmane.org>
Message-ID: <20150212042344.GA5228@piedra>

:

On Wed, Feb 11, 2015 at 10:33:15PM -0500, Terry Reedy wrote:
> 
> A possible solution would be to give 3.x str a __pos__ or __neg__
> method, so +'python' or -'python' would mean the translation of
> 'python' (the snake).  This would be even *less* intrusive and
> easier to write than _('python').

~'python' is a nicer visual representation of translation IMO - and
__invert__ could be retconned to mean INternationalisation-conVERT in
the case of strings ;-)

 -[]z.

-- 
Zero Piraeus: ad referendum
http://etiol.net/pubkey.asc

From abarnert at yahoo.com  Thu Feb 12 05:58:52 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Wed, 11 Feb 2015 20:58:52 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <54DC26C9.4060801@canterbury.ac.nz>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC26C9.4060801@canterbury.ac.nz>
Message-ID: <AC45F2D3-2507-4D5A-832B-83124FD98FC9@yahoo.com>

On Feb 11, 2015, at 20:06, Greg <greg.ewing at canterbury.ac.nz> wrote:

> On 12/02/2015 4:59 p.m., Chris Angelico wrote:
>>> Addition in the usual sense of the word wouldn't be commutative for
>>> dictionaries.
>> 
>> Does it have to be? It isn't commutative for strings or tuples either.
> 
> I think associativity is the property in question, and it
> does hold for string and tuple concatenation.

No, I think commutativity is the property in question, because dict merging with the right side winning is already associative but not commutative--exactly like string and tuple addition.

Commutative means ab = ba. Associative means a(bc) = (ab)c.

Also, using addition for noncommutative operations is not some anti-mathematical perversion by Python; mathematicians (and, I believe, physicists even more so) generally use the same symbols and names for addition and multiplication of noncommutative algebras like the quaternions (and even nonassociative, like the octonions) that they use for the commutative reals and complexes. In fact, the idea of noncommutative addition or multiplication is kind of the starting point of 19th century algebra and everything that follows from it.

> Dict addition could be made associative by raising an
> exception on duplicate keys.

That would make it commutative.

> Another way would be to define {'a':x} + {'a':y} as
> {'a': x + y}, but that would probably upset a lot of
> people. :-)

Of course that's exactly what Counter does. And it also defines | to give {'a': max(x, y)}. Which is why it's important to consider the __radd__/__ror__ issue. What should happen if you add a dict to a Counter, or vice-versa?
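
For reference, this is what Counter actually does today, plus the mixed
dict/Counter case raised above:

```python
from collections import Counter

a = Counter({'a': 1, 'b': 4})
b = Counter({'a': 2})

assert a + b == Counter({'a': 3, 'b': 4})   # '+' adds the values
assert a | b == Counter({'a': 2, 'b': 4})   # '|' takes the per-key max

# Mixing a plain dict with a Counter via '+' fails today: dict has no
# __add__, and Counter.__add__ returns NotImplemented for non-Counters.
try:
    {'a': 1} + b
except TypeError:
    pass
```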

From Nikolaus at rath.org  Thu Feb 12 06:10:19 2015
From: Nikolaus at rath.org (Nikolaus Rath)
Date: Wed, 11 Feb 2015 21:10:19 -0800
Subject: [Python-ideas] Allow "assigning" to Ellipse to discard values.
In-Reply-To: <54DBE3DF.4050200@stoneleaf.us> (Ethan Furman's message of "Wed, 
 11 Feb 2015 15:21:03 -0800")
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <20150211164615.1fe33cab@fsol>
 <CADiSq7cTrFzhy30zjn0Wz6wacqSQH_1iyw==UJomCL6b8xr9_Q@mail.gmail.com>
 <54DBE05B.8020509@stoneleaf.us> <54DBE20B.4000901@trueblade.com>
 <54DBE28B.2020406@trueblade.com> <54DBE3DF.4050200@stoneleaf.us>
Message-ID: <87a90ju344.fsf@vostro.rath.org>

On Feb 11 2015, Ethan Furman <ethan-gcWI5d7PMXnvaiG9KC9N7Q at public.gmane.org> wrote:
> On 02/11/2015 03:15 PM, Eric V. Smith wrote:
>
>> Thanks, Thunderbird, for mangling that for me. Hopefully this works better:
>> 
>>--> import gettext
>>--> gettext.install('foo')
>>--> _
>> <bound method NullTranslations.gettext of gettext.NullTranslations
>> instance at 0xffeb65cc>>
>
> Oh, cool.  It is somewhat amusing to see the stdlib doing something we
> are usually strongly advised against -- adding things to __builtins__
> ;) . (Yes, I agree it is a good case... still amusing.)

Why is that a good case? What's wrong with e.g.
 _ = gettext.install('foo')
or
 from gettext import gettext as _
?

Thanks,
Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

             “Time flies like an arrow, fruit flies like a Banana.”

From mal at egenix.com  Thu Feb 12 09:51:22 2015
From: mal at egenix.com (M.-A. Lemburg)
Date: Thu, 12 Feb 2015 09:51:22 +0100
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>	<3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>	<54DBF60F.3000600@stoneleaf.us>	<CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
Message-ID: <54DC698A.2090909@egenix.com>

On 12.02.2015 04:59, Chris Angelico wrote:
> On Thu, Feb 12, 2015 at 2:24 PM, Skip Montanaro
> <skip.montanaro at gmail.com> wrote:
>> Addition in the usual sense of the word wouldn't be commutative for
>> dictionaries. In particular, it's hard to see how you could define addition
>> so these two expressions are equal:
>>
>> {'a': 1} + {'a': 2}
>>
>> {'a': 2} + {'a': 1}
>>
>> '+=' is no problem.
> 
> Does it have to be? It isn't commutative for strings or tuples either.
> Addition of complex objects does at times depend on order (though as
> we saw in another thread, it can be very confusing if the _type_ can
> change if you switch the operands), so I would have no problem with
> the right-hand operand "winning" when there's a key collision.

Solving that is simple: you define key collisions as having an
undefined result. Then '+' for dicts is commutative.

However, I don't really see the point in having an operation that
takes two dictionaries, creates a new empty one and updates this
with both sides of the operand. It may be theoretically useful,
but it results in the same poor performance you have in string
concatenation.

In applications, you normally just need the update functionality
for dictionaries. If you do need a copy, you can create a copy
explicitly - but those cases are usually rare.

So +1 on the '+=' syntax, -1 on '+' for dicts.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Feb 12 2015)
>>> Python Projects, Coaching and Consulting ...  http://www.egenix.com/
>>> mxODBC Plone/Zope Database Adapter ...       http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________

::::: Try our mxODBC.Connect Python Database Interface for free ! ::::::

   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
    D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
           Registered at Amtsgericht Duesseldorf: HRB 46611
               http://www.egenix.com/company/contact/

From me at the-compiler.org  Thu Feb 12 09:59:40 2015
From: me at the-compiler.org (Florian Bruhin)
Date: Thu, 12 Feb 2015 09:59:40 +0100
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <54DC698A.2090909@egenix.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com>
Message-ID: <20150212085939.GO24753@tonks>

* M.-A. Lemburg <mal at egenix.com> [2015-02-12 09:51:22 +0100]:
> However, I don't really see the point in having an operation that
> takes two dictionaries, creates a new empty one and updates this
> with both sides of the operand. It may be theoretically useful,
> but it results in the same poor performance you have in string
> concatenation.
> 
> In applications, you normally just need the update functionality
> for dictionaries. If you do need a copy, you can create a copy
> explicitly - but those cases are usually rare.
> 
> So +1 on the '+=' syntax, -1 on '+' for dicts.

I think it'd be rather confusing if += works but + does not.

Regarding duplicate keys, IMHO only two options make sense:

1) Raise an exception
2) The right-hand side overrides keys in the left-hand side.

I think #1 would prevent many useful cases where + could be used
instead of .update(), so +1 for #2.

Florian

-- 
http://www.the-compiler.org | me at the-compiler.org (Mail/XMPP)
   GPG: 916E B0C8 FD55 A072 | http://the-compiler.org/pubkey.asc
         I love long mails! | http://email.is-not-s.ms/

From mal at egenix.com  Thu Feb 12 10:00:00 2015
From: mal at egenix.com (M.-A. Lemburg)
Date: Thu, 12 Feb 2015 10:00:00 +0100
Subject: [Python-ideas] Gettext syntax (was Re: Allow "assigning" to ...)
In-Reply-To: <mbh6u0$fr7$1@ger.gmane.org>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>	<CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>	<20150211164615.1fe33cab@fsol>	<CADiSq7cTrFzhy30zjn0Wz6wacqSQH_1iyw==UJomCL6b8xr9_Q@mail.gmail.com>
 <mbh6u0$fr7$1@ger.gmane.org>
Message-ID: <54DC6B90.4090004@egenix.com>

On 12.02.2015 04:33, Terry Reedy wrote:
> A possible solution would be to give 3.x str a __pos__ or __neg__ method, so +'python' or -'python'
> would mean the translation of 'python' (the snake).  This would be even *less* intrusive and easier
> to write than _('python').

Doesn't read well:

label = +'I like' + +'Python'

and this is even worse, IMO:

label = -'I like' + -'Python'


I think the proper way of doing this would be to add a string
literal modifier like we have with r'' for raw strings,
e.g. i'Internationalized String'.

The Python compiler would then map this to a call to
sys.i18n like so:

i'Internationalized String' -> sys.i18n('Internationalized String')

and applications could set sys.i18n to whatever they use for
internationalization.

Python itself would default to this:

def i18n(x):
    return x

sys.i18n = i18n


The result looks much nicer:

label = i'I like' + i'Python'


-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Feb 12 2015)
>>> Python Projects, Coaching and Consulting ...  http://www.egenix.com/
>>> mxODBC Plone/Zope Database Adapter ...       http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________

::::: Try our mxODBC.Connect Python Database Interface for free ! ::::::

   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
    D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
           Registered at Amtsgericht Duesseldorf: HRB 46611
               http://www.egenix.com/company/contact/

From ianlee1521 at gmail.com  Thu Feb 12 10:07:52 2015
From: ianlee1521 at gmail.com (Ian Lee)
Date: Thu, 12 Feb 2015 01:07:52 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <20150212085939.GO24753@tonks>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
Message-ID: <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>

Alright, I've tried to gather up all of the feedback and organize it in
something approaching the alpha draft of a PEP might look like::


Proposed New Methods on dict
============================

Adds two dicts together, returning a new object with the type of the left hand
operand. This would be roughly equivalent to calling:

    >>> new_dict = old_dict.copy()
    >>> new_dict.update(other_dict)

Where ``new_dict`` is the same type as ``old_dict``, but possibly not the same
as ``other_dict``.

__add__
-------

    >>> foo = {'a': 1, 'b': 'xyz'}
    >>> bar = {'a': 5.0, 'c': object()}
    >>> baz = foo + bar
    >>> foo
    {'a': 1, 'b': 'xyz'}
    >>> bar
    {'a': 5.0, 'c': object()}
    >>> baz
    {'a': 5.0, 'b': 'xyz', 'c': object()}

__iadd__
--------

    >>> foo = {'a': 1, 'b': 'xyz'}
    >>> bar = {'a': 5.0, 'c': object()}
    >>> foo += bar
    >>> foo
    {'a': 5.0, 'b': 'xyz', 'c': object()}
    >>> bar
    {'a': 5.0, 'c': object()}

__radd__
--------

The reverse add was mentioned in [2], [5]. I'm not sure I have the following
example exactly right, particularly the type. My initial thought is to have it
come out as the type of the left hand operand. So, assuming LegacyMap has no
``__add__`` method:

    >>> foo = LegacyMap({'a': 1, 'b': 'xyz'})
    >>> bar = dict({'a': 5.0, 'c': object()})
    >>> baz = foo + bar  # e.g. bar.__radd__(foo)
    >>> baz
    {'a': 5.0, 'b': 'xyz', 'c': object()}
    >>> type(baz) == LegacyMap
    True


Considerations
==============

Key Collisions
--------------

When there is a key collision, handle it as ``dict.update()`` currently does,
namely that the right hand operand "wins".

Adding mappings of different types
----------------------------------

Here is what currently happens in a few cases in Python 3.4, given:

    >>> class A(dict): pass
    >>> class B(dict): pass
    >>> foo = A({'a': 1, 'b': 'xyz'})
    >>> bar = B({'a': 5.0, 'c': object()})

Currently (this surprised me actually... I guess it gets the parent class?):

    >>> baz = foo.copy()
    >>> type(baz)
    dict

Currently:

    >>> foo.update(bar)
    >>> foo
    {'a': 5.0, 'b': 'xyz', 'c': object()}
    >>> type(foo)
    A

Idea about '+' adding values
----------------------------

The idea of ``{'x': 1} + {'x': 2} == {'x': 3}`` was mentioned [3], [4] and
seems to
not have had a lot of support. In particular, this goes against the current
usage of the ``dict.update()`` method, where:

    >>> d = {'x': 1}
    >>> d.update({'x': 2})
    >>> d
    {'x': 2}

'+' vs '|' Operator
-------------------

If I was completely uninitiated, I would probably reach for the '+' first, but
I don't feel this is the crux of the issue...

Backport to PyPI
----------------

> And whether it's worth writing a dict subclass that adds this method and
> putting it on PyPI as a backport (people writing 3.3+ code or 2.7/3.5 code
> can then just "from dict35 import dict35 as dict", but of course they still
> won't be able to add two dicts constructed from literals).
-- Andrew Barnert [2]

Sure, I'd be willing to do this.

References
==========

[1] https://mail.python.org/pipermail/python-ideas/2014-July/028440.html
[2] https://mail.python.org/pipermail/python-ideas/2015-February/031755.html
[3] https://mail.python.org/pipermail/python-ideas/2015-February/031773.html
[4] https://mail.python.org/pipermail/python-ideas/2014-July/028425.html

Other Discussions
-----------------

As best I can tell from reading through some of these threads, there was never
really a conclusion, and instead the discussions eventually fizzled out.

https://mail.python.org/pipermail/python-ideas/2014-July/028424.html
https://mail.python.org/pipermail/python-ideas/2011-December/013227.html
https://mail.python.org/pipermail/python-ideas/2013-June/021140.html



~ Ian Lee

On Thu, Feb 12, 2015 at 12:59 AM, Florian Bruhin <me at the-compiler.org>
wrote:

> * M.-A. Lemburg <mal at egenix.com> [2015-02-12 09:51:22 +0100]:
> > However, I don't really see the point in having an operation that
> > takes two dictionaries, creates a new empty one and updates this
> > with both sides of the operand. It may be theoretically useful,
> > but it results in the same poor performance you have in string
> > concatenation.
> >
> > In applications, you normally just need the update functionality
> > for dictionaries. If you do need a copy, you can create a copy
> > explicitly - but those cases are usually rare.
> >
> > So +1 on the '+=' syntax, -1 on '+' for dicts.
>
> I think it'd be rather confusing if += works but + does not.
>
> Regarding duplicate keys, IMHO only two options make sense:
>
> 1) Raise an exception
> 2) The right-hand side overrides keys in the left-hand side.
>
> I think #1 would prevent many useful cases where + could be used
> instead of .update(), so +1 for #2.
>
> Florian
>
> --
> http://www.the-compiler.org | me at the-compiler.org (Mail/XMPP)
>    GPG: 916E B0C8 FD55 A072 | http://the-compiler.org/pubkey.asc
>          I love long mails! | http://email.is-not-s.ms/
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>

From lkb.teichmann at gmail.com  Thu Feb 12 11:09:24 2015
From: lkb.teichmann at gmail.com (Martin Teichmann)
Date: Thu, 12 Feb 2015 11:09:24 +0100
Subject: [Python-ideas] A (meta)class algebra
Message-ID: <CAK9R32Rvf-3mu_bd63wTmVJAaEb47_4+S++kuXR51a73=k7VLQ@mail.gmail.com>

Hi list,

metaclasses are a really cool thing in python. Unfortunately, it can
easily get out of hand if you use them too much, once one wants to
inherit from two classes with different metaclasses one has to go
some lengths to get things going again. On one hand this is
unavoidable, as python has no way to guess how two metaclasses
are to be mixed.

Often, however, two metaclasses are designed to work together.
But currently, there is no way to tell the metaclass machinery how
to mix them.

I propose to use the + operator to mix metaclasses. This means
one can simply overwrite __add__ to get a new metaclass mixing
behavior.

Let's look at how metaclass determination works right now. It is done
in typeobject.c. I'm giving here the python equivalent of the function
doing it:

def CalculateMetaclass(metatype, type):
    winner = metatype
    for b in type.__bases__:
        if issubclass(winner, b):
            continue
        elif issubclass(b, winner):
            winner = b
            continue
        else:
            raise TypeError("metaclass conflict")
    return winner

I would replace that with the following function, which uses addition
instead of issubclass:

def CalculateMetaclass(metatype, type):
    winner = metatype
    for b in type.__bases__:
        try:
            winner = winner + b
        except TypeError:
            raise TypeError("metaclass conflict")
    return winner

Now the default implementation of __add__ in type would be
something like:

def __add__(self, other):
    if issubclass(self, other):
        return self
    return NotImplemented

def __radd__(self, other):
    if issubclass(self, other):
        return self
    return NotImplemented

This gives the default behavior for all metaclasses inheriting type.
They can be easily overwritten to get a different metaclass mixing
behavior.
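
For comparison, today the conflict has to be resolved by writing the mixed
metaclass by hand (MetaA, MetaB and MetaAB are illustrative names):

```python
class MetaA(type): pass
class MetaB(type): pass

# Without this hand-written mix, "class C(A, B)" below would raise
# "TypeError: metaclass conflict".
class MetaAB(MetaA, MetaB):
    """Hand-written combination of the two metaclasses."""

class A(metaclass=MetaA): pass
class B(metaclass=MetaB): pass

class C(A, B, metaclass=MetaAB):  # OK: MetaAB subclasses both winners
    pass

assert type(C) is MetaAB
```

Under the proposal, the authors of MetaA and MetaB could instead declare
how to combine via __add__ (defined on the metaclasses' own metaclass),
sparing every user of the pair from writing MetaAB by hand.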

Greetings

Martin

From p.f.moore at gmail.com  Thu Feb 12 11:19:47 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Thu, 12 Feb 2015 10:19:47 +0000
Subject: [Python-ideas] A (meta)class algebra
In-Reply-To: <CAK9R32Rvf-3mu_bd63wTmVJAaEb47_4+S++kuXR51a73=k7VLQ@mail.gmail.com>
References: <CAK9R32Rvf-3mu_bd63wTmVJAaEb47_4+S++kuXR51a73=k7VLQ@mail.gmail.com>
Message-ID: <CACac1F94U3As4D8yMb8zow_+sVLfHBkYboOuuuLt7i5E1JOm0w@mail.gmail.com>

On 12 February 2015 at 10:09, Martin Teichmann <lkb.teichmann at gmail.com> wrote:
> def CalculateMetaclass(metatype, type):
>     winner = metatype
>     for b in type.__bases__:
>         if issubclass(winner, b):
>             continue
>         elif issubclass(b, winner):
>             winner = b
>             continue
>         else:
>             raise TypeError("metaclass conflict")
>     return winner
>
> I would replace that with the following function, which uses addition
> istead of issubclass:
>
> def CalculateMetaclass(metatype, type):
>     winner = metatype
>     for b in type.__bases__:
>          try:
>              winner = winner + b
>          except TypeError:
>              raise TypeError("metaclass conflict")

That doesn't seem like a major improvement. You saved a couple of
lines of pretty readable code, for use of a + operator that doesn't
really add things, but rather returns one or the other of its
operands.

And of course, unless you are writing exclusively for Python 3.5
(3.6?) or later, you'd need the original version for backward
compatibility anyway.

Paul

From steve at pearwood.info  Thu Feb 12 11:43:10 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Thu, 12 Feb 2015 21:43:10 +1100
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
Message-ID: <20150212104310.GA2498@ando.pearwood.info>

On Thu, Feb 12, 2015 at 01:07:52AM -0800, Ian Lee wrote:
> Alright, I've tried to gather up all of the feedback and organize it in
> something approaching the alpha draft of a PEP might look like::
> 
> 
> Proposed New Methods on dict
> ============================
> 
> Adds two dicts together, returning a new object with the type of the left
> hand operand. This would be roughly equivalent to calling:
> 
>     >>> new_dict = old_dict.copy();
>     >>> new_dict.update(other_dict)

A very strong -1 on the proposal. We already have a perfectly good way 
to spell dict += , namely dict.update. As for dict + on its own, we have 
a way to spell that too: exactly as you write above.

I think this is a feature that is more useful in theory than in
practice. We already have a way to do a merge in place, and a
copy-and-merge seems like it should be useful, but I'm struggling to
think of any use-cases for it. I've never needed this, and I've never
seen anyone ask how to do this on the tutor or python-list mailing
lists.

I certainly wouldn't want to write 

    new_dict = a + b + c + d 

and have O(N**2) performance when I can do this instead

    new_dict = {}
    for old_dict in (a, b, c, d):
        new_dict.update(old_dict)

and have O(N) performance. It's easy to wrap this in a small utility 
function if you need to do it repeatedly.

There was an earlier proposal to add an updated() built-in, by analogy 
with list.sort/sorted, there would be dict.update/updated. Here's the 
version I have in my personal toolbox (feel free to use it, or not, as 
you see fit):


def updated(base, *mappings, **kw):
    """Return a new dict from one or more mappings and keyword arguments.

    The dict is initialised from the first argument, which must be a mapping
    that supports copy() and update() methods. The dict is then updated
    from any subsequent positional arguments, from left to right, followed
    by any keyword arguments.

    >>> d = updated({'a': 1}, {'b': 2}, [('a', 100), ('c', 200)], c=3)
    >>> d == {'a': 100, 'b': 2, 'c': 3}
    True

    """
    new = base.copy()
    for mapping in mappings + (kw,):
        new.update(mapping)
    return new


although as I said, I've never needed to use it in 15+ years. Someday, 
perhaps... Note that because this is built on the update method, it 
supports sequences of (key,value) tuples, and additional keyword 
arguments, which a binary operator cannot do.

I would give a +0.5 on a proposal to add an updated() builtin: I doubt 
that it will be used often, but perhaps it will come in handy from time 
to time. As the experience with list.sort versus sorted() shows, making 
one a method and one a function helps avoid confusion. The risk of 
confusion makes me less enthusiastic about adding a dict.updated() 
method: only +0 for that.

I dislike the use of + for concatenation, but can live with it. This 
operation, however, is not concatenation, nor is it addition. I am 
strongly -1 on the + operator, and -0.75 on | operator.


-- 
Steve

From steve at pearwood.info  Thu Feb 12 11:58:31 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Thu, 12 Feb 2015 21:58:31 +1100
Subject: [Python-ideas] A (meta)class algebra
In-Reply-To: <CAK9R32Rvf-3mu_bd63wTmVJAaEb47_4+S++kuXR51a73=k7VLQ@mail.gmail.com>
References: <CAK9R32Rvf-3mu_bd63wTmVJAaEb47_4+S++kuXR51a73=k7VLQ@mail.gmail.com>
Message-ID: <20150212105831.GB2498@ando.pearwood.info>

On Thu, Feb 12, 2015 at 11:09:24AM +0100, Martin Teichmann wrote:
> Hi list,
> 
> metaclasses are a really cool thing in python. Unfortunately, it can
> easily get out of hand if you use them too much, once one wants to
> inherit from two classes with different metaclasses one has to go
> some lengths to get things going again. On one hand this is
> unavoidable, as python has no way to guess how two metaclasses
> are to be mixed.
> 
> Often, however, two metaclasses are designed to work together.
> But currently, there is no way to tell the metaclass machinery how
> to mix them.
> 
> I propose to use the + operator to mix metaclasses. This means
> one can simply overwrite __add__ to get a new metaclass mixing
> behavior.

Why the + operator? This doesn't seem to have anything to do with 
addition, or concatenation. Your default implementation of __add__ 
shows that this has nothing to do with anything remotely like the 
standard use of plus:

> def __add__(self, other):
>     if issubclass(self, other):
>         return self
>     return NotImplemented

So why use + instead of ^ or % or ** ? They all seem equally arbitrary.

Just overriding __add__ (or whatever method used) doesn't solve the 
problem. You still have to actually make the metaclasses compatible, so 
that it is meaningful to mix the metaclasses. You wrote:

> They can be easily overwritten to get a different metaclass mixing
> behavior.

I question how easy that would be. If I have understood your proposal 
for metaclass __add__, mc1 + mc2 would have to return a new metaclass, 
mc3, which somehow combines behaviour from the two arguments. Even with 
cooperative metaclasses designed to work together, that sounds to me 
like it would be painful.


-- 
Steve

From rosuav at gmail.com  Thu Feb 12 11:59:35 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Thu, 12 Feb 2015 21:59:35 +1100
Subject: [Python-ideas] Gettext syntax (was Re: Allow "assigning" to ...)
In-Reply-To: <54DC6B90.4090004@egenix.com>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <20150211164615.1fe33cab@fsol>
 <CADiSq7cTrFzhy30zjn0Wz6wacqSQH_1iyw==UJomCL6b8xr9_Q@mail.gmail.com>
 <mbh6u0$fr7$1@ger.gmane.org> <54DC6B90.4090004@egenix.com>
Message-ID: <CAPTjJmozi-WnuYiy2AKCq_eAt04_rDgD3vmRtwiiLA4rjYRW2Q@mail.gmail.com>

On Thu, Feb 12, 2015 at 8:00 PM, M.-A. Lemburg <mal at egenix.com> wrote:
> I think the proper way of doing this would be to add a string
> literal modifier like we have with r'' for raw strings,
> e.g. i'Internationalized String'.
>
> The Python compiler would then map this to a call to
> sys.i18n like so:
>
> i'Internationalized String' -> sys.i18n('Internationalized String')
>
> and applications could set sys.i18n to whatever they use for
> internationalization.

While the syntax is very cute, it seems unnecessary to have compiler
support for something that could just be done with a library. It'd be
impossible to backport code that uses this notation, so you'd have to
either require Python 3.5 (or whatever version), or maintain two
different codebases, or have a 2to3-style precompiler.

Hmm. How hard is it to write a parser for Python code that extends the
grammar in some way? I know there've been a lot of proposals around
the place for decimal.Decimal literals and empty set literals and
such; is there an elegant way to do a proof-of-concept without
touching the C code at all?

ChrisA

From steve at pearwood.info  Thu Feb 12 12:08:21 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Thu, 12 Feb 2015 22:08:21 +1100
Subject: [Python-ideas] Gettext syntax (was Re: Allow "assigning" to ...)
In-Reply-To: <CAPTjJmozi-WnuYiy2AKCq_eAt04_rDgD3vmRtwiiLA4rjYRW2Q@mail.gmail.com>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <20150211164615.1fe33cab@fsol>
 <CADiSq7cTrFzhy30zjn0Wz6wacqSQH_1iyw==UJomCL6b8xr9_Q@mail.gmail.com>
 <mbh6u0$fr7$1@ger.gmane.org> <54DC6B90.4090004@egenix.com>
 <CAPTjJmozi-WnuYiy2AKCq_eAt04_rDgD3vmRtwiiLA4rjYRW2Q@mail.gmail.com>
Message-ID: <20150212110820.GC2498@ando.pearwood.info>

On Thu, Feb 12, 2015 at 09:59:35PM +1100, Chris Angelico wrote:

> Hmm. How hard is it to write a parser for Python code that extends the
> grammar in some way? I know there've been a lot of proposals around
> the place for decimal.Decimal literals and empty set literals and
> such; is there an elegant way to do a proof-of-concept without
> touching the C code at all?

I'm not sure if this is the sort of thing that could be adapted:

http://www.staringispolite.com/likepython/

Obviously LikePython is a joke, but perhaps you could adapt it to do 
what you want?


-- 
Steve

From rosuav at gmail.com  Thu Feb 12 12:27:38 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Thu, 12 Feb 2015 22:27:38 +1100
Subject: [Python-ideas] Gettext syntax (was Re: Allow "assigning" to ...)
In-Reply-To: <20150212110820.GC2498@ando.pearwood.info>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <20150211164615.1fe33cab@fsol>
 <CADiSq7cTrFzhy30zjn0Wz6wacqSQH_1iyw==UJomCL6b8xr9_Q@mail.gmail.com>
 <mbh6u0$fr7$1@ger.gmane.org> <54DC6B90.4090004@egenix.com>
 <CAPTjJmozi-WnuYiy2AKCq_eAt04_rDgD3vmRtwiiLA4rjYRW2Q@mail.gmail.com>
 <20150212110820.GC2498@ando.pearwood.info>
Message-ID: <CAPTjJmo1RuuCOPce=D_QJvCkTdWSVcDNjQ-vGzjd+yM0ReM4ZQ@mail.gmail.com>

On Thu, Feb 12, 2015 at 10:08 PM, Steven D'Aprano <steve at pearwood.info> wrote:
> On Thu, Feb 12, 2015 at 09:59:35PM +1100, Chris Angelico wrote:
>
>> Hmm. How hard is it to write a parser for Python code that extends the
>> grammar in some way? I know there've been a lot of proposals around
>> the place for decimal.Decimal literals and empty set literals and
>> such; is there an elegant way to do a proof-of-concept without
>> touching the C code at all?
>
> I'm not sure if this is the sort of thing that could be adapted:
>
> http://www.staringispolite.com/likepython/
>
> Obviously LikePython is a joke, but perhaps you could adapt it to do
> what you want?

Thanks for the link... but turns out it's actually a very simple
tokenizer that simply discards a particular set of tokens, and passes
the rest through to Python. It's a stateless and very simple
preprocessor. I'm hoping for something that's a little more
intelligent. Basically, what I want is Python's own ast.parse(), but
able to be extended to handle specific additional grammar; currently,
ast.parse in CPython drops through to the C parser, which means the
grammar has to be absolutely exactly the grammar of the host language.

I may have to look into 2to3. If you can run 2to3 on Python 3, then it
would have to be capable of parsing something which is invalid syntax
for its host interpreter, which is what I'm looking for. To take a
very simple example, imagine a Python script that can translate print
statements into print functions, running under Python 3; now change
that 'print' to be some new statement that you're mocking (maybe a
'yield from' if we were developing Python 3.3), and you have a means
of POC testing your grammar.
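As a data point, a token-level rewriter needs no C changes at all for some
of these ideas: in current Python, the hypothetical i'...' literal from the
other subthread tokenizes as the NAME i immediately followed by a STRING,
so a preprocessor can catch the pair. A rough sketch (using _ as a
stand-in for whatever translation hook is eventually chosen):

```python
import io
import tokenize

def translate_i_strings(source):
    """Rewrite the hypothetical i'...' "literal" into _('...') calls.

    Works purely at the token level: i'...' is not a real literal, so
    it tokenizes as the NAME 'i' directly followed by a STRING.
    """
    result = []
    tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))
    i = 0
    while i < len(tokens):
        tok = tokens[i]
        nxt = tokens[i + 1] if i + 1 < len(tokens) else None
        if (tok.type == tokenize.NAME and tok.string == 'i'
                and nxt is not None and nxt.type == tokenize.STRING
                and tok.end == nxt.start):  # adjacent, no whitespace
            result.append((tokenize.NAME, '_'))
            result.append((tokenize.OP, '('))
            result.append((tokenize.STRING, nxt.string))
            result.append((tokenize.OP, ')'))
            i += 2
        else:
            result.append((tok.type, tok.string))
            i += 1
    return tokenize.untokenize(result)

src = "label = i'Internationalized String'\n"
print(translate_i_strings(src))
```

This is a far cry from an extensible ast.parse(), of course, but it is
enough for proof-of-concept experiments with new literal-like syntax.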

ChrisA

From aj at erisian.com.au  Thu Feb 12 12:39:44 2015
From: aj at erisian.com.au (Anthony Towns)
Date: Thu, 12 Feb 2015 21:39:44 +1000
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <20150127030800.GQ20517@ando.pearwood.info>
References: <CALGmxELKJ7Lyahk__97+ANf_A+jf_cw3d1X0vt1j9NbyEgNs0Q@mail.gmail.com>
 <CAPJVwBkiU8zbYfTwmoZL80mAZ5guJGcgziZdTOQL8JuA_3s4WA@mail.gmail.com>
 <CADiSq7eokB6tFTAt8J710NPKuvzdgu7Cg4ewg0fgZ4czDxNtpg@mail.gmail.com>
 <20150125130326.GG20517@ando.pearwood.info>
 <CALGmxEJ+GxY8vjJ-ciS00WwhcFNd1xd1A+6XTiZm9aSozjSUow@mail.gmail.com>
 <20150126063932.GI20517@ando.pearwood.info>
 <CACac1F9LRN+hRv066DxTsjhwwFYCx0tvngsO46hBRGkc-=_zug@mail.gmail.com>
 <20150127030800.GQ20517@ando.pearwood.info>
Message-ID: <CAJS_LCX_KpDsfF1zaYiLaDnP9uYNZrUDysr=6Vh6PRajwR=9hg@mail.gmail.com>

On 27 January 2015 at 13:08, Steven D'Aprano <steve at pearwood.info> wrote:

> Symmetry and asymmetry of "close to" is a side-effect of the way you
> calculate the fuzzy comparison. In real life, "close to" is always
> symmetric because distance is the same whether you measure from A to B
> or from B to A. The distance between two numbers is their difference,
> which is another way of saying the error between them:
>
> delta = abs(x - y)
>
> (delta being the traditional name for this quantity in mathematics), and
> obviously delta doesn't depend on the order of x and y.
>

> Asymmetry is bad, because it is rather surprising and counter-intuitive
> that "x is close to y", but "y is not close to x". It may also be bad in
> a practical sense, because people will forget which order they need to
> give x and y and will give them in the wrong order. I started off with
> an approx_equal function in test_statistics that was symmetric, and I
> could never remember which way the arguments went.
>

> Time permitting, over the next day or so I'll draw up some diagrams to
> show how each of these tactics change what counts as close or not close.
>

If you consider the comparison to be:

   abs(x-y) <= rel_tol * ref

where "ref" is your "reference" value, then all of these are questions
about what "ref" is. Possibilities include:

 * ref = abs(x)
     (asymmetric version, useful for comparing against a known figure)
 * ref = max(abs(x),abs(y))
     (symmetric version)
 * ref = abs(x)+abs(y) or (abs(x)+abs(y))/2
     (alternate symmetric version)
 * ref = zero_tol / rel_tol
     (for comparisons against zero)
 * ref = abs_tol/rel_tol
     (for completeness)

If you're saying:

 >>> z = 1.0 - sum([0.1]*10)
 >>> z == 0
 False
 >>> is_close(0.0, z)
 True

your "reference" value is probably really "1.0" or "0.1" since those are
the values you're working with, but neither of those values are derivable
from the arguments provided to is_close().

Assuming x,y are non-negative and is_close(x,y,rel_tol=r):

 ref = x:
   -rx <= y-x <= rx

 ref = max(x,y):
   -rx <= y-x <= ry

 ref = (x+y)/2:
   -r*(x+y)/2 <= y-x <= r*(x+y)/2
If you set r and x as a constant, then the amounts y can be (below,
above) x for the cases above are:

 rx, rx
 rx, rx/(1-r)
 rx/(1+r/2), rx/(1-r/2)

Since r>0, 1-r != 1, and 1+r/2 != 1-r/2, so these each give slightly
different ranges for a valid y. They're pretty trivial differences though;
eg r=1e-8 and x=10 gives:

 rx         = 1e-7
 rx/(1-r)   = 1.00000001e-07
 rx/(1-r/2) = 1.000000005e-07
 rx/(1+r/2) = 0.999999995e-07

If you're looking at 10% margins for a nominally 100 Ohm resistor (r=0.1,
x=100), that'd translate to deltas of:

 rx         = 10.0
 rx/(1-r)   = 11.11
 rx/(1-r/2) = 10.526
 rx/(1+r/2) =  9.524
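The figures above are easy to reproduce (a throwaway check, not part of
any proposal):

```python
# Numeric check of the tolerance bounds discussed above: how far y may
# sit below/above x for each choice of reference value.
def deltas(r, x):
    return (r * x,              # ref = x
            r * x / (1 - r),    # ref = max(x, y), upper bound
            r * x / (1 - r / 2),  # ref = (x+y)/2, upper bound
            r * x / (1 + r / 2))  # ref = (x+y)/2, lower bound

for label, d in zip(("rx", "rx/(1-r)", "rx/(1-r/2)", "rx/(1+r/2)"),
                    deltas(0.1, 100)):
    print(f"{label:11} = {d:.3f}")
```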

Having an implementation like:

 def is_close(a, b=None, tol=1e-8, ref=None):
   assert (a != 0 and b != 0) or ref is not None
   if b is None:
     assert ref is not None
     b = ref
   if ref is None:
     ref = abs(a)+abs(b)
   return abs(a-b) <= tol*ref

might give you the best of all worlds -- it would let you say things like:

  >>> is_close(1.0, sum([0.1]*10))
  True

  >>> is_close(11, ref=10, tol=0.1)
  True

  >>> n = 26e10
  >>> a = n - sum([n/6]*6)
  >>> b = n - sum([n/7]*7)
  >>> a, b
  (-3.0517578125e-05, 0.0)
  >>> is_close(a, b, ref=n)
  True
  >>> is_close(a, b, ref=1)
  False
  >>> is_close(a, b)
  AssertionError

and get reasonable looking results, I think? (If you want to use an
absolute tolerance, you just specify ref=1, tol=abs_tol).

An alternative thought: rather than a single "is_close" function, maybe it
would make sense for is_close to always be relative, and just provide a
separate function for absolute comparisons, ie:

 def is_close(a, b, tol=1e-8):
    assert a != 0 and b != 0
    # or assert (a==0) == (b==0)
    return abs(a-b) <= tol*(abs(a)+abs(b))

 def is_close_abs(a,b, tol=1e-8):
    return abs(a-b) <= tol

 def is_near_zero(a, tol=1e-8):
    return abs(a) <= tol

Then you'd use is_close() when you wanted something symmetric and easy, and
were more interested in rough accuracy than absolute precision, and if
you wanted to do a 10% resistor check you'd either say:

   is_close_abs(r, 100, tol=10)

or

   is_near_zero(a-100, tol=10)

If you had a sequence of numbers and wanted to do both relative comparisons
(first n significant digits match) and absolute comparisons you'd just have
to say:

  for a in nums:
     assert is_close(a, b) or is_close_abs(a, b)

which doesn't seem that onerous.

Cheers,
aj

-- 
Anthony Towns <aj at erisian.com.au>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150212/4660ba4d/attachment-0001.html>

From wichert at wiggy.net  Thu Feb 12 12:41:37 2015
From: wichert at wiggy.net (Wichert Akkerman)
Date: Thu, 12 Feb 2015 12:41:37 +0100
Subject: [Python-ideas] Gettext syntax (was Re: Allow "assigning" to ...)
In-Reply-To: <54DC6B90.4090004@egenix.com>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <20150211164615.1fe33cab@fsol>
 <CADiSq7cTrFzhy30zjn0Wz6wacqSQH_1iyw==UJomCL6b8xr9_Q@mail.gmail.com>
 <mbh6u0$fr7$1@ger.gmane.org> <54DC6B90.4090004@egenix.com>
Message-ID: <68019871-BE1D-4E20-8672-D2BC6CA2EB88@wiggy.net>


> On 12 Feb 2015, at 10:00, M.-A. Lemburg <mal at egenix.com> wrote:
> 
> On 12.02.2015 04:33, Terry Reedy wrote:
>> A possible solution would be to give 3.x str a __pos__ or __neg__ method, so +'python' or -'python'
>> would mean the translation of 'python' (the snake).  This would be even *less* intrusive and easier
>> to write than _('python').
> 
> Doesn't read well:
> 
> label = +'I like' + +'Python'
> 
> and this is even worse, IMO:
> 
> label = -'I like' + -'Python'
> 
> 
> I think the proper way of doing this would be to add a string
> literal modifier like we have with r'' for raw strings,
> e.g. i'Internationalized String'.

As far as I know, every other language, as well as all i18n-related tools, uses _(...) to mark translatable strings. Changing Python away from that standard (which most, if not all, i18n-aware Python software follows) would be a huge step backwards.

Wichert.


From p.f.moore at gmail.com  Thu Feb 12 13:06:39 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Thu, 12 Feb 2015 12:06:39 +0000
Subject: [Python-ideas] Gettext syntax (was Re: Allow "assigning" to ...)
In-Reply-To: <CAPTjJmo1RuuCOPce=D_QJvCkTdWSVcDNjQ-vGzjd+yM0ReM4ZQ@mail.gmail.com>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <20150211164615.1fe33cab@fsol>
 <CADiSq7cTrFzhy30zjn0Wz6wacqSQH_1iyw==UJomCL6b8xr9_Q@mail.gmail.com>
 <mbh6u0$fr7$1@ger.gmane.org> <54DC6B90.4090004@egenix.com>
 <CAPTjJmozi-WnuYiy2AKCq_eAt04_rDgD3vmRtwiiLA4rjYRW2Q@mail.gmail.com>
 <20150212110820.GC2498@ando.pearwood.info>
 <CAPTjJmo1RuuCOPce=D_QJvCkTdWSVcDNjQ-vGzjd+yM0ReM4ZQ@mail.gmail.com>
Message-ID: <CACac1F-Vt-NWyoVL=aOXuWkzD16irD-F26124C7AVVUR86acqw@mail.gmail.com>

On 12 February 2015 at 11:27, Chris Angelico <rosuav at gmail.com> wrote:
> Basically, what I want is Python's own ast.parse(), but
> able to be extended to handle specific additional grammar; currently,
> ast.parse in CPython drops through to the C parser, which means the
> grammar has to be absolutely exactly the grammar of the host language.

You may want to take a look at MacroPy
(https://github.com/lihaoyi/macropy). I think it's aimed at the sort
of thing you're talking about.

Paul

From donald at stufft.io  Thu Feb 12 13:25:32 2015
From: donald at stufft.io (Donald Stufft)
Date: Thu, 12 Feb 2015 07:25:32 -0500
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <20150212104310.GA2498@ando.pearwood.info>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
Message-ID: <253D2851-CEE1-46F2-8F9B-D0D86C8BDCC4@stufft.io>


> On Feb 12, 2015, at 5:43 AM, Steven D'Aprano <steve at pearwood.info> wrote:
> 
> On Thu, Feb 12, 2015 at 01:07:52AM -0800, Ian Lee wrote:
>> Alright, I've tried to gather up all of the feedback and organize it in
>> something approaching the alpha draft of a PEP might look like::
>> 
>> 
>> Proposed New Methods on dict
>> ============================
>> 
>> Adds two dicts together, returning a new object with the type of the left
>> hand operand. This would be roughly equivalent to calling:
>> 
>>>>> new_dict = old_dict.copy();
>>>>> new_dict.update(other_dict)
> 
> A very strong -1 on the proposal. We already have a perfectly good way 
> to spell dict += , namely dict.update. As for dict + on its own, we have 
> a way to spell that too: exactly as you write above.
> 
> I think this is a feature that is more useful in theory than in 
> practice. Which we already have a way to do a merge in place, and a 
> copy-and-merge seems like it should be useful but I'm struggling to 
> think of any use-cases for it. I've never needed this, and I've never 
> seen anyone ask how to do this on the tutor or python-list mailing 
> lists.

I've wanted this several times, specifically the copying variant of it. I
always get slightly annoyed whenever I have to manually spell out the copy
and the update.

I still think it should use | rather than + though, to match sets.
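A sketch of what the set-style spelling could look like on a dict
subclass today (MergeableDict is a hypothetical name, not part of any
proposal):

```python
# How | behaves for sets today:
assert {1, 2} | {2, 3} == {1, 2, 3}

class MergeableDict(dict):
    """dict with a copy-and-merge | operator, right operand winning."""
    def __or__(self, other):
        new = self.copy()
        new.update(other)
        return new

a = MergeableDict(x=1, y=2)
b = MergeableDict(y=3, z=4)
assert a | b == {'x': 1, 'y': 3, 'z': 4}   # right operand wins
assert b | a == {'x': 1, 'y': 2, 'z': 4}   # so | is not commutative
```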

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA


From mal at egenix.com  Thu Feb 12 14:09:10 2015
From: mal at egenix.com (M.-A. Lemburg)
Date: Thu, 12 Feb 2015 14:09:10 +0100
Subject: [Python-ideas] Gettext syntax (was Re: Allow "assigning" to ...)
In-Reply-To: <68019871-BE1D-4E20-8672-D2BC6CA2EB88@wiggy.net>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>	<CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>	<20150211164615.1fe33cab@fsol>	<CADiSq7cTrFzhy30zjn0Wz6wacqSQH_1iyw==UJomCL6b8xr9_Q@mail.gmail.com>	<mbh6u0$fr7$1@ger.gmane.org>
 <54DC6B90.4090004@egenix.com>
 <68019871-BE1D-4E20-8672-D2BC6CA2EB88@wiggy.net>
Message-ID: <54DCA5F6.7080500@egenix.com>

On 12.02.2015 12:41, Wichert Akkerman wrote:
> 
>> On 12 Feb 2015, at 10:00, M.-A. Lemburg <mal at egenix.com> wrote:
>>
>> On 12.02.2015 04:33, Terry Reedy wrote:
>>> A possible solution would be to give 3.x str a __pos__ or __neg__ method, so +'python' or -'python'
>>> would mean the translation of 'python' (the snake).  This would be even *less* intrusive and easier
>>> to write than _('python').
>>
>> Doesn't read well:
>>
>> label = +'I like' + +'Python'
>>
>> and this is even worse, IMO:
>>
>> label = -'I like' + -'Python'
>>
>>
>> I think the proper way of doing this would be to add a string
>> literal modifier like we have with r'' for raw strings,
>> e.g. i'Internationalized String?.
> 
> As far as I know every other language as well as all i18n-related tools use _(..) to mark translatable strings. Changing Python from using that same standard (as most, if not all i18n-supporting Python software does) to something sounds like a huge step backwards.

Depends on how you look at it, I guess :-)

_(...) is a kludge which was invented because languages don't address
the need at the language level. i"want this translated" would be such
language level syntax.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Feb 12 2015)
>>> Python Projects, Coaching and Consulting ...  http://www.egenix.com/
>>> mxODBC Plone/Zope Database Adapter ...       http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________

::::: Try our mxODBC.Connect Python Database Interface for free ! ::::::

   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
    D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
           Registered at Amtsgericht Duesseldorf: HRB 46611
               http://www.egenix.com/company/contact/

From apalala at gmail.com  Thu Feb 12 14:32:04 2015
From: apalala at gmail.com (=?UTF-8?Q?Juancarlo_A=C3=B1ez?=)
Date: Thu, 12 Feb 2015 09:02:04 -0430
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <20150212104310.GA2498@ando.pearwood.info>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
Message-ID: <CAN1YFWt4C6S-8ST-=7Z35iXu827p1Yjk4+FaMyK5ZvDmFastUA@mail.gmail.com>

On Thu, Feb 12, 2015 at 6:13 AM, Steven D'Aprano <steve at pearwood.info>
wrote:

> I certainly wouldn't want to write
>
>     new_dict = a + b + c + d
>
> and have O(N**2) performance when I can do this instead
>

It would likely not be O(N), but O(N**2) seems exaggerated. Besides, the
most common cases would be:


    new_dict = a + b
    old_dict += a

Both of which can be had in O(N).


>
>     new_dict = {}
>     for old_dict in (a, b, c, d):
>         new_dict.update(old_dict)
>

It would be easier if we could do:

    new_dict = {}
    new_dict.update(a, b, c, d)

It would also be useful if dict.update() returned self, so this would be
valid:

    new_dict = {}.update(a, b, c, d)

If that was so, then '+' could perhaps be implemented in terms of update()
with the help of some smarts from the parser.
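For what it's worth, the multi-source update can be had today with a
three-line helper ('updated' is a hypothetical name), keeping the overall
cost linear in the total number of items:

```python
def updated(target, *sources):
    """Update target from each source in order, then return it."""
    for src in sources:
        target.update(src)
    return target

a, b, c, d = {'w': 0}, {'x': 1}, {'y': 2}, {'z': 3, 'w': 9}
new_dict = updated({}, a, b, c, d)
assert new_dict == {'w': 9, 'x': 1, 'y': 2, 'z': 3}  # later sources win
```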

Cheers,

-- 
Juancarlo *Añez*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150212/af484513/attachment.html>

From ron3200 at gmail.com  Thu Feb 12 14:34:52 2015
From: ron3200 at gmail.com (Ron Adam)
Date: Thu, 12 Feb 2015 07:34:52 -0600
Subject: [Python-ideas] Gettext syntax (was Re: Allow "assigning" to ...)
In-Reply-To: <54DC6B90.4090004@egenix.com>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>	<CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>	<20150211164615.1fe33cab@fsol>	<CADiSq7cTrFzhy30zjn0Wz6wacqSQH_1iyw==UJomCL6b8xr9_Q@mail.gmail.com>
 <mbh6u0$fr7$1@ger.gmane.org> <54DC6B90.4090004@egenix.com>
Message-ID: <mbia5t$4q7$1@ger.gmane.org>



On 02/12/2015 03:00 AM, M.-A. Lemburg wrote:
> On 12.02.2015 04:33, Terry Reedy wrote:
>> >A possible solution would be to give 3.x str a __pos__ or __neg__ method, so +'python' or -'python'
>> >would mean the translation of 'python' (the snake).  This would be even*less*  intrusive and easier
>> >to write than _('python').
> Doesn't read well:
>
> label = +'I like' + +'Python'
>
> and this is even worse, IMO:
>
> label = -'I like' + -'Python'
>
>
> I think the proper way of doing this would be to add a string
> literal modifier like we have with r'' for raw strings,
> e.g. i'Internationalized String'.

Another option may be to add a '_' property to the string type as an 
alternate interface to Gettext, and possibly for use with other 
translators.  The '_' property would still access a global function, but it 
could have a much longer and more meaningful name like __string_translator__.

    label = 'I like'._ + 'Python'._

Then in cases where '_' is used locally as a temp name, like with lambda, 
you can use the '_' property instead of the global '_' function.
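str itself cannot be given new properties, but a subclass shows the shape
of the idea (the upper() call below is just a visible stand-in for a real
lookup through the __string_translator__ hook described above):

```python
def __string_translator__(s):
    # Stand-in for a real gettext-style lookup; upper() makes the
    # "translation" visible in the demo.
    return s.upper()

class S(str):
    @property
    def _(self):
        return __string_translator__(self)

label = S('I like')._ + ' ' + S('Python')._
assert label == 'I LIKE PYTHON'
```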


An advantage of the 'i' prefix notation is the translation could be done at 
parse time instead of run time.  But that may be hard to do because all the 
parts may not be available at parse time. (?)

-Ron


From lkb.teichmann at gmail.com  Thu Feb 12 14:42:01 2015
From: lkb.teichmann at gmail.com (Martin Teichmann)
Date: Thu, 12 Feb 2015 14:42:01 +0100
Subject: [Python-ideas] A (meta)class algebra
Message-ID: <CAK9R32TLOj=bEmSqxCrS_FmbUPFX276JEfbg6Djmte32KpSBHA@mail.gmail.com>

Hi everyone,

I guess I need to give an example to show what my proposal is about.
Imagine a metaclass which registers all classes based on it:

class MetaRegistrar(type):
    def __init__(self, name, bases, dict):
        super().__init__(name, bases, dict)
        Registrar.classlist.append(self)

class Registrar(object, metaclass=MetaRegistrar):
    classlist = []

now every subclass of Registrar is registered into its classlist. Imagine
now you want to register a class which inherits another metaclass.
A real world example I stumbled over was using a QObject from PyQt4.
Then you have:

class Spam(Registrar, QObject):
    pass

which gives you a nice error message. Sure, you can now define a new
metaclass:

class MetaQRegistrar(MetaRegistrar, type(QObject)):
    pass

but then you always have to write something like

class Spam(Registrar, QObject, metaclass=MetaQRegistrar):
    pass

everyone who uses Registrar needs to know the details of how those classes
can be combined. With my addition, one could simply add a method
to MetaRegistrar:

def __add__(self, other):
    class Combination(self, other):
        pass
    return Combination

which mixes MetaRegistrar into whatever comes. Sure, this is a very
simplified example, one would cache the created classes, and maybe
one wants to register classes that can be mixed.

So, I hope this helps to illustrate my idea, now I'm answering specific
questions raised:

> Why the + operator? This doesn't seem to have anything to do with
> addition, or concatentation.

well, technically I'm fine with any method name, one of the binary
operators just has the advantage that all the details with when
__add__ as opposed to __radd__ is called are all sorted out already.
But maybe the | operator is a better choice, as mixing classes is more
like a union of two sets.

> Your default implementation of __add__
> shows that this has nothing to do with anything remotely like the
> standard use of plus:

That's actually not true. My methods combine classes, and if one is
already a subclass of the other, they are already mixed, so yes,
it is like a standard union of sets (I start to like the idea of |)

> Just overriding __add__ (or whatever method used) doesn't solve the
> problem. You still have to actually make the metaclasses compatible, so
> that it is meaningful to mix the metaclasses.

Well, it does solve the problem. Before it was not at all possible to
programmatically make metaclasses compatible, you had to write it
down explicitly, each time of use.

>> They can be easily overwritten to get a different metaclass mixing
>> behavior.
>
> I question how easy that would be. If I have understood your proposal
> for metaclass __add__, mc1 + mc2 would have to return a new metaclass,
> mc3, which somehow combines behaviour from the two arguments. Even with
> cooperative metaclasses designed to work together, that sounds to me
> like it would be painful.

well, to give an example:

class MetaClassMixin(type):
    def __add__(self, other):
        class ret(self, other):
            pass
        return ret

just mixes your class into the other class.
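One wrinkle worth noting, under one reading of the proposal: for
mc1 + mc2 to evaluate at all, __add__ has to be found on the *type of*
the metaclass, since binary operators are looked up on an object's type.
Also, metaclass= already accepts an arbitrary expression, so the mixing
can be spelled out manually today. A runnable sketch (MetaMixer and
OtherMeta are illustrative names, OtherMeta standing in for
type(QObject)):

```python
class MetaMixer(type):
    """Meta-metaclass: lets metaclass objects themselves support +."""
    def __add__(self, other):
        class Combination(self, other):
            pass
        return Combination

class MetaRegistrar(type, metaclass=MetaMixer):
    classlist = []

    def __init__(cls, name, bases, ns):
        super().__init__(name, bases, ns)
        MetaRegistrar.classlist.append(cls)

class OtherMeta(type, metaclass=MetaMixer):
    pass

class Base(metaclass=OtherMeta):     # plays the role of QObject
    pass

# The mixing, spelled out by hand at the class statement:
class Spam(Base, metaclass=MetaRegistrar + OtherMeta):
    pass

assert Spam in MetaRegistrar.classlist   # registration still happens
assert isinstance(Spam, OtherMeta)       # and OtherMeta is satisfied
```

The proposal would amount to having the class machinery evaluate that
`+` automatically when the bases' metaclasses conflict.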

Greetings

Martin

From random832 at fastmail.us  Thu Feb 12 15:46:51 2015
From: random832 at fastmail.us (random832 at fastmail.us)
Date: Thu, 12 Feb 2015 09:46:51 -0500
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <54DC26C9.4060801@canterbury.ac.nz>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC26C9.4060801@canterbury.ac.nz>
Message-ID: <1423752411.2031137.226666513.33573474@webmail.messagingengine.com>

On Wed, Feb 11, 2015, at 23:06, Greg wrote:
> I think associativity is the property in question, and it
> does hold for string and tuple concatenation.
> 
> Dict addition could be made associative by raising an
> exception on duplicate keys.

Why isn't it associative otherwise?

From random832 at fastmail.us  Thu Feb 12 15:49:39 2015
From: random832 at fastmail.us (random832 at fastmail.us)
Date: Thu, 12 Feb 2015 09:49:39 -0500
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <20150212104310.GA2498@ando.pearwood.info>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
Message-ID: <1423752579.2031972.226667741.7E245874@webmail.messagingengine.com>

On Thu, Feb 12, 2015, at 05:43, Steven D'Aprano wrote:
> I think this is a feature that is more useful in theory than in 
> practice. Which we already have a way to do a merge in place, and a 
> copy-and-merge seems like it should be useful but I'm struggling to 
> think of any use-cases for it. I've never needed this, and I've never 
> seen anyone ask how to do this on the tutor or python-list mailing 
> lists.

It's also an attractive nuisance - most cases I can think of for it
would be better served by a view.

From encukou at gmail.com  Thu Feb 12 15:52:07 2015
From: encukou at gmail.com (Petr Viktorin)
Date: Thu, 12 Feb 2015 15:52:07 +0100
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CAN1YFWt4C6S-8ST-=7Z35iXu827p1Yjk4+FaMyK5ZvDmFastUA@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <CAN1YFWt4C6S-8ST-=7Z35iXu827p1Yjk4+FaMyK5ZvDmFastUA@mail.gmail.com>
Message-ID: <CA+=+wqDwQ-1=PzJcMV0LPkkXgQdbN0qJSoaYDv+sXKDims6QHg@mail.gmail.com>

I don't see ChainMap mentioned in this thread, so I'll fix that:

On Thu, Feb 12, 2015 at 2:32 PM, Juancarlo Añez <apalala at gmail.com> wrote:

> most common cases would be:
>
>
>     new_dict = a + b
>     old_dict += a

In today's Python, that's:
from collections import ChainMap
new_dict = dict(ChainMap(b, a))
old_dict.update(a)

>>     new_dict = {}
>>     for old_dict in (a, b, c, d):
>>         new_dict.update(old_dict)
>
>
> It would be easier if we could do:
>
>     new_dict = {}
>     new_dict.update(a, b, c, d)
>
> It would also be useful if dict.update() returned self, so this would be
> valid:
>
>     new_dict = {}.update(a, b, c, d)

In today's Python:
new_dict = dict(ChainMap(d, c, b, a))

Many uses don't need the dict() call at all, e.g. when passing it as
**kwargs, or when it's more useful as a view.
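A quick runnable illustration of the view behaviour:

```python
from collections import ChainMap

a, b = {'x': 1, 'y': 2}, {'y': 3, 'z': 4}

# ChainMap is a view: lookups search b first, then a, with no copying.
merged = ChainMap(b, a)
assert merged['y'] == 3           # b wins, matching update() order

# Materialize only when a real dict is needed.
new_dict = dict(merged)
assert new_dict == {'x': 1, 'y': 3, 'z': 4}

# The view is live: later changes to the underlying dicts show through.
b['y'] = 30
assert merged['y'] == 30
```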

Personally, the lack of a special operator for this has never bothered me.

From rob.cliffe at btinternet.com  Thu Feb 12 15:58:13 2015
From: rob.cliffe at btinternet.com (Rob Cliffe)
Date: Thu, 12 Feb 2015 14:58:13 +0000
Subject: [Python-ideas] Allow "assigning" to Ellipse to discard values.
In-Reply-To: <54DBB65F.1080108@canterbury.ac.nz>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <54DB695F.2080502@sotecware.net>
 <CAG8k2+4o53qVbOzo4XmdchonFBTEfcWamByWfZ9bao0=XiHauw@mail.gmail.com>
 <54DBB65F.1080108@canterbury.ac.nz>
Message-ID: <54DCBF85.3060905@btinternet.com>

On 11/02/2015 20:06, Greg Ewing wrote:
> Daniel Holth wrote:
>> How about zero characters between commas? In ES6 destructing
>> assignment you'd writes something like ,,c = (1,2,3)
>
> Wouldn't play well with the syntax for 1-element tuples:
>
>    c, = stuff
>
> Is len(stuff) expected to be 1 or 2?
>
Does that matter?
In either case the intent is to evaluate stuff and assign its first 
element to c.
We could have a rule that when the left hand side of an assignment is a 
tuple that ends in a final comma then unpacking stops at the final 
comma, and it doesn't matter whether there are any more values to 
unpack.  (Perhaps multiple final commas could mean "unpack until the 
last comma then stop, so
     c,,, = stuff
would mean "raise an exception if len(stuff)<3; otherwise assign the 
first element of stuff to c".)
It could even work for infinite generators!
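(For reference, today's spelling of "take the first items and stop", which already works for infinite generators, is itertools.islice; a sketch, not part of the proposal:)

```python
from itertools import islice, count

# Roughly what "c, = stuff, but ignore the rest" would mean:
c, = islice(count(), 1)               # take the first item, stop there
assert c == 0

# And "first two, raise if there are fewer than two":
first, second = islice(count(10), 2)
assert (first, second) == (10, 11)
```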

I think it's a neat idea, and it would be good if it could be made to work.
Admittedly it would be strictly backwards-incompatible, since
     c, = [1,2]
would assign 1 to c rather than (as now) raising ValueError.
But making something "work" instead of raising an exception is likely to 
break much less code than changing/stopping something from "working".
Rob Cliffe


From rob.cliffe at btinternet.com  Thu Feb 12 16:08:54 2015
From: rob.cliffe at btinternet.com (Rob Cliffe)
Date: Thu, 12 Feb 2015 15:08:54 +0000
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CA+=+wqDwQ-1=PzJcMV0LPkkXgQdbN0qJSoaYDv+sXKDims6QHg@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <CAN1YFWt4C6S-8ST-=7Z35iXu827p1Yjk4+FaMyK5ZvDmFastUA@mail.gmail.com>
 <CA+=+wqDwQ-1=PzJcMV0LPkkXgQdbN0qJSoaYDv+sXKDims6QHg@mail.gmail.com>
Message-ID: <54DCC206.8010604@btinternet.com>


On 12/02/2015 14:52, Petr Viktorin wrote:
> I don't see ChainMap mentioned in this thread, so I'll fix that:
>
> On Thu, Feb 12, 2015 at 2:32 PM, Juancarlo Añez <apalala at gmail.com> wrote:
>
> [snip]
>>
>> It would also be useful if dict.update() returned self, so this would be
>> valid:
>>
>>      new_dict = {}.update(a, b, c, d)
> In today's Python:
> new_dict = dict(ChainMap(d, b, c, a))
>
> Many uses don't need the dict() call -- e.g. when passing it as **kwargs,
> or when it's more useful as a view.
>
> Personally, the lack of a special operator for this has never bothered me.
> _______________________________________________
>
But perhaps the fact that you have, as far as I can see, transposed b 
and c indicates that this is not the most user-friendly API. :-)
Rob Cliffe

From skip.montanaro at gmail.com  Thu Feb 12 16:21:36 2015
From: skip.montanaro at gmail.com (Skip Montanaro)
Date: Thu, 12 Feb 2015 09:21:36 -0600
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
Message-ID: <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>

On Wed, Feb 11, 2015 at 9:59 PM, Chris Angelico <rosuav at gmail.com> wrote:
> Does it have to be? It isn't commutative for strings or tuples either.

Good point. I never think of string "addition" as anything other than
concatenation. I guess the same would be true of dictionaries. Still,
if I add two strings, lists or tuples, the len() of the result is the
same as the sum of the len()s. That wouldn't necessarily be the case
for dictionaries, which, though well-defined, still can leave you open
for a bit of a surprise when the keys overlap.
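The invariant being discussed can be made concrete (illustrative code, not from the post):

```python
# For sequences, len(a + b) == len(a) + len(b) always holds:
s, t = [1, 2], [3]
assert len(s + t) == len(s) + len(t)

# A hypothetical dict "+" (spelled here as copy-then-update) breaks
# that invariant whenever keys overlap:
d1 = {'x': 1, 'y': 2}
d2 = {'y': 3, 'z': 4}
merged = dict(d1)
merged.update(d2)
assert len(merged) == 3   # not len(d1) + len(d2) == 4
```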

Skip

From rosuav at gmail.com  Thu Feb 12 16:46:45 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Fri, 13 Feb 2015 02:46:45 +1100
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
Message-ID: <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>

On Fri, Feb 13, 2015 at 2:21 AM, Skip Montanaro
<skip.montanaro at gmail.com> wrote:
> On Wed, Feb 11, 2015 at 9:59 PM, Chris Angelico <rosuav at gmail.com> wrote:
>> Does it have to be? It isn't commutative for strings or tuples either.
>
> Good point. I never think of string "addition" as anything other than
> concatenation. I guess the same would be true of dictionaries. Still,
> if I add two strings, lists or tuples, the len() of the result is the
> same as the sum of the len()s. That wouldn't necessarily be the case
> for dictionaries, which, though well-defined, still can leave you open
> for a bit of a surprise when the keys overlap.

Yes, but we already know that programming isn't the same as
mathematics. That's even true when we're working with numbers -
usually because of representational limitations - but definitely so
when analogies like "addition" are extended to other data types. But
there are real-world parallels here. Imagine if two people
independently build shopping lists - "this is the stuff we need" - and
then combine them. You'd describe that as "your list and my list", or
"your list plus my list" (but not "your list or my list"; English and
maths tend to get "and" and "or" backward to each other in a lot of
ways), and the resulting list would basically be set union of the
originals. If you have room on one of them, you could go through the
other and add all the entries that aren't already present; otherwise,
you grab a fresh sheet of paper, and start merging the lists. (If
you're smart, you'll group similar items together as you merge. That's
where the analogy breaks down, though.) It makes fine sense to take
two shopping lists and combine them into a new one, discarding any
duplicates.

my_list = {"eggs": 1, "sugar": 1, "milk": 3, "bacon": 5}
your_list = {"milk": 1, "eggs": 1, "sugar": 2, "coffee": 2}

(let's assume we know what units everything's in)

The "sum" of these two lists could possibly be defined as the greater
of the two values, treating them as multisets; but if the assumption
is that you keep track of most of what we need, and you're asking me
if you've missed anything, then having your numbers trump mine makes
fine sense.

I have no problem with the addition of dictionaries resulting in the
set-union of their keys, and the values following them in whatever way
makes the most reasonable sense. Fortunately Python actually has
separate list and dict types, so we don't get the mess of PHP array
merging...
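The two merge semantics described above can be sketched like this (illustrative code, not from the post):

```python
my_list = {"eggs": 1, "sugar": 1, "milk": 3, "bacon": 5}
your_list = {"milk": 1, "eggs": 1, "sugar": 2, "coffee": 2}

# "Your numbers trump mine": a plain last-wins merge
combined = dict(my_list)
combined.update(your_list)
assert combined == {"eggs": 1, "sugar": 2, "milk": 1,
                    "bacon": 5, "coffee": 2}

# Multiset-style merge: the greater of the two quantities wins
greater = {k: max(my_list.get(k, 0), your_list.get(k, 0))
           for k in my_list.keys() | your_list.keys()}
assert greater == {"eggs": 1, "sugar": 2, "milk": 3,
                   "bacon": 5, "coffee": 2}
```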

ChrisA

From barry at python.org  Thu Feb 12 16:53:52 2015
From: barry at python.org (Barry Warsaw)
Date: Thu, 12 Feb 2015 10:53:52 -0500
Subject: [Python-ideas] Gettext syntax (was Re: Allow "assigning" to ...)
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <20150211164615.1fe33cab@fsol>
 <CADiSq7cTrFzhy30zjn0Wz6wacqSQH_1iyw==UJomCL6b8xr9_Q@mail.gmail.com>
 <mbh6u0$fr7$1@ger.gmane.org> <85y4o3dcg2.fsf@benfinney.id.au>
Message-ID: <20150212105352.6bf02477@anarchist.wooz.org>

On Feb 12, 2015, at 02:40 PM, Ben Finney wrote:

>Gettext has been using the name "_" since 1995, which was in Python's
>infancy. The name "_" is pretty obvious, I'm sure it was in use before
>Python came along.

Exactly.  All this was designed for compatibility with GNU gettext conventions
such as in i18n'd C programs.  It was especially useful to interoperate with
source text extraction tools.  We were aware of the collision with the
interactive interpreter convenience but that wasn't deemed enough to buck
established convention.  Using _ to imply "ignore this value" came along
afterward.

Cheers,
-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: OpenPGP digital signature
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150212/087dc257/attachment.sig>

From chris.barker at noaa.gov  Thu Feb 12 17:00:41 2015
From: chris.barker at noaa.gov (Chris Barker - NOAA Federal)
Date: Thu, 12 Feb 2015 08:00:41 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
Message-ID: <5902460774982975660@unknownmsgid>

+1 on +=

Isn't that confusing ;-)

Yes, we have dict.update(), but the entire reason += exists (all the
augmented assignment operators -- they are called that, yes?) is
because we didn't want methods for everything.

In particular, I believe a major motivator was numpy, where in-place
operations are a major performance boost. Sure, numpy could have added
methods for all those, but the operator syntax is so much nicer, why
not use it where we can?

And I prefer + to | ... Seems much more obvious to me.

Chris


-Chris

From masklinn at masklinn.net  Thu Feb 12 17:03:33 2015
From: masklinn at masklinn.net (Masklinn)
Date: Thu, 12 Feb 2015 17:03:33 +0100
Subject: [Python-ideas] Gettext syntax (was Re: Allow "assigning" to ...)
In-Reply-To: <68019871-BE1D-4E20-8672-D2BC6CA2EB88@wiggy.net>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <20150211164615.1fe33cab@fsol>
 <CADiSq7cTrFzhy30zjn0Wz6wacqSQH_1iyw==UJomCL6b8xr9_Q@mail.gmail.com>
 <mbh6u0$fr7$1@ger.gmane.org> <54DC6B90.4090004@egenix.com>
 <68019871-BE1D-4E20-8672-D2BC6CA2EB88@wiggy.net>
Message-ID: <F27BC297-13DF-4CA0-89EF-F7D0119B821D@masklinn.net>

On 2015-02-12, at 12:41 , Wichert Akkerman <wichert at wiggy.net> wrote:
>> On 12 Feb 2015, at 10:00, M.-A. Lemburg <mal at egenix.com> wrote:
>> 
>> On 12.02.2015 04:33, Terry Reedy wrote:
>>> A possible solution would be to give 3.x str a __pos__ or __neg__ method, so +'python' or -'python'
>>> would mean the translation of 'python' (the snake).  This would be even *less* intrusive and easier
>>> to write than _('python').
>> 
>> Doesn't read well:
>> 
>> label = +'I like' + +'Python'
>> 
>> and this is even worse, IMO:
>> 
>> label = -'I like' + -'Python'
>> 
>> 
>> I think the proper way of doing this would be to add a string
>> literal modifier like we have with r'' for raw strings,
>> e.g. i'Internationalized String'.
> 
> As far as I know every other language as well as all i18n-related tools use _(..) to mark translatable strings.

In many functional languages `_` is a universal value-less matcher (that
is, it pattern-matches everything, but is nonsensical as a value).

From barry at python.org  Thu Feb 12 17:09:29 2015
From: barry at python.org (Barry Warsaw)
Date: Thu, 12 Feb 2015 11:09:29 -0500
Subject: [Python-ideas] Gettext syntax (was Re: Allow "assigning" to ...)
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <20150211164615.1fe33cab@fsol>
 <CADiSq7cTrFzhy30zjn0Wz6wacqSQH_1iyw==UJomCL6b8xr9_Q@mail.gmail.com>
 <mbh6u0$fr7$1@ger.gmane.org>
Message-ID: <20150212110929.1cc16d6c@anarchist.wooz.org>

On Feb 11, 2015, at 10:33 PM, Terry Reedy wrote:

>A possible solution would be to give 3.x str a __pos__ or __neg__ method, so
>+'python' or -'python' would mean the translation of 'python' (the snake).
>This would be even *less* intrusive and easier to write than _('python').

This is really unnecessary.  These days, I believe standard GNU xgettext can
parse Python code and can be told to use an alternative marker function.
Tools/i18n/pygettext.py certainly can.

Aside from all that, IIRC pygettext (and maybe xgettext) will only recognize
underscore function calls that take exactly one string.  That doesn't help the
module global collision issue, but oh well.

I might as well use this opportunity to plug my flufl.i18n package:

http://pythonhosted.org/flufl.i18n/

It provides a nice, higher level API for doing i18n.  It also does not install
_ into globals by default.

(FWIW, the reason why stdlib gettext installs _ by default is for ease of use.
People preferred to do the i18n setup in their main module and then just have
that name automatically available everywhere else.  I'm not defending that,
just pointing out the history of this particular baggage.)

Cheers,
-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: OpenPGP digital signature
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150212/9c820219/attachment-0001.sig>

From alexander.belopolsky at gmail.com  Thu Feb 12 17:18:16 2015
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Thu, 12 Feb 2015 11:18:16 -0500
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <20150212104310.GA2498@ando.pearwood.info>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
Message-ID: <CAP7h-xY+7fq=ax77KvoM=8V-qntU9Y--H2a8dToubSTMcxVbCQ@mail.gmail.com>

On Thu, Feb 12, 2015 at 5:43 AM, Steven D'Aprano <steve at pearwood.info>
wrote:

> A very strong -1 on the proposal. We already have a perfectly good way
> to spell dict += , namely dict.update. As for dict + on its own, we have
> a way to spell that too: exactly as you write above.
>

Another strong -1 from me.  In my view, the only operation on dictionaries
that would deserve to be denoted + would be counter or sparse array
addition, but we already have collections.Counter and writing a sparse
array (as in sa(a=1,c=-2) + sa(a=1,b=1,c=1) == sa(a=2,b=1,c=-1)) is a
simple exercise.
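That "simple exercise" might look like this (one possible sketch; sa here is a throwaway helper, not a real API):

```python
from collections import Counter

def sa(**entries):
    # a "sparse array": a plain dict of its nonzero entries
    return dict(entries)

def sa_add(x, y):
    # keywise addition over the union of keys
    return {k: x.get(k, 0) + y.get(k, 0) for k in x.keys() | y.keys()}

assert sa_add(sa(a=1, c=-2), sa(a=1, b=1, c=1)) == sa(a=2, b=1, c=-1)

# Counter's "+" is close, but it drops non-positive results:
assert Counter(a=1, c=-2) + Counter(a=1, b=1, c=1) == Counter(a=2, b=1)
```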
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150212/d66418e1/attachment.html>

From steve at pearwood.info  Thu Feb 12 17:21:25 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 13 Feb 2015 03:21:25 +1100
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
Message-ID: <20150212162125.GD2498@ando.pearwood.info>

On Fri, Feb 13, 2015 at 02:46:45AM +1100, Chris Angelico wrote:

> Imagine if two people independently build shopping lists

Sorry, was that shopping *lists* or shopping *dicts*?

> - "this is the stuff we need" - and
> then combine them. You'd describe that as "your list and my list", or
> "your list plus my list" (but not "your list or my list"; English and
> maths tend to get "and" and "or" backward to each other in a lot of
> ways), and the resulting list would basically be set union of the
> originals. If you have room on one of them, you could go through the
> other and add all the entries that aren't already present; otherwise,
> you grab a fresh sheet of paper, and start merging the lists. (If
> you're smart, you'll group similar items together as you merge. That's
> where the analogy breaks down, though.) It makes fine sense to take
> two shopping lists and combine them into a new one, discarding any
> duplicates.

Why would you discard duplicates? If you need 2 loaves of bread, and I 
need 1 loaf of bread, and the merged shopping list has anything less 
than 3 loaves of bread, one of us is going to miss out.



-- 
Steve

From rosuav at gmail.com  Thu Feb 12 17:27:15 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Fri, 13 Feb 2015 03:27:15 +1100
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <20150212162125.GD2498@ando.pearwood.info>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
Message-ID: <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>

On Fri, Feb 13, 2015 at 3:21 AM, Steven D'Aprano <steve at pearwood.info> wrote:
> On Fri, Feb 13, 2015 at 02:46:45AM +1100, Chris Angelico wrote:
>
>> Imagine if two people independently build shopping lists
>
> Sorry, was that shopping *lists* or shopping *dicts*?

I've never heard anyone in the real world talk about "shopping dicts"
or "shopping arrays" or "shopping collections.Sequences". It's always
"shopping lists". The fact that I might choose to represent one with a
dict is beside the point. :)

>> - "this is the stuff we need" - and
>> then combine them.
>
> Why would you discard duplicates? If you need 2 loaves of bread, and I
> need 1 loaf of bread, and the merged shopping list has anything less
> than 3 loaves of bread, one of us is going to miss out.
>

I live in a house with a lot of people. When I was younger, Mum used
to do most of the shopping, and she'd keep an eye on the fridge and
usually know what needed to be replenished; but some things she didn't
monitor, and relied on someone else to check them. If there's some
overlap in who checks what, we're all going to notice the same need -
we don't have separate requirements here. One person notes that we're
down to our last few eggs and should buy another dozen; another person
also notes that we're down to our last few eggs, but thinks we should
probably get two dozen. Getting *three* dozen is definitely wrong
here. And definitely her views on how much we should buy would trump
my own, as my experience was fairly minimal.

(These days, most of us are adult, and matters are a bit more
complicated. So I'm recalling "the simpler days of my youth" for the
example.)

ChrisA

From chris.barker at noaa.gov  Thu Feb 12 17:41:52 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Thu, 12 Feb 2015 08:41:52 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CAJS_LCX_KpDsfF1zaYiLaDnP9uYNZrUDysr=6Vh6PRajwR=9hg@mail.gmail.com>
References: <CALGmxELKJ7Lyahk__97+ANf_A+jf_cw3d1X0vt1j9NbyEgNs0Q@mail.gmail.com>
 <CAPJVwBkiU8zbYfTwmoZL80mAZ5guJGcgziZdTOQL8JuA_3s4WA@mail.gmail.com>
 <CADiSq7eokB6tFTAt8J710NPKuvzdgu7Cg4ewg0fgZ4czDxNtpg@mail.gmail.com>
 <20150125130326.GG20517@ando.pearwood.info>
 <CALGmxEJ+GxY8vjJ-ciS00WwhcFNd1xd1A+6XTiZm9aSozjSUow@mail.gmail.com>
 <20150126063932.GI20517@ando.pearwood.info>
 <CACac1F9LRN+hRv066DxTsjhwwFYCx0tvngsO46hBRGkc-=_zug@mail.gmail.com>
 <20150127030800.GQ20517@ando.pearwood.info>
 <CAJS_LCX_KpDsfF1zaYiLaDnP9uYNZrUDysr=6Vh6PRajwR=9hg@mail.gmail.com>
Message-ID: <CALGmxEJggLB4steNVPF060Ou-BV6PxhSuA_dbBHOHWivhfe=8Q@mail.gmail.com>

This has been very thoroughly hashed out, so I'll comment on the bits
that are new(ish):

> If you're saying:
>
>  >>> z = 1.0 - sum([0.1]*10)
>  >>> z == 0
>  False
>  >>> is_close(0.0, z)
>  True
>
> your "reference" value is probably really "1.0" or "0.1" since those are
> the values you're working with, but neither of those values are derivable
> from the arguments provided to is_close().
>

this is simple -- don't do that!

isclose(1.0, sum([0.1]*10))

is the right thing to do here, and it will work with any of the methods
that were ever on the table. If you really want to check for something
close to zero, you need an absolute tolerance, not really a reference value
(they are kind of the same, but an absolute tolerance is easier to reason
about):

z = 1.0 - sum([0.1]*10)

>>> is_close(0.0, z, abs_tol=1e-12)
 True

>  def is_close(a, b=None, tol=1e-8, ref=None):
>

I think a b=None default would be really confusing to people!

Setting an optional reference value (I'd probably call it 'scale_val' or
something like that) might make sense:

def isclose(a, b, rel_tol=1e-9, abs_tol=0.0, scale_val=None):

Then we use the scale_val if it's defined, and max(a,b) if it's not --
something like:

if scale_val is not None:
    return abs(a-b) <= abs(rel_tol*scale_val) or abs(a-b) <= abs_tol
else:
    return (abs(a-b) <= abs(rel_tol*a) or abs(a-b) <= abs(rel_tol*b)
            or abs(a-b) <= abs_tol)

This would let users do it pretty much any way they want, while still
allowing the most common use case to use all defaults -- if you don't know
what the heck scale_val means, then simply ignore it.

However, I think that it would require enough thought to use scale_val that
users might as well simply write that one line of code themselves.
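Putting the pieces above together, the variant being sketched in this post would be roughly (scale_val is the name proposed here, not anything shipped):

```python
def isclose(a, b, rel_tol=1e-9, abs_tol=0.0, scale_val=None):
    # Relative test against an explicit scale if one is given,
    # otherwise against either argument (a symmetric "weak" test).
    if scale_val is not None:
        return abs(a - b) <= abs(rel_tol * scale_val) or abs(a - b) <= abs_tol
    return (abs(a - b) <= abs(rel_tol * a)
            or abs(a - b) <= abs(rel_tol * b)
            or abs(a - b) <= abs_tol)

z = 1.0 - sum([0.1] * 10)
assert isclose(1.0, sum([0.1] * 10))    # the usual relative case
assert not isclose(0.0, z)              # a relative test fails near zero
assert isclose(0.0, z, abs_tol=1e-12)   # an absolute tolerance handles it
```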

> and get reasonable looking results, I think? (If you want to use an
> absolute tolerance, you just specify ref=1, tol=abs_tol).
>

way too much thought required there -- see my version.


> An alternative thought: rather than a single "is_close" function, maybe it
> would make sense for is_close to always be relative,
>

yes, that came up...

> If you had a sequence of numbers and wanted to do both relative comparisons
> (first n significant digits match) and absolute comparisons you'd just have
> to say:
>
>   for a in nums:
>      assert is_close(a, b) or is_close_abs(a, b)
>
> which doesn't seem that onerous.
>

maybe not, but still a bit more onerous than:

for a in nums:
     assert is_close(a, b, abs_tol=1e-100)

(and note that your is_close_abs() would require a tolerance value.)

But more to the point, if you want to wrap that up in a function (which I'm
hoping someone will do for unittest), then that function would need the
relative and absolute tolerance levels anyway, and so would have a different
API, which is less than ideal.

-Chris

-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150212/150cd9f6/attachment-0001.html>

From steve at pearwood.info  Thu Feb 12 18:05:30 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 13 Feb 2015 04:05:30 +1100
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CAN1YFWt4C6S-8ST-=7Z35iXu827p1Yjk4+FaMyK5ZvDmFastUA@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <CAN1YFWt4C6S-8ST-=7Z35iXu827p1Yjk4+FaMyK5ZvDmFastUA@mail.gmail.com>
Message-ID: <20150212170530.GE2498@ando.pearwood.info>

On Thu, Feb 12, 2015 at 09:02:04AM -0430, Juancarlo Añez wrote:
> On Thu, Feb 12, 2015 at 6:13 AM, Steven D'Aprano <steve at pearwood.info>
> wrote:
> 
> > I certainly wouldn't want to write
> >
> >     new_dict = a + b + c + d
> >
> > and have O(N**2) performance when I can do this instead
> >
> 
> It would likely not be O(N), but O(N**2) seems exaggerated.

If you want to be pedantic, it's not exactly quadratic behaviour. But 
it would be about the same as list and tuple repeated addition, 
which is also not exactly quadratic behaviour, but we often describe 
it as such. Repeated list addition is *much* slower than O(N):

py> from timeit import Timer
py> t = Timer("sum(([1] for i in range(100)), [])")  # add 100 lists
py> min(t.repeat(number=100))
0.010571401566267014
py> t = Timer("sum(([1] for i in range(1000)), [])")  # 10 times more
py> min(t.repeat(number=100))
0.4032209049910307


So increasing the number of lists being added by a factor of 10 leads to 
a factor of 40 increase in time taken. Increase it by a factor of 10 
again:

py> t = Timer("sum(([1] for i in range(10000)), [])")  # 10 times more
py> min(t.repeat(number=100)) # note the smaller number of trials
34.258460350334644

and now the time goes up by a factor not of 10, nor of 40, but of 
about 85. So the slowdown is getting worse. It's technically not as bad 
as quadratic behaviour, but it's bad enough to justify the term.

Will repeated dict addition be like this? Since dicts are mutable, we 
can't use the string concatenation trick to optimize each operation, so 
I expect so: + will have to copy each of its arguments into a new dict, 
then the next + will copy its arguments into a new dict, and so on. 
Exactly what happens with list.
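The copying cost can be made concrete by counting the items each hypothetical "+" would have to copy (a sketch, not a real implementation):

```python
def sum_dicts(dicts):
    # naive repeated "+": every step copies the whole accumulator
    total, copied = {}, 0
    for d in dicts:
        merged = dict(total)      # copy everything merged so far
        merged.update(d)
        copied += len(total) + len(d)
        total = merged
    return total, copied

_, w100 = sum_dicts([{i: i} for i in range(100)])
_, w1000 = sum_dicts([{i: i} for i in range(1000)])
# ten times as many dicts, roughly a hundred times the copying
assert w1000 / w100 > 50
```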

-- 
Steve

From stephen at xemacs.org  Thu Feb 12 18:52:18 2015
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Fri, 13 Feb 2015 02:52:18 +0900
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
Message-ID: <87twyrm2zx.fsf@uwakimon.sk.tsukuba.ac.jp>

Chris Angelico writes:

 > I live in a house with a lot of people.

I live in a rabbit hutch that can barely hold 3 people, let alone a
week's groceries.  So do most of the natives here.

So when you go to Costco and buy eggs by the gross and pork by the
kilo, you take a bunch of friends, and add your shopping dicts
keywise.  Then you take all the loot to somebody's place, and split up
the orders in Tupperware and collect the money, have some green tea,
and then everybody goes home.

The proposal to have dicts add keywise has actually been made on this
list, and IIRC the last time is when collections.Counter was born.

I'm a definite -1 on "+" or "|" for dicts.  "+=" or "|=" I can live
with as alternative spellings for "update", but they're both pretty
bad, "+" because addition is way too overloaded (even in the shopping
list context) and I'd probably think it means different things in
different contexts, and "|" because the wrong operand wins in
"short-circuit" evaluation.

From carl at oddbird.net  Thu Feb 12 17:57:46 2015
From: carl at oddbird.net (Carl Meyer)
Date: Thu, 12 Feb 2015 09:57:46 -0700
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <20150212104310.GA2498@ando.pearwood.info>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
Message-ID: <54DCDB8A.9030602@oddbird.net>

On 02/12/2015 03:43 AM, Steven D'Aprano wrote:
> I think this is a feature that is more useful in theory than in 
> practice. While we already have a way to do a merge in place, a 
> copy-and-merge seems like it should be useful, but I'm struggling to 
> think of any use-cases for it. I've never needed this, and I've never 
> seen anyone ask how to do this on the tutor or python-list mailing 
> lists.

I think the level of interest in
http://stackoverflow.com/questions/38987/how-can-i-merge-two-python-dictionaries-in-a-single-expression
(almost 1000 upvotes on the question alone) does indicate that the
desire for an expression form of merging dictionaries is not purely
theoretical.

Carl

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: OpenPGP digital signature
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150212/b72cfc67/attachment.sig>

From marky1991 at gmail.com  Thu Feb 12 19:00:14 2015
From: marky1991 at gmail.com (Mark Young)
Date: Thu, 12 Feb 2015 13:00:14 -0500
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <54DCDB8A.9030602@oddbird.net>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info> <54DCDB8A.9030602@oddbird.net>
Message-ID: <CAG3cHaaUSQVPtR+w2vLzOp3+HP_sJ7GS1rdvs8P7rnqXxi5d4Q@mail.gmail.com>

Sure, it's something that people want to do. But then the question becomes
"How do you want to merge them?" and then the answers to that are all over
the board, to the point that there's no one obvious way to do it.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150212/3d5e16c4/attachment.html>

From donald at stufft.io  Thu Feb 12 19:12:26 2015
From: donald at stufft.io (Donald Stufft)
Date: Thu, 12 Feb 2015 13:12:26 -0500
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CAG3cHaaUSQVPtR+w2vLzOp3+HP_sJ7GS1rdvs8P7rnqXxi5d4Q@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info> <54DCDB8A.9030602@oddbird.net>
 <CAG3cHaaUSQVPtR+w2vLzOp3+HP_sJ7GS1rdvs8P7rnqXxi5d4Q@mail.gmail.com>
Message-ID: <30399135-105B-4F1A-9AA6-F9B7DF292519@stufft.io>


> On Feb 12, 2015, at 1:00 PM, Mark Young <marky1991 at gmail.com> wrote:
> 
> Sure, it's something that people want to do. But then the question becomes "How do you want to merge them?" and then the answers to that are all over the board, to the point that there's no one obvious way to do it.

Honestly I don't really think it is all that over the board. Basically in every method of merging that plain dictionaries currently offer it's "last use of the key wins". The only place this isn't the case is specialized dictionary-like classes like Counter. I think using anything other than "last use of the key wins" would be inconsistent with the rest of Python and I think it's honestly the only thing that can work generically too.

See:

>>> a = {1: True, 2: True}
>>> b = {2: False, 3: False}
>>> dict(list(a.items()) + list(b.items()))
{1: True, 2: False, 3: False}

And

>>> a = {1: True, 2: True}
>>> b = {2: False, 3: False}
>>> c = a.copy()
>>> c.update(b)
>>> c
{1: True, 2: False, 3: False}

And

>>> {1: True, 2: True, 2: False, 3: False}
{1: True, 2: False, 3: False}

And

>>> a = {"a": True, "b": True}
>>> b = {"b": False, "c": False}
>>> dict(a, **b)
{'b': False, 'c': False, 'a': True}

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 801 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150212/02f73193/attachment.sig>

From ethan at stoneleaf.us  Thu Feb 12 19:15:41 2015
From: ethan at stoneleaf.us (Ethan Furman)
Date: Thu, 12 Feb 2015 10:15:41 -0800
Subject: [Python-ideas] Allow "assigning" to Ellipse to discard values.
In-Reply-To: <54DCBF85.3060905@btinternet.com>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <54DB695F.2080502@sotecware.net>
 <CAG8k2+4o53qVbOzo4XmdchonFBTEfcWamByWfZ9bao0=XiHauw@mail.gmail.com>
 <54DBB65F.1080108@canterbury.ac.nz> <54DCBF85.3060905@btinternet.com>
Message-ID: <54DCEDCD.7080408@stoneleaf.us>

On 02/12/2015 06:58 AM, Rob Cliffe wrote:
> On 11/02/2015 20:06, Greg Ewing wrote:
>> Daniel Holth wrote:
>>> How about zero characters between commas? In ES6 destructuring
>>> assignment you'd write something like ,,c = (1,2,3)
>>
>> Wouldn't play well with the syntax for 1-element tuples:
>>
>>    c, = stuff
>>
>> Is len(stuff) expected to be 1 or 2?
>
> Does that matter?

Yes, it matters.  I have code that had better raise an exception if 'stuff' is not exactly one element.

> In either case the intent is to evaluate stuff and assign its first element to c.

No, its intent is to assign the only element to c, and raise if there are more, or fewer, elements.
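The behaviour being defended here is easy to demonstrate in current Python:

```python
# Single-element unpacking: exactly one value, or ValueError.
c, = [42]
assert c == 42

for stuff in ([], [1, 2]):
    try:
        c, = stuff
    except ValueError:
        pass  # too few or too many elements raises, as relied upon above
    else:
        raise AssertionError("expected ValueError for %r" % (stuff,))
```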

> We could have a rule that when the left hand side of an assignment is a tuple that ends in a final comma then unpacking
> stops at the final comma, and it doesn't matter whether there are any more values to unpack.  (Perhaps multiple final
> commas could mean "unpack until the last comma then stop, so
>     c,,, = stuff

a) That's ugly.
b) Special cases aren't special enough to break the rules.

> would mean "raise an exception if len(stuff)<3; otherwise assign the first element of stuff to c".)
> It could even work for infinite generators!

Uh, you're kidding, right?

> But making something "work" instead of raising an exception is likely to break much less code than changing/stopping
> something from "working".

It would break mine, so no thanks.

--
~Ethan~

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: OpenPGP digital signature
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150212/e47dab8d/attachment-0001.sig>

From thomas at kluyver.me.uk  Thu Feb 12 19:21:29 2015
From: thomas at kluyver.me.uk (Thomas Kluyver)
Date: Thu, 12 Feb 2015 10:21:29 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <87twyrm2zx.fsf@uwakimon.sk.tsukuba.ac.jp>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <87twyrm2zx.fsf@uwakimon.sk.tsukuba.ac.jp>
Message-ID: <CAOvn4qgRm1H+CCaY=H_FsGSgMAWq1E7U_KXbAWvzi2YfF45Y1A@mail.gmail.com>

On 12 February 2015 at 09:52, Stephen J. Turnbull <stephen at xemacs.org>
wrote:

> I'm a definite -1 on "+" or "|" for dicts.  "+=" or "|=" I can live
> with as alternative spellings for "update", but they're both pretty
> bad, "+" because addition is way too overloaded (even in the shopping
> list context) and I'd probably think it means different things in
> different contexts, and "|" because the wrong operand wins in
> "short-circuit" evaluation.
>

Would this be less controversial if it were a new method rather than an odd
use of an operator? I get the impression that most of the objections are
especially against using an operator.

While I think the operator is elegant, I would be almost as happy with a
method, e.g. updated(). It also makes the semantics very clear when e.g.
Counter objects are involved:

new = a.updated(b)
# equivalent to
new = a.copy()
new.update(b)

It would also be nice to be able to pass multiple dictionaries like
a.updated(b, c), to avoid repeated copying in a.updated(b).updated(c).

Or perhaps even a classmethod:

dict.merged(a, b, c)
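To make the intended semantics concrete, here is a rough sketch of such a helper as a plain function (the name `updated` is hypothetical, in the same "thinking aloud" spirit):

```python
def updated(mapping, *others):
    """Hypothetical dict.updated(): copy, then apply each mapping in order."""
    new = mapping.copy()
    for other in others:
        new.update(other)
    return new

a = {1: 'a', 2: 'b'}
b = {2: 'B', 3: 'c'}
c = {3: 'C'}

merged = updated(a, b, c)  # one copy, unlike a.updated(b).updated(c)
assert merged == {1: 'a', 2: 'B', 3: 'C'}
assert a == {1: 'a', 2: 'b'}  # the original is untouched
```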

Thomas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150212/d86477a8/attachment.html>

From ethan at stoneleaf.us  Thu Feb 12 19:27:54 2015
From: ethan at stoneleaf.us (Ethan Furman)
Date: Thu, 12 Feb 2015 10:27:54 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CAOvn4qgRm1H+CCaY=H_FsGSgMAWq1E7U_KXbAWvzi2YfF45Y1A@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <87twyrm2zx.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CAOvn4qgRm1H+CCaY=H_FsGSgMAWq1E7U_KXbAWvzi2YfF45Y1A@mail.gmail.com>
Message-ID: <54DCF0AA.9050005@stoneleaf.us>

On 02/12/2015 10:21 AM, Thomas Kluyver wrote:

> Would this be less controversial if it were a new method rather than an odd use of an operator? I get the impression
> that most of the objections are especially against using an operator.
> 
> While I think the operator is elegant, I would be almost as happy with a method, e.g. updated(). It also makes the
> semantics very clear when e.g. Counter objects are involved:
> 
> new = a.updated(b)
> # equivalent to
> new = a.copy()
> new.update(b)

If we go with "updated" then it should be a separate function, like "sorted" and "reversed" are.

--
~Ethan~

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: OpenPGP digital signature
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150212/f507b629/attachment.sig>

From thomas at kluyver.me.uk  Thu Feb 12 19:35:23 2015
From: thomas at kluyver.me.uk (Thomas Kluyver)
Date: Thu, 12 Feb 2015 10:35:23 -0800
Subject: [Python-ideas] A (meta)class algebra
In-Reply-To: <CAK9R32TLOj=bEmSqxCrS_FmbUPFX276JEfbg6Djmte32KpSBHA@mail.gmail.com>
References: <CAK9R32TLOj=bEmSqxCrS_FmbUPFX276JEfbg6Djmte32KpSBHA@mail.gmail.com>
Message-ID: <CAOvn4qjowKYPRHQ10zN-vCT6KPcX16Lbo8y04X3YT-N7+xGgsg@mail.gmail.com>

On 12 February 2015 at 05:42, Martin Teichmann <lkb.teichmann at gmail.com>
wrote:

> With my addition, one could simply add a method
> to MetaRegistrar:
>
> def __add__(self, other):
>     class Combination(self, other):
>         pass
>     return Combination
>

Having recently had to refactor some code in IPython which used multiple
inheritance and metaclasses, -1 to this idea.

1. More magic around combining metaclasses would make it harder to follow
what's going on. Explicit (this has a metaclass which is a combination of
two other metaclasses) is better than implicit (this has a metaclass which
can combine itself with other metaclasses found from the MRO).
2. I *want* it to be hard for people to write multiple-inheriting code with
multiple metaclasses. That code is most likely going to be a nightmare for
anyone else to understand, so I want the person creating it to stop and
think if they can do it a different way, like using composition instead of
inheritance.

Thomas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150212/5339c43d/attachment.html>

From ethan at stoneleaf.us  Thu Feb 12 19:36:28 2015
From: ethan at stoneleaf.us (Ethan Furman)
Date: Thu, 12 Feb 2015 10:36:28 -0800
Subject: [Python-ideas] A (meta)class algebra
In-Reply-To: <CAK9R32TLOj=bEmSqxCrS_FmbUPFX276JEfbg6Djmte32KpSBHA@mail.gmail.com>
References: <CAK9R32TLOj=bEmSqxCrS_FmbUPFX276JEfbg6Djmte32KpSBHA@mail.gmail.com>
Message-ID: <54DCF2AC.4020104@stoneleaf.us>

On 02/12/2015 05:42 AM, Martin Teichmann wrote:

> class MetaQRegistrar(MetaRegistrar, type(QObject)):
>     pass
> 
> but then you always have to write something like
> 
> class Spam(Registrar, QObject, metaclass=MetaQRegistrar):
>     pass

or:

  class RQObject(metaclass=MetaQRegistrar):
      pass

  class Spam(RQObject):
      pass

--
~Ethan~

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: OpenPGP digital signature
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150212/8f886338/attachment.sig>

From steve at pearwood.info  Thu Feb 12 19:42:34 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 13 Feb 2015 05:42:34 +1100
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
Message-ID: <20150212184234.GF2498@ando.pearwood.info>

On Fri, Feb 13, 2015 at 03:27:15AM +1100, Chris Angelico wrote:
> On Fri, Feb 13, 2015 at 3:21 AM, Steven D'Aprano <steve at pearwood.info> wrote:
> > On Fri, Feb 13, 2015 at 02:46:45AM +1100, Chris Angelico wrote:
> >
> >> Imagine if two people independently build shopping lists
> >
> > Sorry, was that shopping *lists* or shopping *dicts*?
> 
> I've never heard anyone in the real world talk about "shopping dicts"
> or "shopping arrays" or "shopping collections.Sequences". It's always
> "shopping lists". The fact that I might choose to represent one with a
> dict is beside the point. :)

Sounds like you're copying Lua then :-)


> >> - "this is the stuff we need" - and
> >> then combine them.
> >
> > Why would you discard duplicates? If you need 2 loaves of bread, and I
> > need 1 loaf of bread, and the merged shopping list has anything less
> > than 3 loaves of bread, one of us is going to miss out.
> >
> 
> I live in a house with a lot of people. When I was younger, Mum used
> to do most of the shopping, and she'd keep an eye on the fridge and
> usually know what needed to be replenished; but some things she didn't
> monitor, and relied on someone else to check them. If there's some
> overlap in who checks what, we're all going to notice the same need -
> we don't have separate requirements here.

But why is the Angelico household shopping-list model more appropriate for 
dict addition than the "last value applied wins" model used by 
dict.update? Or the multiset model, where the maximum value wins? Or the 
Counter model where values are added?

One more for luck: the *first* value applied wins, rather than the last. 
You can add new keys that aren't already set, but you cannot change the 
value of existing keys. (You can add new items to Mum's shopping list, 
but if she says 2 loaves of bread, then 2 loaves it is.)


> One person notes that we're
> down to our last few eggs and should buy another dozen; another person
> also notes that we're down to our last few eggs, but thinks we should
> probably get two dozen. Getting *three* dozen is definitely wrong
> here.

Fred is making a quiche and needs a dozen eggs. (It's a big quiche.) 
Wilma is baking some cakes and needs two dozen eggs. If you get less 
than three dozen, someone is going to miss out.

The point I am making is that there are *multiple ways* to "add" two 
dicts, and no one model is best in all circumstances.
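Those models really do disagree on the same inputs, which can be checked directly:

```python
from collections import Counter

a = {'eggs': 24, 'bread': 2}
b = {'eggs': 12, 'milk': 1}

# dict.update model: last value applied wins
last = a.copy()
last.update(b)
assert last['eggs'] == 12

# multiset model: maximum value wins
maxed = {k: max(a.get(k, 0), b.get(k, 0)) for k in a.keys() | b.keys()}
assert maxed['eggs'] == 24

# Counter model: values are added
assert (Counter(a) + Counter(b))['eggs'] == 36

# first-value-wins model: a was applied first, so its values stand
first = b.copy()
first.update(a)
assert first['eggs'] == 24
```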

Sometimes reading python-ideas is like deja vu all over again. Each time 
this issue gets raised, people seem to be convinced that there is an 
obvious and sensible meaning for dict merging. And they are right. If 
only we could all agree on *which* obvious and sensible meaning.

Which is another way of saying that there is no one obviously correct 
and generally useful way to merge two dicts. Well, actually there is 
one, and we already have the dict.update method to do it. If + 
duplicates that, it's redundant, and if it doesn't, it's non-obvious 
and likely to be of more limited use.

Providing any of these alternate merging behaviours in your own code is 
very simple; a small helper function of just a few lines will do the 
trick.



-- 
Steve

From steve at pearwood.info  Thu Feb 12 19:43:32 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 13 Feb 2015 05:43:32 +1100
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <54DCDB8A.9030602@oddbird.net>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info> <54DCDB8A.9030602@oddbird.net>
Message-ID: <20150212184332.GG2498@ando.pearwood.info>

On Thu, Feb 12, 2015 at 09:57:46AM -0700, Carl Meyer wrote:
> On 02/12/2015 03:43 AM, Steven D'Aprano wrote:
> > I think this is a feature that is more useful in theory than in 
> > practice. While we already have a way to do a merge in place, a 
> > copy-and-merge seems like it should be useful but I'm struggling to 
> > think of any use-cases for it. I've never needed this, and I've never 
> > seen anyone ask how to do this on the tutor or python-list mailing 
> > lists.
> 
> I think the level of interest in
> http://stackoverflow.com/questions/38987/how-can-i-merge-two-python-dictionaries-in-a-single-expression
> (almost 1000 upvotes on the question alone) does indicate that the
> desire for an expression form of merging dictionaries is not purely
> theoretical.

I'm going to quote Raymond Hettinger, from a recent discussion here:

[quote]
I wouldn't read too much in the point score on the StackOverflow 
question.  First, the question is very old [...] Second, StackOverflow 
tends to award high scores to the simplest questions and answers [...] A 
high score indicates interest but not a need to change the language. A 
much better indicator would be the frequent appearance of ordered sets 
in real-world code or high download statistics from PyPI or
ActiveState's ASPN cookbook.
[end quote]

In this case, we're not talking about ordered sets. We're talking about 
something which may be as little as two lines of code, and can even 
be squeezed into a single-line function:

def merge(a, b):  d = a.copy(); d.update(b); return d

(but don't do that). StackOverflow's model is designed to encourage 
people to vote on questions as much as possible, and naturally people 
tend to vote more for simple questions that they understand rather than 
hard questions that require a lot of in-depth knowledge or analysis. I 
agree with Raymond that high votes are not in and of themselves evidence 
of frequent need.


-- 
Steve

From thomas at kluyver.me.uk  Thu Feb 12 19:49:49 2015
From: thomas at kluyver.me.uk (Thomas Kluyver)
Date: Thu, 12 Feb 2015 10:49:49 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <54DCF0AA.9050005@stoneleaf.us>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <87twyrm2zx.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CAOvn4qgRm1H+CCaY=H_FsGSgMAWq1E7U_KXbAWvzi2YfF45Y1A@mail.gmail.com>
 <54DCF0AA.9050005@stoneleaf.us>
Message-ID: <CAOvn4qjfGVqr+-W7BZ_33nN9_wph2-xTu6Be=devst10=QBw7A@mail.gmail.com>

On 12 February 2015 at 10:27, Ethan Furman <ethan at stoneleaf.us> wrote:

> If we go with "updated" then it should be a separate function, like
> "sorted" and "reveresed" are.


Arguments that it should be a method on mappings:

1. It's less generally applicable than sorted() or reversed(), which work
for any finite iterable.
2. As a method on the object, it's very clear how it works with different
types - e.g. Counter.updated() uses Counter.update(). This is less clear if
it's based on the first argument to a function.
2a. You can do dict.update(counter_object, foo) to use the dict method on a
Counter instance.
3. updated() as a standalone name is not very obvious - people think of how
to 'combine' or 'merge' dicts, perhaps 'union' if they're of a mathematical
bent. The symmetry between dict.update() and dict.updated() is clearer.
4. Set operations are defined with methods on set objects, not standalone
functions. This seems roughly analogous.
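Points 2 and 2a can be checked directly; the method actually dispatched is what decides the merge semantics (Counter being the usual special case):

```python
from collections import Counter

a = Counter({'x': 1})
b = {'x': 2}

c = Counter(a)
c.update(b)          # Counter.update adds counts
assert c == Counter({'x': 3})

d = Counter(a)
dict.update(d, b)    # plain dict semantics on a Counter: last value wins
assert d == Counter({'x': 2})
```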

Thomas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150212/873c64af/attachment.html>

From python at mrabarnett.plus.com  Thu Feb 12 19:55:23 2015
From: python at mrabarnett.plus.com (MRAB)
Date: Thu, 12 Feb 2015 18:55:23 +0000
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
Message-ID: <54DCF71B.9050900@mrabarnett.plus.com>

On 2015-02-12 16:27, Chris Angelico wrote:
> On Fri, Feb 13, 2015 at 3:21 AM, Steven D'Aprano <steve at pearwood.info> wrote:
>> On Fri, Feb 13, 2015 at 02:46:45AM +1100, Chris Angelico wrote:
>>
>>> Imagine if two people independently build shopping lists
>>
>> Sorry, was that shopping *lists* or shopping *dicts*?
>
> I've never heard anyone in the real world talk about "shopping dicts"
> or "shopping arrays" or "shopping collections.Sequences". It's always
> "shopping lists". The fact that I might choose to represent one with a
> dict is beside the point. :)
>
>>> - "this is the stuff we need" - and
>>> then combine them.
>>
>> Why would you discard duplicates? If you need 2 loaves of bread, and I
>> need 1 loaf of bread, and the merged shopping list has anything less
>> than 3 loaves of bread, one of us is going to miss out.
>>
>
> I live in a house with a lot of people. When I was younger, Mum used
> to do most of the shopping, and she'd keep an eye on the fridge and
> usually know what needed to be replenished; but some things she didn't
> monitor, and relied on someone else to check them. If there's some
> overlap in who checks what, we're all going to notice the same need -
> we don't have separate requirements here. One person notes that we're
> down to our last few eggs and should buy another dozen; another person
> also notes that we're down to our last few eggs, but thinks we should
> probably get two dozen. Getting *three* dozen is definitely wrong
> here. And definitely her views on how much we should buy would trump
> my own, as my experience was fairly minimal.
>
What if X wants a pizza and Y wants a pizza? If you get only one,
someone is going to be unhappy!

> (These days, most of us are adult, and matters are a bit more
> complicated. So I'm recalling "the simpler days of my youth" for the
> example.)
>


From donald at stufft.io  Thu Feb 12 20:10:40 2015
From: donald at stufft.io (Donald Stufft)
Date: Thu, 12 Feb 2015 14:10:40 -0500
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <20150212184234.GF2498@ando.pearwood.info>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <20150212184234.GF2498@ando.pearwood.info>
Message-ID: <C737922B-4E22-44A9-905D-B50E9E1A5204@stufft.io>


> On Feb 12, 2015, at 1:42 PM, Steven D'Aprano <steve at pearwood.info> wrote:
> 
> Which is another way of saying that there is no one obviously correct
> and generally useful way to merge two dicts. Well, actually there is
> one, and we already have the dict.update method to do it. If +
> duplicates that, it's redundant, and if it doesn't, it's non-obvious
> and likely to be of more limited use.

The fact that we already have dict.update indicates to me that the way
dict.update works is the sensible way for + and += to work. I mean by your
logic why do we have + and += for lists? People could just use copy() and
extend() if they wanted to.

Wanting to add two dictionaries is a fairly common desire, both with and
without copying. If it weren't then it wouldn't pop up with some regularity
on python-ideas. The real question is what semantics you give it, which I
think is a fairly silly question because we already have the semantics defined
via dict.update() and the dictionary literal and the dict() constructor itself.
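Under those existing semantics, the operator would just be copy-and-update spelled differently; a minimal sketch (not actual proposal text):

```python
def dict_add(a, b):
    """What a + b could mean for dicts: copy-and-update, last key wins."""
    new = a.copy()
    new.update(b)
    return new

# Agrees with dict(a, **b), dict literals, and dict.update:
result = dict_add({1: True, 2: True}, {2: False, 3: False})
assert result == {1: True, 2: False, 3: False}
```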

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 801 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150212/ff64c7d3/attachment.sig>

From p.f.moore at gmail.com  Thu Feb 12 20:14:00 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Thu, 12 Feb 2015 19:14:00 +0000
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <20150212184332.GG2498@ando.pearwood.info>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <54DCDB8A.9030602@oddbird.net>
 <20150212184332.GG2498@ando.pearwood.info>
Message-ID: <CACac1F-yTQGec_kcyyObTtbrnTyfTMQkDLC2676wQJWR1CpM3w@mail.gmail.com>

On 12 February 2015 at 18:43, Steven D'Aprano <steve at pearwood.info> wrote:
> I'm going to quote Raymond Hettinger, from a recent discussion here:
>
> [quote]
> I wouldn't read too much in the point score on the StackOverflow
> question.  First, the question is very old [...] Second, StackOverflow
> tends to award high scores to the simplest questions and answers [...] A
> high score indicates interest but not a need to change the language. A
> much better indicator would be the frequent appearance of ordered sets
> in real-world code or high download statistics from PyPI or
> ActiveState's ASPN cookbook.
> [end quote]

I'm surprised that this hasn't been mentioned yet in this thread, but
why not create a function, put it on PyPI and collect usage stats from
your users? If they are high enough, a case is made. If no-one
downloads it because writing your own is easier than adding a
dependency, that probably applies to adding it to the core as well
(the "new dependency" then would be not supporting Python <3.5 (or
whatever)). Converting a 3rd party function to a method on the dict
class isn't particularly hard, so I'm not inclined to buy the idea
that people won't use it *purely* because it's a function rather than
a method, which is the only thing a 3rd party package can't do.

I don't think it's a good use for an operator (neither + nor | seem
particularly obvious fits to me, the set union analogy
notwithstanding).

Paul

From breamoreboy at yahoo.co.uk  Thu Feb 12 20:14:35 2015
From: breamoreboy at yahoo.co.uk (Mark Lawrence)
Date: Thu, 12 Feb 2015 19:14:35 +0000
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <20150212184234.GF2498@ando.pearwood.info>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <20150212184234.GF2498@ando.pearwood.info>
Message-ID: <mbiu32$fqr$1@ger.gmane.org>

On 12/02/2015 18:42, Steven D'Aprano wrote:
>
> Sometimes reading python-ideas is like deja vu all over again. Each time
> this issue gets raised, people seem to be convinced that there is an
> obvious and sensible meaning for dict merging. And they are right. If
> only we could all agree on *which* obvious and sensible meaning.
>

To me it's blatantly obvious and I simply don't understand why this is 
endlessly debated.  You take the first item from the rhs, the second 
item from the lhs, the third item from the rhs...

-- 
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence


From p.f.moore at gmail.com  Thu Feb 12 20:19:22 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Thu, 12 Feb 2015 19:19:22 +0000
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <mbiu32$fqr$1@ger.gmane.org>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <20150212184234.GF2498@ando.pearwood.info>
 <mbiu32$fqr$1@ger.gmane.org>
Message-ID: <CACac1F-Q5E5P1Hu6GmZ+M+Pkwx6JHCQNpqMD8bEbH+BxOvDVNw@mail.gmail.com>

On 12 February 2015 at 19:14, Mark Lawrence <breamoreboy at yahoo.co.uk> wrote:
> To me it's blatantly obvious and I simply don't understand why this is
> endlessly debated.  You take the first item from the rhs, the second item
> from the lhs, the third item from the rhs...

Unless you're left handed when you start with the lhs, obviously.
Paul

From marky1991 at gmail.com  Thu Feb 12 20:21:28 2015
From: marky1991 at gmail.com (Mark Young)
Date: Thu, 12 Feb 2015 14:21:28 -0500
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <C737922B-4E22-44A9-905D-B50E9E1A5204@stufft.io>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <20150212184234.GF2498@ando.pearwood.info>
 <C737922B-4E22-44A9-905D-B50E9E1A5204@stufft.io>
Message-ID: <CAG3cHabpmytfquO5bwV8Bsew8JK39qpFx=y=ayNnB5s90Dmogw@mail.gmail.com>

Addition and updating are completely different things. An update implies
that the second value is more up-to-date or more correct, so its values
should be preferred in the event of a clash. Addition has no such
implication, so appealing for consistency between the two doesn't make any
sense to me.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150212/b985f275/attachment.html>

From thomas at kluyver.me.uk  Thu Feb 12 20:22:46 2015
From: thomas at kluyver.me.uk (Thomas Kluyver)
Date: Thu, 12 Feb 2015 11:22:46 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CACac1F-yTQGec_kcyyObTtbrnTyfTMQkDLC2676wQJWR1CpM3w@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info> <54DCDB8A.9030602@oddbird.net>
 <20150212184332.GG2498@ando.pearwood.info>
 <CACac1F-yTQGec_kcyyObTtbrnTyfTMQkDLC2676wQJWR1CpM3w@mail.gmail.com>
Message-ID: <CAOvn4qgxLy+Kj1v-GM_cJ-iyLXAskhKoK1h8_C1B9N0p5hjT5Q@mail.gmail.com>

On 12 February 2015 at 11:14, Paul Moore <p.f.moore at gmail.com> wrote:

> (the "new dependency" then would be not supporting Python <3.5 (or
> whatever))
>

I've seen this argument a few times on python-ideas, and I don't understand
it. By this rationale, there's little point ever adding any new feature to
Python, because people who need to support older versions can't use it.

By posting on python-ideas, you're implicitly prepared to take the long
term view - these are ideas that we might be able to really use in a few
years. One day, relying on Python 3.5 or above will be completely
reasonable, and then we can take advantage of the things we're adding to it
now. It's like an investment in code.

> why not create a function, put it on PyPI and collect usage stats

Does anyone seriously believe that people will add a dependency which
contains a single three line function?

Thomas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150212/7b3f8576/attachment.html>

From donald at stufft.io  Thu Feb 12 20:27:38 2015
From: donald at stufft.io (Donald Stufft)
Date: Thu, 12 Feb 2015 14:27:38 -0500
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CACac1F-yTQGec_kcyyObTtbrnTyfTMQkDLC2676wQJWR1CpM3w@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info> <54DCDB8A.9030602@oddbird.net>
 <20150212184332.GG2498@ando.pearwood.info>
 <CACac1F-yTQGec_kcyyObTtbrnTyfTMQkDLC2676wQJWR1CpM3w@mail.gmail.com>
Message-ID: <868B43AF-2144-47DD-A2C2-45C6920A714A@stufft.io>


> On Feb 12, 2015, at 2:14 PM, Paul Moore <p.f.moore at gmail.com> wrote:
> 
> On 12 February 2015 at 18:43, Steven D'Aprano <steve at pearwood.info> wrote:
>> I'm going to quote Raymond Hettinger, from a recent discussion here:
>> 
>> [quote]
>> I wouldn't read too much in the point score on the StackOverflow
>> question.  First, the question is very old [...] Second, StackOverflow
>> tends to award high scores to the simplest questions and answers [...] A
>> high score indicates interest but not a need to change the language. A
>> much better indicator would be the frequent appearance of ordered sets
>> in real-world code or high download statistics from PyPI or
>> ActiveState's ASPN cookbook.
>> [end quote]
> 
> I'm surprised that this hasn't been mentioned yet in this thread, but
> why not create a function, put it on PyPI and collect usage stats from
> your users? If they are high enough, a case is made. If no-one
> downloads it because writing your own is easier than adding a
> dependency, that probably applies to adding it to the core as well
> (the "new dependency" then would be not supporting Python <3.5 (or
> whatever)). Converting a 3rd party function to a method on the dict
> class isn't particularly hard, so I'm not inclined to buy the idea
> that people won't use it *purely* because it's a function rather than
> a method, which is the only thing a 3rd party package can't do.
> 
> I don't think it's a good use for an operator (neither + nor | seem
> particularly obvious fits to me, the set union analogy
> notwithstanding).

Honestly I don't think "put it on PyPI" is a very useful thing in cases like
these. Consuming a dependency from PyPI has a cost, much the same as dropping
support for some version of Python has a cost. For something with an easy
enough workaround, the cost is unlikely to be low enough for people to be
willing to pay it.

Python often gets little improvements that on their own are not major
enhancements but when looked at cumulatively they add up to a nicer-to-use
language. An example would be set literals: set(["a", "b"]) wasn't confusing,
nor was it particularly hard to use, however being able to type {"a", "b"} is
nice, slightly easier, and just makes the language a little bit better.

Similarly doing:

    new_dict = dict1.copy()
    new_dict.update(dict2)

Isn't confusing or particularly hard to use, however being able to type that
as new_dict = dict1 + dict2 is more succinct, cleaner, and just a little bit
nicer. It adds another small reason why, taken with the other small reasons,
someone might want to drop an older version of Python for a newer version.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 801 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150212/da58b0b8/attachment-0001.sig>

From pmawhorter at gmail.com  Thu Feb 12 21:01:58 2015
From: pmawhorter at gmail.com (Peter Mawhorter)
Date: Thu, 12 Feb 2015 12:01:58 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <868B43AF-2144-47DD-A2C2-45C6920A714A@stufft.io>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info> <54DCDB8A.9030602@oddbird.net>
 <20150212184332.GG2498@ando.pearwood.info>
 <CACac1F-yTQGec_kcyyObTtbrnTyfTMQkDLC2676wQJWR1CpM3w@mail.gmail.com>
 <868B43AF-2144-47DD-A2C2-45C6920A714A@stufft.io>
Message-ID: <CABaZ=CFz0_CMXGUETkdc8PE2vLQe-rim+DWPBh3ZJ41yFt_H3Q@mail.gmail.com>

>
> Honestly I don't think "put it on PyPI" is a very useful thing in cases
> like these. Consuming a dependency from PyPI has a cost, much the same as
> dropping support for some version of Python has a cost. For something with
> an easy enough workaround, the cost is unlikely to be low enough for people
> to be willing to pay it.
>
> Python often gets little improvements that on their own are not major
> enhancements but when looked at cumulatively they add up to a nicer-to-use
> language. An example would be set literals: set(["a", "b"]) wasn't
> confusing, nor was it particularly hard to use, however being able to type
> {"a", "b"} is nice, slightly easier, and just makes the language a little
> bit better.
>
> Similarly doing:
>
>     new_dict = dict1.copy()
>     new_dict.update(dict2)
>
> Isn't confusing or particularly hard to use, however being able to type
> that
> as new_dict = dict1 + dict2 is more succinct, cleaner, and just a little
> bit
> nicer. It adds another small reason why, taken with the other small
> reasons,
> someone might want to drop an older version of Python for a newer version.
>
> ---
> Donald Stufft
> PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
>
>
I'd expect the major constituency that would use an operator for this
(especially over a function) is not well-represented on this list: students
and newer users of python who just want to add two dictionaries together
(they're probably also the ones upvoting that StackOverflow question).
These people aren't going to download anything from PyPI; they probably
don't know that PyPI exists, as I once didn't.

I don't think anyone seriously supports this proposal as a time-saver or
code-elegancer for developers like ourselves: people who are Python experts
and who develop large projects in Python. As demonstrated several times in
this thread, most of us probably already have our own libraries of utility
functions, and adding another one- or three-liner to these is easy for us.
Adding this feature makes a much bigger difference for someone who's
learning to program using Python, or who already knows some programming but
is picking up Python as an additional language.

I see both positives and negatives to the proposal from this point of view.
The positive would be ease of use. Python prides itself on being intuitive,
and it really is much more intuitive in its basic everyday operation than
other widespread languages like Java, in large part (IMO) because of its
easy-to-use built-in default data structures. When I was learning Python,
having already learned some C and Java, the idea of built-in list and
dictionary data structures was an enormous relief. No more trying to
remember the various methods of HashMap or ArrayList or worrying about
which one I should be using where: these basic tools of programming just
worked, and worked the way I'd expect, using very simple syntax. That was a
big draw of the language, and I think the argument can be made here that
adding a '+' operator for dictionaries adds to this quality of the language.

{"a": 1, "b": 2} + {"c": 3}

is the kind of thing that I'd try to see if it worked *before* checking the
docs, and I'd be pleased if it did.

That said, there's a negative here as well. The most obvious update
procedure to me (others can argue about this if they wish) is overwriting
keys from the lhs when a duplicate exists in the rhs. I think a big reason
it seems obvious to me is that this is what I'd do if I had to implement
such a function myself. But whatever resolution you have, it's going to
catch people off-guard and introduce some low-visibility bugs into programs
(especially the programs of the novices who I think are a big target
audience of the idea). Of course, you could go with raising an error, but I
don't think that's very productive, as it forces people to put a try/catch
everywhere they'd use '+' which defeats the purpose of a simpler syntax in
the first place.
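To make the two candidate semantics concrete, here is a sketch of both as plain functions (the names merge_right_wins and merge_or_raise are made up for illustration; neither is an existing API):

```python
def merge_right_wins(d1, d2):
    """Combine two dicts; on a key collision, d2's value wins."""
    out = d1.copy()
    out.update(d2)
    return out

def merge_or_raise(d1, d2):
    """Combine two dicts; raise on any key collision."""
    duplicates = d1.keys() & d2.keys()
    if duplicates:
        raise KeyError("duplicate keys: {}".format(sorted(duplicates)))
    out = d1.copy()
    out.update(d2)
    return out
```

The second variant forces a try/except at every call site where collisions are possible, which is the cost described above.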

Overall I guess I'm +0 on the proposal.
Most useful for novices who just want to add two dictionaries together
already, but also potentially dangerous/frustrating for that same user-base.
A version that raised an exception when there was a key collision would be
pretty useful for novices, but completely useless for experienced
programmers, as the overhead of a try/catch greatly outweighs the syntactic
benefit of an operator.

-Peter Mawhorter
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150212/711c5da4/attachment.html>

From p.f.moore at gmail.com  Thu Feb 12 21:15:10 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Thu, 12 Feb 2015 20:15:10 +0000
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CAOvn4qgxLy+Kj1v-GM_cJ-iyLXAskhKoK1h8_C1B9N0p5hjT5Q@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <54DCDB8A.9030602@oddbird.net>
 <20150212184332.GG2498@ando.pearwood.info>
 <CACac1F-yTQGec_kcyyObTtbrnTyfTMQkDLC2676wQJWR1CpM3w@mail.gmail.com>
 <CAOvn4qgxLy+Kj1v-GM_cJ-iyLXAskhKoK1h8_C1B9N0p5hjT5Q@mail.gmail.com>
Message-ID: <CACac1F8TJ4OxCJF3ODAkFsfQu_g37ED-CxB7dna0=tpeGG=usQ@mail.gmail.com>

On 12 February 2015 at 19:22, Thomas Kluyver <thomas at kluyver.me.uk> wrote:
> On 12 February 2015 at 11:14, Paul Moore <p.f.moore at gmail.com> wrote:
>>
>> (the "new dependency" then would be not supporting Python <3.5 (or
>> whatever))
>
> I've seen this argument a few times on python-ideas, and I don't understand
> it. By this rationale, there's little point ever adding any new feature to
> Python, because people who need to support older versions can't use it.

To me, it means that anything that gets added to Python should be
significant enough to be worth waiting for.

> By posting on python-ideas, you're implicitly prepared to take the long term
> view - these are ideas that we might be able to really use in a few years.
> One day, relying on Python 3.5 or above will be completely reasonable, and
> then we can take advantage of the things we're adding to it now. It's like
> an investment in code.

That's a fair point. You do make me think - when looking at new
features, part of the long term view is to look at making sure a
proposal is clean and well thought through, so that it's something
people will be looking forward to, not merely something "good enough"
to provide a feature.

Evaluating the various proposals we've seen on that basis (and I
concede this is purely my personal feeling):

1. + and += on dictionaries. Looking far enough into the future, I can
see a time when people will see these as obvious and natural analogues
of list and string concatenation. There's a shorter-term problem with
the transition, particularly the fact that a lot of people currently
don't find the "last addition wins" behavior obvious, and the
performance characteristics of repeated addition (with copying). I
could see dict1 + dict2 being seen as a badly-performing anti-pattern,
much like string +, perfectly OK for simple cases but don't use it if
performance matters. And performance is more likely to matter with
dicts than strings.

2. | and |=. I can't imagine ever liking these. I don't like them for
sets, either. Personal opinion certainly.

3. A dict method. Not sure. It depends strongly on the name chosen.
And although it's not a strict rule (sets in particular break it) I
tend to think of methods as being mutating, not creating new copies.

4. An updated() builtin. Feels clean and has a nice parallel with
sorted(). But it's a common word that is often used as a variable, and
I don't think (long term view again!) that a pattern of proliferating
builtins as non-mutating versions of methods is a good one to
encourage.
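For concreteness, the updated() builtin under discussion would presumably look something like this (a sketch with an assumed signature; Steven's actual version isn't reproduced here):

```python
def updated(d, other):
    """Return a new dict: a copy of d with other applied via update().

    The right-hand mapping wins on key collisions, mirroring dict.update().
    """
    new = dict(d)
    new.update(other)
    return new
```

Like sorted() beside list.sort(), it pairs a non-mutating function with the mutating dict.update() method.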

>> why not create a function, put it on PyPI and collect usage stats
>
> Does anyone seriously believe that people will add a dependency which
> contains a single three line function?

I'm sorry, that was a snarky suggestion. (Actually my whole post was
pretty snarky. Hopefully, this post is fairer.)

I do think this is a reasonable situation to invoke the principle that
not every 3-line code snippet deserves to be a builtin. Steven posted
a pretty simple updated() function that projects can use, and I don't
really understand why people can sometimes be so reluctant to include
such utilities in their code.

Anyway, overall I still remain unconvinced that this is worth adding,
but hopefully the above explains my thinking better.

Sorry again for the tone of my previous response.
Paul

From p.f.moore at gmail.com  Thu Feb 12 21:19:02 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Thu, 12 Feb 2015 20:19:02 +0000
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <868B43AF-2144-47DD-A2C2-45C6920A714A@stufft.io>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <54DCDB8A.9030602@oddbird.net>
 <20150212184332.GG2498@ando.pearwood.info>
 <CACac1F-yTQGec_kcyyObTtbrnTyfTMQkDLC2676wQJWR1CpM3w@mail.gmail.com>
 <868B43AF-2144-47DD-A2C2-45C6920A714A@stufft.io>
Message-ID: <CACac1F_9v9tF-saVD_MHsrGfWRx6fyxxc+o2tff5HXRob5hxKQ@mail.gmail.com>

On 12 February 2015 at 19:27, Donald Stufft <donald at stufft.io> wrote:
> Honestly I don't think "put it on PyPI" is a very useful thing in cases like
> these. Consuming a dependency from PyPI has a cost, much the same as dropping
> support for some version of Python has a cost. For something with an easy
> enough workaround, the cost is unlikely to be low enough for people to be
> willing to pay it.
>
> Python often gets little improvements that on their own are not major
> enhancements but when looked at cumulatively they add up to a nicer-to-use
> language. An example would be set literals: set(["a", "b"]) wasn't confusing,
> nor was it particularly hard to use, however being able to type {"a", "b"} is
> nice, slightly easier, and just makes the language a little bit better.
>
> Similarly doing:
>
>     new_dict = dict1.copy()
>     new_dict.update(dict2)
>
> Isn't confusing or particularly hard to use, however being able to type that
> as new_dict = dict1 + dict2 is more succinct, cleaner, and just a little bit
> nicer. It adds another small reason why, taken with the other small reasons,
> someone might want to drop an older version of Python for a newer version.

Well put. My previous post was uncalled for.

I'm still not sure I agree that the proposal is worth it, but this is
a good argument in favour (and confirms my feeling that *if* it were
to happen, spelling it as addition is the right way to do so).

Paul

From antony.lee at berkeley.edu  Thu Feb 12 21:22:55 2015
From: antony.lee at berkeley.edu (Antony Lee)
Date: Thu, 12 Feb 2015 12:22:55 -0800
Subject: [Python-ideas] A (meta)class algebra
In-Reply-To: <54DCF2AC.4020104@stoneleaf.us>
References: <CAK9R32TLOj=bEmSqxCrS_FmbUPFX276JEfbg6Djmte32KpSBHA@mail.gmail.com>
 <54DCF2AC.4020104@stoneleaf.us>
Message-ID: <CAGRr6BEdLeoyH7QK=e4pYz8nNvxpKBHuyjBwVoh3CnHUAWd8yg@mail.gmail.com>

Create the required inheriting metaclass on the fly:

def multi_meta(*clss):
    """A helper for inheriting from classes with different metaclasses.

    Usage:
        class C(multi_meta(A, B)):
            "A class inheriting from classes A, B with different metaclasses."
    """
    mcss = tuple(type(cls) for cls in clss)
    mcs = type("Meta[{}]".format(", ".join(m.__name__ for m in mcss)),
               mcss, {})
    return mcs("[{}]".format(", ".join(cls.__name__ for cls in clss)),
               clss, {})

I have the same use-case as you, i.e. mixing custom metaclasses with
QObjects.

2015-02-12 10:36 GMT-08:00 Ethan Furman <ethan at stoneleaf.us>:

> On 02/12/2015 05:42 AM, Martin Teichmann wrote:
>
> > class MetaQRegistrar(MetaRegistrar, type(QObject)):
> >     pass
> >
> > but then you always have to write something like
> >
> > class Spam(Registrar, QObject, metaclass=MetaQRegistrar):
> >     pass
>
> or:
>
>   class RQObject(metaclass=MetaQRegistrar):
>       pass
>
>   class Spam(RQObject):
>       pass
>
> --
> ~Ethan~
>
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150212/ef33d8da/attachment.html>

From greg.ewing at canterbury.ac.nz  Thu Feb 12 21:57:26 2015
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Fri, 13 Feb 2015 09:57:26 +1300
Subject: [Python-ideas] Gettext syntax (was Re: Allow "assigning" to ...)
In-Reply-To: <54DCA5F6.7080500@egenix.com>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <20150211164615.1fe33cab@fsol>
 <CADiSq7cTrFzhy30zjn0Wz6wacqSQH_1iyw==UJomCL6b8xr9_Q@mail.gmail.com>
 <mbh6u0$fr7$1@ger.gmane.org> <54DC6B90.4090004@egenix.com>
 <68019871-BE1D-4E20-8672-D2BC6CA2EB88@wiggy.net> <54DCA5F6.7080500@egenix.com>
Message-ID: <54DD13B6.5070104@canterbury.ac.nz>

M.-A. Lemburg wrote:
> _(...) is a kludge which was invented because languages don't address
> the need at the language level.

And monkeypatching _ into the builtins was a further
kludge invented because... well, I don't really know
why. Because people were too lazy to import it
themselves?

I think that was a really bad idea. If it hadn't been
done, we would be free to provide a different way
that doesn't implicitly clobber the interactive _
without it affecting existing code.

-- 
Greg

From greg.ewing at canterbury.ac.nz  Thu Feb 12 22:11:55 2015
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Fri, 13 Feb 2015 10:11:55 +1300
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <1423752411.2031137.226666513.33573474@webmail.messagingengine.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC26C9.4060801@canterbury.ac.nz>
 <1423752411.2031137.226666513.33573474@webmail.messagingengine.com>
Message-ID: <54DD171B.7080802@canterbury.ac.nz>

random832 at fastmail.us wrote:
> On Wed, Feb 11, 2015, at 23:06, Greg wrote:
>>
>>Dict addition could be made associative by raising an
>>exception on duplicate keys.
> 
> Why isn't it associative otherwise?

I wasn't thinking straight yesterday. It is of course
associative under left-operand-wins or right-operand-wins
also.

The OP was right that commutativity is the issue. It's
true that there are many non-commutative operations used
in mathematics, but there is usually some kind of symmetry
about them nonetheless. They're not biased towards one
operand or the other.

For example, sequence concatenation obeys the relation

    a + b == reversed(reversed(b) + reversed(a))

and matrix multiplication obeys

    A * B == transpose(transpose(B) * transpose(A))

There would be no such identity for dict addition under
a rule that favours one operand over the other, which
makes it seem like a bad idea to me to use an operator
such as + or | that is normally expected to be symmetrical.
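Both the sequence identity and the asymmetry of a dict merge can be checked concretely; the dict_add helper below is a hypothetical right-operand-wins merge, not an existing API (slicing is used because reversed() returns an iterator, not a list):

```python
a, b = [1, 2, 3], ["x", "y"]
# Concatenation is non-commutative, but a symmetry relates the two orders:
assert a + b == (b[::-1] + a[::-1])[::-1]

def dict_add(d1, d2):
    """Hypothetical dict "addition": rhs wins on key collisions."""
    out = d1.copy()
    out.update(d2)
    return out

d1, d2 = {"k": 1, "m": 0}, {"k": 2}
assert dict_add(d1, d2) != dict_add(d2, d1)  # not commutative...
# ...and unlike concatenation or matrix multiplication, there is no
# involution (like reversed() or transpose()) relating the two orders.
```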

-- 
Greg

From greg.ewing at canterbury.ac.nz  Thu Feb 12 22:24:27 2015
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Fri, 13 Feb 2015 10:24:27 +1300
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
Message-ID: <54DD1A0B.9000202@canterbury.ac.nz>

Chris Angelico wrote:
> we already know that programming isn't the same as
> mathematics.

True, but both mathematicians and programmers share a
desire to be able to reason easily and accurately about
the things they work with.

But my main objection to an asymmetrical + for dicts
is that it would be difficult to remember which way
it went.

Analogy with bytes + sequence would suggest that the
left operand wins. Analogy with dict.update would
suggest that the right operand wins.


> That's even true when we're working with numbers -
> usually because of representational limitations - but definitely so
> when analogies like "addition" are extended to other data types. But
> there are real-world parallels here. Imagine if two people
> independently build shopping lists - "this is the stuff we need" - and
> then combine them. You'd describe that as "your list and my list", or
> "your list plus my list" (but not "your list or my list"; English and
> maths tend to get "and" and "or" backward to each other in a lot of
> ways), and the resulting list would basically be set union of the
> originals. If you have room on one of them, you could go through the
> other and add all the entries that aren't already present; otherwise,
> you grab a fresh sheet of paper, and start merging the lists. (If
> you're smart, you'll group similar items together as you merge. That's
> where the analogy breaks down, though.) It makes fine sense to take
> two shopping lists and combine them into a new one, discarding any
> duplicates.
> 
> my_list = {"eggs": 1, "sugar": 1, "milk": 3, "bacon": 5}
> your_list = {"milk": 1, "eggs": 1, "sugar": 2, "coffee": 2}
> 
> (let's assume we know what units everything's in)
> 
> The "sum" of these two lists could possibly be defined as the greater
> of the two values, treating them as multisets; but if the assumption
> is that you keep track of most of what we need, and you're asking me
> if you've missed anything, then having your numbers trump mine makes
> fine sense.
> 
> I have no problem with the addition of dictionaries resulting in the
> set-union of their keys, and the values following them in whatever way
> makes the most reasonable sense. Fortunately Python actually has
> separate list and dict types, so we don't get the mess of PHP array
> merging...
> 
> ChrisA


From greg.ewing at canterbury.ac.nz  Thu Feb 12 22:29:04 2015
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Fri, 13 Feb 2015 10:29:04 +1300
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <5902460774982975660@unknownmsgid>
References: <5902460774982975660@unknownmsgid>
Message-ID: <54DD1B20.5030509@canterbury.ac.nz>

Chris Barker - NOAA Federal wrote:
> Yes, we have dict.update(), but the entire reason += exists (all the
> augmented assignment operators -- they are called that, yes?) is
> because we didn't want methods for everything.

But it's meant to be an in-place version of whatever the
+ operator does on that type. It doesn't make sense for
a type that doesn't have a + operator.

(And no, you can't argue that dict should be given a
+ operator just so you can give it a += operator!)

-- 
Greg

From python at 2sn.net  Thu Feb 12 22:30:09 2015
From: python at 2sn.net (Alexander Heger)
Date: Fri, 13 Feb 2015 08:30:09 +1100
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CACac1F_9v9tF-saVD_MHsrGfWRx6fyxxc+o2tff5HXRob5hxKQ@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <54DCDB8A.9030602@oddbird.net>
 <20150212184332.GG2498@ando.pearwood.info>
 <CACac1F-yTQGec_kcyyObTtbrnTyfTMQkDLC2676wQJWR1CpM3w@mail.gmail.com>
 <868B43AF-2144-47DD-A2C2-45C6920A714A@stufft.io>
 <CACac1F_9v9tF-saVD_MHsrGfWRx6fyxxc+o2tff5HXRob5hxKQ@mail.gmail.com>
Message-ID: <CAN3CYHxsSxuUX+WXkc3jtdSugEFEFm_H39mgF73ddF-U9Ktexw@mail.gmail.com>

I had started a previous thread on this, asking why there was no
addition defined for dictionaries, along the lines of what Peter argued
a novice would naively expect. ... some deja-vu in the discussion ...
yes, the arguments were then as now:

1) whether to use + or |, other operators that suggest asymmetric
(non-commutative) operation were suggested as well
- in my opinion dictionaries are not sets, so the use of | to indicate
set-like behaviour brings no real advantage ... + seems the most
natural for a novice and is clear enough

2) obviously, there were the same questions whether in case of key
collisions the elements should be added - i.e., apply the + operator
to the elements; it had been argued this was another reason to use |
instead of +, however, why then not apply the | operator on elements
in case of key collisions ...?
- my opinion on this is now that you just *define* - and document -
what the plus operator does for dictionaries, which is most naturally
defined on the basis of update (for +=), with rhs elements superseding
lhs elements in a similar fashion for +, as I would naively expect.
Having this kind of consistent behaviour for dictionaries would be
reasonably easy to understand, and += / + just becomes a shorthand
form for update (or the suggested updated function).  This also avoids
confusion about different behaviour for different operations - the
update behaviour is how dictionaries behave - and it limits the amount
of new code that needs to be written. The operation would be
associative.

3) This would be different from the behaviour of counters, but
counters are not just plain dictionaries: they have well-defined value
types for their keys, so operations on items are well defined, whereas
this is not the case for a dictionary in general.
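A + defined via update, as described, could be prototyped as a dict subclass (a sketch of the proposed semantics only; the class name is made up):

```python
class AddableDict(dict):
    """dict with + and += defined via update(); rhs wins on key collisions."""

    def __add__(self, other):
        new = AddableDict(self)
        new.update(other)
        return new

    def __iadd__(self, other):
        self.update(other)  # += is literally an update, mutating in place
        return self
```

Under these semantics (x + y) + z == x + (y + z), since the rightmost value wins either way.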

I am +1 on both + and +=

-Alexander

From neatnate at gmail.com  Thu Feb 12 22:32:23 2015
From: neatnate at gmail.com (Nathan Schneider)
Date: Thu, 12 Feb 2015 13:32:23 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CACac1F8TJ4OxCJF3ODAkFsfQu_g37ED-CxB7dna0=tpeGG=usQ@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <54DCDB8A.9030602@oddbird.net>
 <20150212184332.GG2498@ando.pearwood.info>
 <CACac1F-yTQGec_kcyyObTtbrnTyfTMQkDLC2676wQJWR1CpM3w@mail.gmail.com>
 <CAOvn4qgxLy+Kj1v-GM_cJ-iyLXAskhKoK1h8_C1B9N0p5hjT5Q@mail.gmail.com>
 <CACac1F8TJ4OxCJF3ODAkFsfQu_g37ED-CxB7dna0=tpeGG=usQ@mail.gmail.com>
Message-ID: <CADQLQrUwRMdrLTJg7nEgtdDJ9wZzi4UjpM_9Uu6WxgeF-4K-8w@mail.gmail.com>

On Thu, Feb 12, 2015 at 12:15 PM, Paul Moore <p.f.moore at gmail.com> wrote:

> On 12 February 2015 at 19:22, Thomas Kluyver <thomas at kluyver.me.uk> wrote:
> > On 12 February 2015 at 11:14, Paul Moore <p.f.moore at gmail.com> wrote:
> >>
> >> (the "new dependency" then would be not supporting Python <3.5 (or
> >> whatever))
> >
> > I've seen this argument a few times on python-ideas, and I don't
> understand
> > it. By this rationale, there's little point ever adding any new feature
> to
> > Python, because people who need to support older versions can't use it.
>
> To me, it means that anything that gets added to Python should be
> significant enough to be worth waiting for.
>
> > By posting on python-ideas, you're implicitly prepared to take the long
> term
> > view - these are ideas that we might be able to really use in a few
> years.
> > One day, relying on Python 3.5 or above will be completely reasonable,
> and
> > then we can take advantage of the things we're adding to it now. It's
> like
> > an investment in code.
>
> That's a fair point. You do make me think - when looking at new
> features, part of the long term view is to look at making sure a
> proposal is clean and well thought through, so that it's something
> people will be looking forward to, not merely something "good enough"
> to provide a feature.
>
> Evaluating the various proposals we've seen on that basis (and I
> concede this is purely my personal feeling):
>
> 1. + and += on dictionaries. Looking far enough into the future, I can
> see a time when people will see these as obvious and natural analogues
> of list and string concatenation. There's a shorter-term problem with
> the transition, particularly the fact that a lot of people currently
> don't find the "last addition wins" behavior obvious, and the
> performance characteristics of repeated addition (with copying). I
> could see dict1 + dict2 being seen as a badly-performing anti-pattern,
> much like string +, perfectly OK for simple cases but don't use it if
> performance matters. And performance is more likely to matter with
> dicts than strings.
>
> 2. | and |=. I can't imagine ever liking these. I don't like them for
> sets, either. Personal opinion certainly.
>
> 3. A dict method. Not sure. It depends strongly on the name chosen.
> And although it's not a strict rule (sets in particular break it) I
> tend to think of methods as being mutating, not creating new copies.
>
> 4. An updated() builtin. Feels clean and has a nice parallel with
> sorted(). But it's a common word that is often used as a variable, and
> I don't think (long term view again!) that a pattern of proliferating
> builtins as non-mutating versions of methods is a good one to
> encourage.
>
>
A reminder about PEP 448: Additional Unpacking Generalizations (
https://www.python.org/dev/peps/pep-0448/), which claims that "it vastly
simplifies types of 'addition' such as combining dictionaries, and does so
in an unambiguous and well-defined way". The spelling for combining two
dicts would be: {**d1, **d2}
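For concreteness, a short sketch of how that spelling behaves (requires PEP 448, i.e. Python 3.5+):

```python
d1 = {'a': 1, 'b': 2}
d2 = {'b': 3, 'c': 4}

# PEP 448 dict display unpacking: a new dict is built, both operands are
# left untouched, and the rightmost value wins on duplicate keys.
merged = {**d1, **d2}
# merged == {'a': 1, 'b': 3, 'c': 4}
```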

Cheers,
Nathan



> >> why not create a function, put it on PyPI and collect usage stats
> >
> > Does anyone seriously believe that people will add a dependency which
> > contains a single three line function?
>
> I'm sorry, that was a snarky suggestion. (Actually my whole post was
> pretty snarky. Hopefully, this post is fairer.)
>
> I do think this is a reasonable situation to invoke the principle that
> not every 3-line code snippet deserves to be a builtin. Steven posted
> a pretty simple updated() function that projects can use, and I don't
> really understand why people can sometimes be so reluctant to include
> such utilities in their code.
>
> Anyway, overall I still remain unconvinced that this is worth adding,
> but hopefully the above explains my thinking better.
>
> Sorry again for the tone of my previous response.
> Paul
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150212/86fc7924/attachment.html>

From greg.ewing at canterbury.ac.nz  Thu Feb 12 22:52:43 2015
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Fri, 13 Feb 2015 10:52:43 +1300
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CACac1F-Q5E5P1Hu6GmZ+M+Pkwx6JHCQNpqMD8bEbH+BxOvDVNw@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <20150212184234.GF2498@ando.pearwood.info> <mbiu32$fqr$1@ger.gmane.org>
 <CACac1F-Q5E5P1Hu6GmZ+M+Pkwx6JHCQNpqMD8bEbH+BxOvDVNw@mail.gmail.com>
Message-ID: <54DD20AB.9050100@canterbury.ac.nz>

Paul Moore wrote:
> On 12 February 2015 at 19:14, Mark Lawrence <breamoreboy at yahoo.co.uk> wrote:
> 
>>You take the first item from the rhs, the second item
>>from the lhs, the third item from the rhs...
> 
> Unless you're left handed when you start with the lhs, obviously.

Unless you live in a country where you drive on the left
side of the road, in which case it's the other way around.

And of course all this is reversed if you're in the
southern hemisphere.

-- 
Greg

From greg.ewing at canterbury.ac.nz  Thu Feb 12 22:56:22 2015
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Fri, 13 Feb 2015 10:56:22 +1300
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <868B43AF-2144-47DD-A2C2-45C6920A714A@stufft.io>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info> <54DCDB8A.9030602@oddbird.net>
 <20150212184332.GG2498@ando.pearwood.info>
 <CACac1F-yTQGec_kcyyObTtbrnTyfTMQkDLC2676wQJWR1CpM3w@mail.gmail.com>
 <868B43AF-2144-47DD-A2C2-45C6920A714A@stufft.io>
Message-ID: <54DD2186.9080200@canterbury.ac.nz>

There's an obvious way out of all this. We add
*two* new operators:

    d1 >> d2  #  left operand wins
    d1 << d2  #  right operand wins

And if we really want to do it properly:

    d1 ^ d2   #  raise exception on duplicate keys
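As a sketch only (not a proposed implementation), the semantics above can be
modelled today on a dict subclass; the class name is made up:

```python
class MergeDict(dict):
    """Illustrative dict subclass giving the proposed operator semantics."""

    def __rshift__(self, other):        # d1 >> d2: left operand wins
        new = MergeDict(other)
        new.update(self)                # self's values overwrite other's
        return new

    def __lshift__(self, other):        # d1 << d2: right operand wins
        new = MergeDict(self)
        new.update(other)               # other's values overwrite self's
        return new

    def __xor__(self, other):           # d1 ^ d2: duplicate keys are an error
        dupes = self.keys() & other.keys()
        if dupes:
            raise KeyError('duplicate keys: %r' % sorted(dupes))
        new = MergeDict(self)
        new.update(other)
        return new
```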

-- 
Greg


From tjreedy at udel.edu  Thu Feb 12 23:12:04 2015
From: tjreedy at udel.edu (Terry Reedy)
Date: Thu, 12 Feb 2015 17:12:04 -0500
Subject: [Python-ideas] Gettext syntax (was Re: Allow "assigning" to ...)
In-Reply-To: <85y4o3dcg2.fsf@benfinney.id.au>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <20150211164615.1fe33cab@fsol>
 <CADiSq7cTrFzhy30zjn0Wz6wacqSQH_1iyw==UJomCL6b8xr9_Q@mail.gmail.com>
 <mbh6u0$fr7$1@ger.gmane.org> <85y4o3dcg2.fsf@benfinney.id.au>
Message-ID: <mbj8fr$eon$1@ger.gmane.org>

On 2/11/2015 10:40 PM, Ben Finney wrote:
> Terry Reedy <tjreedy at udel.edu> writes:

Nick Coghlan wrote a comment 'pointing his finger' at people who use 
'_' to mean something other than the gettext conversion function.  I 
replied ...


>> People are just copying python itself, which defines _ as a throwaway
>> name. I suspect it did so before gettext.
>>
>> By reusing _, gettext made interactive experimentation tricky.

> Gettext has been using the name '_' since 1995, which was in Python's
> infancy. The name '_' is pretty obvious, I'm sure it was in use before
> Python came along.

I believe Python 1.0 was released in Feb 1992.  I strongly suspect that 
'_' was bound to expression in the interactive interpreter before the 
gettext module was added.

> So, rather than the point-the-finger phrasing you give above,

By snipping Nick's point-the-finger phrasing, you are being a little 
unfair to me.

> I'd say instead:
>
> By choosing the name '_', both Gettext and Python (and any of numerous
> other systems which chose that name) made a name collision likely.

And so I suggested a possible solution.

-- 
Terry Jan Reedy



From chris.barker at noaa.gov  Thu Feb 12 23:14:46 2015
From: chris.barker at noaa.gov (Chris Barker - NOAA Federal)
Date: Thu, 12 Feb 2015 14:14:46 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <54DD1B20.5030509@canterbury.ac.nz>
References: <5902460774982975660@unknownmsgid>
 <54DD1B20.5030509@canterbury.ac.nz>
Message-ID: <-6722832763365746728@unknownmsgid>

>
> (And no, you can't argue that dict should be given a
> + operator just so you can give it a += operator!)

Well, I like + anyway, so I don't need to even try that argument.

For the record, the fact that + doesn't work for sets is confusing to
non-computer scientists (or logicians), too.

-Chris

>
> --
> Greg
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/

From tjreedy at udel.edu  Thu Feb 12 23:41:24 2015
From: tjreedy at udel.edu (Terry Reedy)
Date: Thu, 12 Feb 2015 17:41:24 -0500
Subject: [Python-ideas] Gettext syntax (was Re: Allow "assigning" to ...)
In-Reply-To: <20150212105352.6bf02477@anarchist.wooz.org>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <20150211164615.1fe33cab@fsol>
 <CADiSq7cTrFzhy30zjn0Wz6wacqSQH_1iyw==UJomCL6b8xr9_Q@mail.gmail.com>
 <mbh6u0$fr7$1@ger.gmane.org> <85y4o3dcg2.fsf@benfinney.id.au>
 <20150212105352.6bf02477@anarchist.wooz.org>
Message-ID: <mbja6q$aat$1@ger.gmane.org>

On 2/12/2015 10:53 AM, Barry Warsaw wrote:
> On Feb 12, 2015, at 02:40 PM, Ben Finney wrote:
>
>> Gettext has been using the name '_' since 1995, which was in Python's
>> infancy. The name '_' is pretty obvious, I'm sure it was in use before
>> Python came along.
>
> Exactly.  All this was designed for compatibility with GNU gettext conventions
> such as in i18n'd C programs.  It was especially useful to interoperate with
> source text extraction tools.

Are you referring to then existing tools that would pull out string 
literals inside _(...) and put them in a file for translation?  I can 
see how you (all) would not want to break them.

>  We were aware of the collision with the
> interactive interpreter convenience but that wasn't deemed enough to buck
> established convention.

It leads to this

 >>> import gettext
 >>> gettext.install('foo')
 >>> _
<bound method NullTranslations.gettext of <gettext.NullTranslations 
object at 0x000000000361E978>>
 >>> _('abc')
'abc'
 >>> _
'abc'  # whoops

or
 >>> _('cat')
...
TypeError: 'str' object is not callable

Of course, one can learn to forgo the '_' convenience and test or 
experiment with explicit assignments.

 >>> s = _('cat')
 >>> s
'gato'  # for instance

I presume you decided then that this was less evil than bucking the 
external convention.

>  Using _ to imply "ignore this value" came along afterward.

Since interactive _ is usually ignored, I do not see this usage as all 
that different, except that it extends the conflict to batch mode -- 
which is, of course, an important difference.



-- 
Terry Jan Reedy



From ericsnowcurrently at gmail.com  Fri Feb 13 00:28:36 2015
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Thu, 12 Feb 2015 16:28:36 -0700
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <253D2851-CEE1-46F2-8F9B-D0D86C8BDCC4@stufft.io>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <253D2851-CEE1-46F2-8F9B-D0D86C8BDCC4@stufft.io>
Message-ID: <CALFfu7BtdMAKN1UcqkFpGw4A_nKzhKPZ_UtXURSg6yqueHC0NA@mail.gmail.com>

On Thu, Feb 12, 2015 at 5:25 AM, Donald Stufft <donald at stufft.io> wrote:
> I've wanted this several times, explicitly the copying variant of it. I always
> get slightly annoyed whenever I have to manually spell out the copy and the
> update.

copy-and-update:

  dict(old_dict, **other_dict)
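Concretely (note this idiom only works when the second dict's keys are all
strings, since they are passed as keyword arguments):

```python
old_dict = {'a': 1, 'b': 2}
other_dict = {'b': 3, 'c': 4}

# Copy old_dict, then merge other_dict in; other_dict's values win.
merged = dict(old_dict, **other_dict)
# merged == {'a': 1, 'b': 3, 'c': 4}
```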

-eric

From donald at stufft.io  Fri Feb 13 00:29:47 2015
From: donald at stufft.io (Donald Stufft)
Date: Thu, 12 Feb 2015 18:29:47 -0500
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CALFfu7BtdMAKN1UcqkFpGw4A_nKzhKPZ_UtXURSg6yqueHC0NA@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <253D2851-CEE1-46F2-8F9B-D0D86C8BDCC4@stufft.io>
 <CALFfu7BtdMAKN1UcqkFpGw4A_nKzhKPZ_UtXURSg6yqueHC0NA@mail.gmail.com>
Message-ID: <6AD6F919-634F-44C4-AAA3-928A96608DF2@stufft.io>


> On Feb 12, 2015, at 6:28 PM, Eric Snow <ericsnowcurrently at gmail.com> wrote:
> 
> On Thu, Feb 12, 2015 at 5:25 AM, Donald Stufft <donald at stufft.io> wrote:
>> I've wanted this several times, explicitly the copying variant of it. I always
>> get slightly annoyed whenever I have to manually spell out the copy and the
>> update.
> 
> copy-and-update:
> 
>  dict(old_dict, **other_dict)
> 

Only works if other_dict's keys are all valid keyword arguments and AFAIK is considered an implementation detail of CPython.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA


From carl at oddbird.net  Fri Feb 13 00:38:24 2015
From: carl at oddbird.net (Carl Meyer)
Date: Thu, 12 Feb 2015 16:38:24 -0700
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <6AD6F919-634F-44C4-AAA3-928A96608DF2@stufft.io>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <253D2851-CEE1-46F2-8F9B-D0D86C8BDCC4@stufft.io>
 <CALFfu7BtdMAKN1UcqkFpGw4A_nKzhKPZ_UtXURSg6yqueHC0NA@mail.gmail.com>
 <6AD6F919-634F-44C4-AAA3-928A96608DF2@stufft.io>
Message-ID: <54DD3970.7090009@oddbird.net>

On 02/12/2015 04:29 PM, Donald Stufft wrote:
> 
>> On Feb 12, 2015, at 6:28 PM, Eric Snow <ericsnowcurrently at gmail.com> wrote:
>>
>> On Thu, Feb 12, 2015 at 5:25 AM, Donald Stufft <donald at stufft.io> wrote:
>>> I've wanted this several times, explicitly the copying variant of it. I always
>>> get slightly annoyed whenever I have to manually spell out the copy and the
>>> update.
>>
>> copy-and-update:
>>
>>  dict(old_dict, **other_dict)
>>
> 
> Only works if other_dict's keys are all valid keyword arguments and AFAIK is considered an implementation detail of CPython.

To be clear, kwargs to the dict() constructor are not an implementation
detail of CPython. The implementation detail of CPython (2.x only) is
that this technique works at all if other_dict has keys which are not
strings. That has been changed for consistency in Python 3, and never
worked in any of the alternative Python implementations AFAIK. So you're
right that this technique is not usable as a general-purpose
copy-and-update.
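A quick demonstration of both points on Python 3: the kwargs spelling itself is
fine, but it breaks as soon as the second dict has a non-string key:

```python
# All keys are strings: works everywhere.
ok = dict({'a': 1}, **{'b': 2})

# Non-string key: raises TypeError on Python 3 (keywords must be strings).
failed = False
try:
    dict({'a': 1}, **{2: 'two'})
except TypeError:
    failed = True
```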

Also, Guido doesn't like it:
http://mail.python.org/pipermail/python-dev/2010-April/099459.html

I think the fact that it is so often recommended is another bit of
evidence that there is demand for copy-and-update-as-expression, though.

Carl


From ericsnowcurrently at gmail.com  Fri Feb 13 00:43:43 2015
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Thu, 12 Feb 2015 16:43:43 -0700
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <20150212104310.GA2498@ando.pearwood.info>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
Message-ID: <CALFfu7BS4nc-cRTPrZ2sAvN1S9=An673KE-+Xdtx_CVtergDuA@mail.gmail.com>

On Thu, Feb 12, 2015 at 3:43 AM, Steven D'Aprano <steve at pearwood.info> wrote:
> A very strong -1 on the proposal. We already have a perfectly good way
> to spell dict += , namely dict.update. As for dict + on its own, we have
> a way to spell that too: exactly as you write above.

From what I understand, the whole point of "d + d" (or "d | d") is as
an alternative to the PEP 448 proposal of allowing multiple keyword
arg unpacking clauses ("**") in function calls.  So instead of "f(**a,
**b)" it would be "f(**a+b)" (or "f(**a|b)").  However, the upside to
the PEP 448 syntax is that merging could be done without requiring an
additional intermediate dict.  Personally, I'd rather have the syntax
than the operator (particularly since it would apply to the dict
constructor as well: "dict(**a, **b, **c)").
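A sketch of the difference (`f` here is just a stand-in function that echoes
its keyword arguments):

```python
def f(**kwargs):
    return kwargs

a = {'x': 1}
b = {'y': 2}

# Today's spelling: build an intermediate merged dict, then unpack once.
merged = dict(a)
merged.update(b)
result = f(**merged)

# With PEP 448 (Python 3.5+), no intermediate dict appears in user code:
#     result = f(**a, **b)
```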

-eric

From barry at python.org  Fri Feb 13 00:45:28 2015
From: barry at python.org (Barry Warsaw)
Date: Thu, 12 Feb 2015 18:45:28 -0500
Subject: [Python-ideas] Gettext syntax (was Re: Allow "assigning" to ...)
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <20150211164615.1fe33cab@fsol>
 <CADiSq7cTrFzhy30zjn0Wz6wacqSQH_1iyw==UJomCL6b8xr9_Q@mail.gmail.com>
 <mbh6u0$fr7$1@ger.gmane.org> <85y4o3dcg2.fsf@benfinney.id.au>
 <20150212105352.6bf02477@anarchist.wooz.org>
 <mbja6q$aat$1@ger.gmane.org>
Message-ID: <20150212184528.740fa2a9@anarchist.wooz.org>

On Feb 12, 2015, at 05:41 PM, Terry Reedy wrote:

>Are you referring to then existing tools that would pull out string literals
>inside _(...) and put them in a file for translation?

Yes.

>> We were aware of the collision with the interactive interpreter convenience
>> but that wasn't deemed enough to buck established convention.
>
>It leads to this

[...]

>I presume you decided then that this was less evil that bucking the external
>convention.

Right, and observing that interactive interpreter use has always been rather
special.

Cheers,
-Barry

From donald at stufft.io  Fri Feb 13 00:47:13 2015
From: donald at stufft.io (Donald Stufft)
Date: Thu, 12 Feb 2015 18:47:13 -0500
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CALFfu7BS4nc-cRTPrZ2sAvN1S9=An673KE-+Xdtx_CVtergDuA@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <CALFfu7BS4nc-cRTPrZ2sAvN1S9=An673KE-+Xdtx_CVtergDuA@mail.gmail.com>
Message-ID: <582D9FDE-A8F4-44B0-97C3-0C9DE570D601@stufft.io>


> On Feb 12, 2015, at 6:43 PM, Eric Snow <ericsnowcurrently at gmail.com> wrote:
> 
> On Thu, Feb 12, 2015 at 3:43 AM, Steven D'Aprano <steve at pearwood.info> wrote:
>> A very strong -1 on the proposal. We already have a perfectly good way
>> to spell dict += , namely dict.update. As for dict + on its own, we have
>> a way to spell that too: exactly as you write above.
> 
> From what I understand, the whole point of "d + d" (or "d | d") is as
> an alternative to the PEP 448 proposal of allowing multiple keyword
> arg unpacking clauses ("**") in function calls.  So instead of "f(**a,
> **b)" it would be "f(**a+b)" (or "f(**a|b)").  However, the upside to
> the PEP 448 syntax is that merging could be done without requiring an
> additional intermediate dict.  Personally, I'd rather have the syntax
> than the operator (particularly since it would apply to the dict
> constructor as well: "dict(**a, **b, **c)")

That's one potential use case but it can be used in a lot more situations
than just that one. Hence why I said in that thread that being able to
merge dictionaries is a much more general construct than only being able
to merge dictionaries inside of a function call.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA


From abarnert at yahoo.com  Fri Feb 13 00:59:24 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Thu, 12 Feb 2015 23:59:24 +0000 (UTC)
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
References: <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
Message-ID: <541820438.2488417.1423785564817.JavaMail.yahoo@mail.yahoo.com>

On Thursday, February 12, 2015 1:08 AM, Ian Lee <ianlee1521 at gmail.com> wrote:

>Alright, I've tried to gather up all of the feedback and organize it in something approaching the alpha draft of a PEP might look like::
>
>Proposed New Methods on dict
>============================

Hold on. If you really only want these to work on dict, as opposed to dict subclasses and collections.abc.Mapping subclasses, you don't need the whole section on which type wins, and you can make everything a whole lot simpler.

On the other hand, if you _do_ want them to work on all mappings, your definition doesn't work at all.

>Adds two dicts together, returning a new object with the type of the left hand
>operand. This would be roughly equivalent to calling:
>
>
>    >>> new_dict = old_dict.copy();
>    >>> new_dict.update(other_dict)

The first problem is that (as you noticed), dict.copy returns a dict, not a type(self), and it copies directly from the underlying dict storage (if you subclass dict and override __fooitem__ but not copy).

I think you can ignore this problem. Note that set.__or__ always returns a set, not a type(self), so set subclasses don't preserve their type with operators. If this is good enough for set.__or__, and for dict.copy, it's probably good enough for dict.__add__, right? More generally, whenever you subclass a builtin type and call its (non-overridden) builtin methods, it generally ignores your subclassiness (e.g., call __add__ on a subclass of int, and you get back an int, not an instance of your subclass).

The bigger problem is that Mapping.copy doesn't exist, and of course Mapping.update doesn't exist, because that's a mutating method (and surely you wouldn't want to define a non-mutating method like + to only exist on MutableMapping?).

This one is harder to dismiss. If you want to define this on Mapping, you can't just hand-wave it and say "it's like calling copy and update if those methods existed and worked this way", because you're going to have to write the actual implementation. Which means you might as well define it as (possibly a simplified version of) whatever you end up writing, and skip the handwaving.

Fortunately, Set.__or__ already gives us a solution, but it's not quite as trivial as what you're suggesting: just construct a new dict from the key-value pairs of both dicts.

Unfortunately, that construction isn't guaranteed behavior for a Mapping, just as construction from an iterable of values isn't guaranteed for a Set; Set handles this by providing a _from_iterable* classmethod (that defaults to just calling the constructor, but can be overridden if that isn't appropriate) to handle the rare cases where it needs to be violated. So, all you need is this:

@classmethod
def _from_iterable(cls, it):
    return cls(it)

def __add__(self, rhs):
    return self._from_iterable(kv for m in (self, rhs) for kv in m.items())
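That sketch can be exercised on a minimal Mapping subclass (the class here is
illustrative; all it assumes is a constructor that accepts an iterable of
key-value pairs):

```python
from collections.abc import Mapping

class MyMap(Mapping):
    """Minimal Mapping whose constructor accepts (key, value) pairs."""

    def __init__(self, items=()):
        self._data = dict(items)

    def __getitem__(self, key):
        return self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)

    @classmethod
    def _from_iterable(cls, it):
        return cls(it)

    def __add__(self, rhs):
        # Rightmost value wins, because dict(it) keeps the last pair seen.
        return self._from_iterable(kv for m in (self, rhs) for kv in m.items())

m = MyMap({'a': 1, 'b': 2}) + MyMap({'b': 9, 'c': 3})
```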

This gives you an automatic and obvious answer to all the potentially bikesheddable questions:

1. The type is up to type(lhs) to decide, but will default to type(lhs) itself.

2. How duplicate keys are handled is up to type(lhs), but any dict-like type will keep the rightmost.

And it will just work with any reasonable Mapping, or even most unreasonable ones.** OrderedDict will keep the order, Counter could remove its __add__ override if you gave it a _from_iterable,*** etc.

This also means you probably don't need __radd__.

However, Set.__or__ had the advantage that before it was added, Set, and collection ABCs, didn't exist at all, set itself was pretty new, set.__or__ already existed, etc., so there were few if any "legacy" set classes that Set.__or__ would have to work with. Mapping.__add__ doesn't have that advantage. So, let's go over the cases.

But first, note that existing code using legacy mapping classes will be fine, because existing code will never use + on mappings. It's only if you want to write new code that uses +, with legacy mapping classes, that there's a potential problem.

1. Existing dict subclass: Will just work, although it will return a dict, not an instance of the subclass. (As mentioned at the top, I think this is fine.)

2. Existing collections.UserDict subclass: Will just work.

3. Existing collections.abc.Mapping subclass: If its constructor does the right thing with an iterable of key-value pairs (as most, but not all, such types do), it will work out of the box. If it doesn't accept such an iterable, you'll get a TypeError, which seems reasonable--you need to update your Mapping class, because it's not actually a valid Mapping for use in new code. The problem is that if it accepts such an iterable but does the wrong thing (see Counter), you may have a hard-to-debug problem on your hands. I think this is rare enough that you can just mention it in the PEP (and maybe a footnote or comment in the Added section in the docs) and dismiss it, but there's room for argument here.

4. None of the above, but follows the mapping protocol (including any code that still uses the old UserDict recipe instead of the stdlib version): Will raise a TypeError. Which is fine. The only potential problem is that someone might call collections.abc.Mapping.register(MyLegacyMapping), which would now be a lie, but I think that can be mentioned in the PEP and dismissed.

Finally, what about performance?

Unfortunately, my implementation is much slower than a copy/update solution. From a quick test of a class that delegates to a plain dict in the obvious way, it converges to about 3x slower at large N. You can test the same thing without the class:

def a1(d1, d2):
    d = d1.copy()
    d.update(d2)
    return d

def a2(d1, d2):
    return type(d1)(kv for m in (d1, d2) for kv in m.items())

d = {i:i for i in range(1000000)}
%timeit a1(d, d)
%timeit a2(d, d)

You could provide an optimized MutableMapping.__add__ to handle this (using copy.copy to deal with the fact that copy isn't guaranteed to be there****), but I don't think it's really necessary. After all, you could get about the same benefit for MutableSet and MutableSequence operators, but the ABCs don't bother.
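Such an optimized method might look roughly like this standalone sketch
(`mm_add` is a made-up name standing in for a hypothetical
`MutableMapping.__add__`):

```python
import copy

def mm_add(self, rhs):
    """Hypothetical optimized MutableMapping.__add__: copy, then update."""
    try:
        new = self.copy()        # fast path when the class provides copy()
    except AttributeError:
        new = copy.copy(self)    # generic shallow copy as a fallback
    new.update(rhs)
    return new
```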

But there is another advantage of a copy-and-update implementation: most legacy mappings are MutableMappings, and fewer of them are going to fail to be copyable than to be constructible from an iterable of key-value pairs, and those that fail are less likely to do so silently. So, even though in theory it's no better, in practice it may mean fewer problems.

Maybe it would be simpler to add copy to the Mapping API, in which case MutableMapping.__add__ could just rely on that. But that (re-)raises the problem that dict.copy() returns a dict, not a type(self), so now it's violating the Mapping ABC. And I'm not sure you want to change that. The dict.copy method (and its C API equivalent PyDict_Copy) is used all over the place by the internals of Python (e.g., by type, to deal with attribute dictionaries for classes with the default behavior). And the fact that copy isn't documented to return type(self), and isn't defined on Mapping, implies that it's there really more for use cases that need it to be fast, rather than because it's considered inherent to the type.

So, I think you're better off just leaving out the copy/update idea.

Meanwhile, one quick answer:

>Adding mappings of different types
>----------------------------------
>
>
>Here is what currently happens in a few cases in Python 3.4, given:
>
>
>    >>> class A(dict): pass
>    >>> class B(dict): pass
>    >>> foo = A({'a': 1, 'b': 'xyz'})
>    >>> bar = B({'a': 5.0, 'c': object()})
>
>
>Currently (this surprised me actually... I guess it gets the parent class?):
>
>
>    >>> baz = foo.copy()
>    >>> type(baz)
>    dict

As mentioned above, copy isn't part of the Mapping or MutableMapping API; it seems to be provided by dict because it's a useful optimization in many cases, that Python itself uses.

If you look at the CPython implementation in Objects/dictobject.c, dict.copy just calls the C API function PyDict_Copy, which calls PyDict_New then PyDict_Merge. This is basically the same as calling d = dict() then d.update(self), but PyDict_Merge can be significantly faster because it can go right to the raw dict storage for self, and can fast-path the case where rhs is an actual dict as well.

---

* Note that Set._from_iterable is only documented in a footnote in the collections.abc docs, and the same is probably fine for Mapping._from_iterable.

** The craziest type I can think of is an AbsoluteOrderedDict used for caches that may have to be merged together and still preserve insertion order. I actually built one of these, by keeping a tree of TimestampedTuple(key, value) items, sorted by the tuple's timestamp. I'd have to dig up the code, but I'm pretty sure it would just work with this __add__ implementation.

*** Of course Counter.__add__ might be faster than the generic implementation would be, so you might want to keep it anyway. But the fact that they have the same semantics seems like a point in favor of this definition of Mapping.__add__.

**** Or maybe try self.copy() and then call copy.copy(self) on AttributeError. After all, self.copy() could be significantly faster (e.g., see UserDict.copy).

From tjreedy at udel.edu  Fri Feb 13 01:01:15 2015
From: tjreedy at udel.edu (Terry Reedy)
Date: Thu, 12 Feb 2015 19:01:15 -0500
Subject: [Python-ideas] Gettext syntax (was Re: Allow "assigning" to ...)
In-Reply-To: <mbh6u0$fr7$1@ger.gmane.org>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <20150211164615.1fe33cab@fsol>
 <CADiSq7cTrFzhy30zjn0Wz6wacqSQH_1iyw==UJomCL6b8xr9_Q@mail.gmail.com>
 <mbh6u0$fr7$1@ger.gmane.org>
Message-ID: <mbjesi$lb5$1@ger.gmane.org>

On 2/11/2015 10:33 PM, Terry Reedy wrote:

> A possible solution would be to give 3.x str a __pos__ or __neg__
> method, so +'python' or -'python' would mean the translation of 'python'
> (the snake).  This would be even *less* intrusive and easier to write
> than _('python').
>
> Since () has equal precedence with . while + and - are lower, we would
> have the following equivalences for mixed expressions with '.':
>
> _(snake.format()) == -snake.format()  # format, then translate
>                 # or -(snake.lower())
> _(snake).format() == (-snake).format()  # translate, then format

The core idea of this proposal is to add a single prefix symbol as an 
alternative to _(...).  Summarizing the alternate proposals:

Zero Piraeus suggests that ~ (.__invert__ (normally bitwise)) would be 
better.  I agree.

M.-A. Lemburg points out that neither of
   label = +'I like' + +'Python'
   label = -'I like' + -'Python'
'reads well'.  I do not know how common such concatenations are,
but I agree that they do not look good.

He then suggests a new string prefix, such as i'Internationalized 
String'.  ('_' could also be used as a prefix.)  The compiler could map 
this to, for instance, sys.i18n('Internationalized String'), where initially
   sys.i18n = lambda s: s
The above code would then become
   label = i'I like' + i'Python'  # or
   label = _'I like' + _'Python'
I like this even better except that it raises the issue of interaction 
with the existing r and b (and u) prefixes.

Ron Adam suggests a str._ property, leading to
   label = 'I like'._ + 'Python'._
I prefer 1 char to 2.  On the other hand, a 'suffix' makes a certain 
sense for indicating translation.  Current string prefixes affect the 
translation of literal to object. To properly process a string literal 
containing '\'s, for instance, both compilers and people need to know 
whether it is to be interpreted 'raw' or 'cooked'. Both also need to 
know whether to read it as text or bytes.  Translation is an 
afterthought, currently performed later.
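As a rough illustration of the single-prefix idea (using ~ as Zero 
Piraeus suggested; the S wrapper class is hypothetical and not part of 
any actual proposal):

```python
import gettext

# NullTranslations maps every message to itself, which is enough for a demo.
_catalog = gettext.NullTranslations()

class S(str):
    """Hypothetical str subclass: unary ~ looks the string up in a catalog."""
    def __invert__(self):
        return _catalog.gettext(self)

# Translate, then format -- note the parens, since ~ binds looser than a call:
label = (~S('I like')) + ' ' + (~S('Python'))
assert label == 'I like Python'
```

With a real translation catalog installed in place of NullTranslations, 
~S('python') would return the translated string instead of the original.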


Barry Warsaw pointed out that existing tools can go through a file, written 
in whatever language, and extract literal strings inside _(...).  For 
this, distinctive awkwardness is an advantage.  A new syntax would work 
better if there were a script to translate back to the old.

-- 
Terry Jan Reedy


From abarnert at yahoo.com  Fri Feb 13 01:06:44 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Fri, 13 Feb 2015 00:06:44 +0000 (UTC)
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CADQLQrUwRMdrLTJg7nEgtdDJ9wZzi4UjpM_9Uu6WxgeF-4K-8w@mail.gmail.com>
References: <CADQLQrUwRMdrLTJg7nEgtdDJ9wZzi4UjpM_9Uu6WxgeF-4K-8w@mail.gmail.com>
Message-ID: <1586802961.2472584.1423786004303.JavaMail.yahoo@mail.yahoo.com>

On Thursday, February 12, 2015 1:33 PM, Nathan Schneider <neatnate at gmail.com> wrote:


> A reminder about PEP 448: Additional Unpacking Generalizations
> (https://www.python.org/dev/peps/pep-0448/), which claims that "it
> vastly simplifies types of 'addition' such as combining
> dictionaries, and does so in an unambiguous and well-defined way".
> The spelling for combining two dicts would be: {**d1, **d2}


I like that this makes all of the bikeshedding questions obvious: it's a dict display, so the result is clearly a dict rather than a type(d1), it's clearly going to follow the same ordering rules for duplicate keys as a dict display (d2 beats d1), and so on.

And it's nice that it's just a special case of a more general improvement.

However, it is more verbose (and more full of symbols), and it doesn't give you an obvious way to, say, merge two OrderedDicts into an OrderedDict.

Also, code that adds non-literal dicts together can be backported just by adding a trivial dict subclass, but code that uses PEP 448 can't be.
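A quick sketch (Python 3.5+, where PEP 448 landed; names are 
illustrative) of the dict-display spelling and the trivial subclass 
backport mentioned above:

```python
d1 = {'a': 1, 'b': 2}
d2 = {'b': 3, 'c': 4}

# In a dict display the rightmost occurrence of a duplicate key wins:
merged = {**d1, **d2}
assert merged == {'a': 1, 'b': 3, 'c': 4}

# The backport idea: a trivial subclass gives "+" on older versions.
class AddableDict(dict):
    def __add__(self, other):
        result = AddableDict(self)
        result.update(other)      # right operand wins, matching {**d1, **d2}
        return result

assert AddableDict(d1) + d2 == merged
```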

From ericsnowcurrently at gmail.com  Fri Feb 13 01:19:10 2015
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Thu, 12 Feb 2015 17:19:10 -0700
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <6AD6F919-634F-44C4-AAA3-928A96608DF2@stufft.io>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <253D2851-CEE1-46F2-8F9B-D0D86C8BDCC4@stufft.io>
 <CALFfu7BtdMAKN1UcqkFpGw4A_nKzhKPZ_UtXURSg6yqueHC0NA@mail.gmail.com>
 <6AD6F919-634F-44C4-AAA3-928A96608DF2@stufft.io>
Message-ID: <CALFfu7BCAJMW85=VMZ13WnW342Voz3OOjAHd9qmpfLHPKaZ3Qg@mail.gmail.com>

On Thu, Feb 12, 2015 at 4:29 PM, Donald Stufft <donald at stufft.io> wrote:
>> On Feb 12, 2015, at 6:28 PM, Eric Snow <ericsnowcurrently at gmail.com> wrote:
>> copy-and-update:
>>
>>  dict(old_dict, **other_dict)
>
> Only works if other_dict's keys are all valid keyword arguments and AFAIK is considered an implementation detail of CPython.

Fair point.  It seems to me that there is definitely enough interest
in a builtin way of doing this.  My vote is for a dict classmethod.

-eric
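Donald's caveat is easy to demonstrate: the dict(old, **other) idiom 
only accepts string keys in the unpacked dict (and accepting arbitrary 
strings rather than just identifiers is, as noted, CPython-specific):

```python
old_dict = {'a': 1}

# Works when the other dict's keys are all strings:
assert dict(old_dict, **{'b': 2}) == {'a': 1, 'b': 2}

# ...but fails on Python 3 when a key is not a string:
try:
    dict(old_dict, **{1: 2})
    raised = False
except TypeError:
    raised = True
assert raised
```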

From ethan at stoneleaf.us  Fri Feb 13 01:21:10 2015
From: ethan at stoneleaf.us (Ethan Furman)
Date: Thu, 12 Feb 2015 16:21:10 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CALFfu7BCAJMW85=VMZ13WnW342Voz3OOjAHd9qmpfLHPKaZ3Qg@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <253D2851-CEE1-46F2-8F9B-D0D86C8BDCC4@stufft.io>
 <CALFfu7BtdMAKN1UcqkFpGw4A_nKzhKPZ_UtXURSg6yqueHC0NA@mail.gmail.com>
 <6AD6F919-634F-44C4-AAA3-928A96608DF2@stufft.io>
 <CALFfu7BCAJMW85=VMZ13WnW342Voz3OOjAHd9qmpfLHPKaZ3Qg@mail.gmail.com>
Message-ID: <54DD4376.40807@stoneleaf.us>

On 02/12/2015 04:19 PM, Eric Snow wrote:
> On Thu, Feb 12, 2015 at 4:29 PM, Donald Stufft <donald at stufft.io> wrote:
>>> On Feb 12, 2015, at 6:28 PM, Eric Snow <ericsnowcurrently at gmail.com> wrote:
>>> copy-and-update:
>>>
>>>  dict(old_dict, **other_dict)
>>
>> Only works if other_dict's keys are all valid keyword arguments and AFAIK is considered an implementation detail of CPython.
> 
> Fair point.  It seems to me that there is definitely enough interest
> in a builtin way of doing this.  My vote is for a dict classmethod.

Like __add__, for example?  ;)

--
~Ethan~

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: OpenPGP digital signature
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150212/62d81a99/attachment.sig>

From ericsnowcurrently at gmail.com  Fri Feb 13 01:26:04 2015
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Thu, 12 Feb 2015 17:26:04 -0700
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CAOvn4qgRm1H+CCaY=H_FsGSgMAWq1E7U_KXbAWvzi2YfF45Y1A@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <87twyrm2zx.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CAOvn4qgRm1H+CCaY=H_FsGSgMAWq1E7U_KXbAWvzi2YfF45Y1A@mail.gmail.com>
Message-ID: <CALFfu7B_JnZTBhW5Ym8ZZP4hbp7JBJspN-ZOj9EMU6MmhToE8A@mail.gmail.com>

On Thu, Feb 12, 2015 at 11:21 AM, Thomas Kluyver <thomas at kluyver.me.uk> wrote:
> Or perhaps even a classmethod:
>
> dict.merged(a, b, c)

A dict factory classmethod like this is the best proposal I've seen
thus far. *  It would be nice if the spelling were more succinct
(that's where syntax is helpful).  Imagine:

  some_func(**dict.merged(a, b, c))

-eric

* I'd go for the PEP 448 multiple kwargs-unpacking clauses, but as
already noted, the keys are limited to valid identifiers. Hmm, perhaps
that could be changed just for dict()...
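dict has no "merged" classmethod, so the proposal can only be modelled 
on a hypothetical subclass; the sketch below assumes the semantics 
people seem to want (later mappings, then keywords, win):

```python
class MergeableDict(dict):
    @classmethod
    def merged(cls, *mappings, **kwargs):
        result = cls()
        for mapping in mappings:
            result.update(mapping)   # later mappings win on duplicate keys
        result.update(kwargs)
        return result

a, b, c = {'x': 1}, {'x': 2, 'y': 3}, {'z': 4}
assert MergeableDict.merged(a, b, c) == {'x': 2, 'y': 3, 'z': 4}

def some_func(**kwargs):
    return kwargs

assert some_func(**MergeableDict.merged(a, b, c)) == {'x': 2, 'y': 3, 'z': 4}
```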

From ethan at stoneleaf.us  Fri Feb 13 01:30:35 2015
From: ethan at stoneleaf.us (Ethan Furman)
Date: Thu, 12 Feb 2015 16:30:35 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CALFfu7B_JnZTBhW5Ym8ZZP4hbp7JBJspN-ZOj9EMU6MmhToE8A@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <87twyrm2zx.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CAOvn4qgRm1H+CCaY=H_FsGSgMAWq1E7U_KXbAWvzi2YfF45Y1A@mail.gmail.com>
 <CALFfu7B_JnZTBhW5Ym8ZZP4hbp7JBJspN-ZOj9EMU6MmhToE8A@mail.gmail.com>
Message-ID: <54DD45AB.8050400@stoneleaf.us>

On 02/12/2015 04:26 PM, Eric Snow wrote:
> On Thu, Feb 12, 2015 at 11:21 AM, Thomas Kluyver <thomas at kluyver.me.uk> wrote:
>> Or perhaps even a classmethod:
>>
>> dict.merged(a, b, c)
> 
> A dict factory classmethod like this is the best proposal I've seen
> thus far. *  It would be nice if the spelling were more succinct
> (that's where syntax is helpful).  Imagine:
> 
>   some_func(**dict.merged(a, b, c))

That looks an awful lot like

    some_func(**chainmap(a, b, c))

--
~Ethan~

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: OpenPGP digital signature
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150212/9562ee2b/attachment.sig>

From neatnate at gmail.com  Fri Feb 13 01:44:17 2015
From: neatnate at gmail.com (Nathan Schneider)
Date: Thu, 12 Feb 2015 19:44:17 -0500
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <54DD2186.9080200@canterbury.ac.nz>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <54DCDB8A.9030602@oddbird.net>
 <20150212184332.GG2498@ando.pearwood.info>
 <CACac1F-yTQGec_kcyyObTtbrnTyfTMQkDLC2676wQJWR1CpM3w@mail.gmail.com>
 <868B43AF-2144-47DD-A2C2-45C6920A714A@stufft.io>
 <54DD2186.9080200@canterbury.ac.nz>
Message-ID: <CADQLQrUx_vEHHWK1DrFVVS-HaHLm9h0QEJohpi1=_aRmw5JLqA@mail.gmail.com>

On Thu, Feb 12, 2015 at 4:56 PM, Greg Ewing <greg.ewing at canterbury.ac.nz>
wrote:

> There's an obvious way out of all this. We add
> *two* new operators:
>
>    d1 >> d2  #  left operand wins
>    d1 << d2  #  right operand wins
>
> And if we really want to do it properly:
>
>    d1 ^ d2   #  raise exception on duplicate keys
>
>
+1 for having a standard solution to this problem. I have encountered it
enough times to believe it is worth solving.

If the solution is to be with operators, then +1 for << and >> as the least
ambiguous. The only downside in my mind is that the need for combining
dicts will be fairly rare, so there might not be enough justification to
create new idioms for << and >>.

Because it is a rare enough use case, though, a non-operator method might
be the way to go, even though it is less algebraically pure.

I like the idea of an operator or method that raises an exception on
duplicate keys, but am not sure
(a) whether such an exception should be raised if the two keys are mapped
to equal values, or only if the two dicts are "in conflict" for some common
key, and
(b) whether, by analogy to set, ^ suggests that duplicate keys should be
*omitted* from the result rather than triggering an exception.

Nathan
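A hypothetical sketch of Greg Ewing's three proposed operators on a 
dict subclass (OpDict is illustrative, not a real type; the ^ variant 
here raises on any duplicate key, regardless of whether the values are 
equal):

```python
class OpDict(dict):
    def __rshift__(self, other):       # d1 >> d2: left operand wins
        result = OpDict(other)
        result.update(self)
        return result

    def __lshift__(self, other):       # d1 << d2: right operand wins
        result = OpDict(self)
        result.update(other)
        return result

    def __xor__(self, other):          # d1 ^ d2: duplicate keys are an error
        common = self.keys() & other.keys()
        if common:
            raise KeyError('duplicate keys: %r' % sorted(common))
        result = OpDict(self)
        result.update(other)
        return result

d1, d2 = OpDict(a=1), {'a': 2, 'b': 3}
assert (d1 >> d2) == {'a': 1, 'b': 3}  # left wins
assert (d1 << d2) == {'a': 2, 'b': 3}  # right wins
```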
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150212/5bd8852b/attachment.html>

From python at mrabarnett.plus.com  Fri Feb 13 02:05:47 2015
From: python at mrabarnett.plus.com (MRAB)
Date: Fri, 13 Feb 2015 01:05:47 +0000
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <54DD2186.9080200@canterbury.ac.nz>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info> <54DCDB8A.9030602@oddbird.net>
 <20150212184332.GG2498@ando.pearwood.info>
 <CACac1F-yTQGec_kcyyObTtbrnTyfTMQkDLC2676wQJWR1CpM3w@mail.gmail.com>
 <868B43AF-2144-47DD-A2C2-45C6920A714A@stufft.io>
 <54DD2186.9080200@canterbury.ac.nz>
Message-ID: <54DD4DEB.2060304@mrabarnett.plus.com>

On 2015-02-12 21:56, Greg Ewing wrote:
> There's an obvious way out of all this. We add
> *two* new operators:
>
>      d1 >> d2  #  left operand wins
>      d1 << d2  #  right operand wins
>
Next, people will be suggesting an operator that subtracts the LHS
from the RHS!

> And if we really want to do it properly:
>
>      d1 ^ d2   #  raise exception on duplicate keys
>


From python at mrabarnett.plus.com  Fri Feb 13 02:24:43 2015
From: python at mrabarnett.plus.com (MRAB)
Date: Fri, 13 Feb 2015 01:24:43 +0000
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <54DD45AB.8050400@stoneleaf.us>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <87twyrm2zx.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CAOvn4qgRm1H+CCaY=H_FsGSgMAWq1E7U_KXbAWvzi2YfF45Y1A@mail.gmail.com>
 <CALFfu7B_JnZTBhW5Ym8ZZP4hbp7JBJspN-ZOj9EMU6MmhToE8A@mail.gmail.com>
 <54DD45AB.8050400@stoneleaf.us>
Message-ID: <54DD525B.9050009@mrabarnett.plus.com>

On 2015-02-13 00:30, Ethan Furman wrote:
> On 02/12/2015 04:26 PM, Eric Snow wrote:
>> On Thu, Feb 12, 2015 at 11:21 AM, Thomas Kluyver <thomas at kluyver.me.uk> wrote:
>>> Or perhaps even a classmethod:
>>>
>>> dict.merged(a, b, c)
>>
>> A dict factory classmethod like this is the best proposal I've seen
>> thus far. *  It would be nice if the spelling were more succinct
>> (that's where syntax is helpful).  Imagine:
>>
>>   some_func(**dict.merged(a, b, c))
>
> That looks an awful lot like
>
>      some_func(**chainmap(a, b, c))
>
How about making d1 | d2 return an iterator?

You could then merge dicts with no intermediate dict:

     merged = dict(a | b | c)

     some_func(**a | b | c)
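A rough sketch of the iterator idea (IterDict is hypothetical; note 
that chaining a | b | c, or unpacking with **, would need more 
machinery than this, since the result of | below is a plain generator 
rather than a mapping):

```python
class IterDict(dict):
    def __or__(self, other):
        # Yield pairs lazily; dict() keeps the last value it sees for a
        # duplicate key, so the right operand wins.
        yield from self.items()
        yield from other.items()

a = IterDict(x=1)
b = {'x': 2, 'y': 3}
merged = dict(a | b)   # "|" itself builds no intermediate dict
assert merged == {'x': 2, 'y': 3}
```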


From tjreedy at udel.edu  Fri Feb 13 02:39:33 2015
From: tjreedy at udel.edu (Terry Reedy)
Date: Thu, 12 Feb 2015 20:39:33 -0500
Subject: [Python-ideas] Gettext syntax (was Re: Allow "assigning" to ...)
In-Reply-To: <20150212110929.1cc16d6c@anarchist.wooz.org>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <20150211164615.1fe33cab@fsol>
 <CADiSq7cTrFzhy30zjn0Wz6wacqSQH_1iyw==UJomCL6b8xr9_Q@mail.gmail.com>
 <mbh6u0$fr7$1@ger.gmane.org> <20150212110929.1cc16d6c@anarchist.wooz.org>
Message-ID: <mbjkks$cec$1@ger.gmane.org>

On 2/12/2015 11:09 AM, Barry Warsaw wrote:

> [alternate syntax] is really unnecessary.
> These days, I believe standard GNU xgettext can
> parse Python code and can be told to use an alternative marker function.

My current interest in gettext is due to (incomplete) proposals (such as 
#17776) to internationalize parts of Idle.  There are no instances of 
'_,' in idlelib, and probably no other use of '_' as an identifier, so 
injection of '_' into builtins is not my concern for batch (production) 
mode running.

But I don't like turning code like this (idlelib/Binding.py)

('file', [
    ('_New File', '<<open-new-window>>'),
    ('_Open...', '<<open-window-from-file>>'),
    ('Open _Module...', '<<open-module>>'),
    ('Class _Browser', '<<open-class-browser>>'),
    ('_Path Browser', '<<open-path-browser>>'),
    ... and so on for another 59 lines

which is loaded with tuple parens and _Hotkey markers, into the 
following, where () and _ do double duty and are doubled in number.

  (_('file'), [
    (_('_New File'), '<<open-new-window>>'),
    (_('_Open...'), '<<open-window-from-file>>'),
    (_('Open _Module...'), '<<open-module>>'),
    (_('Class _Browser'), '<<open-class-browser>>'),
    (_('_Path Browser'), '<<open-path-browser>>'),
    ... and so on for another 59 lines

> Tools/i18n/pygettext.py certainly can.

Are there any tests of this script, especially for 3.x, other than the 
two examples at the end?

> I might as well use this opportunity to plug my flufl.i18n package:
> http://pythonhosted.org/flufl.i18n/
> It provides a nice, higher level API for doing i18n.

The short "Single language context" doc section is certainly a lot 
simpler and clearer than the gettext doc.  But I cannot use your package 
for Idle.  I might, however, use your idea of using rot13 for testing.

I18n of existing code would be easier if Tools/i18n had a stringmark.py 
script that would either mark all strings with _() (or alternative) or 
interactively let one say yes or no to each.

-- 
Terry Jan Reedy


From ethan at stoneleaf.us  Fri Feb 13 02:40:23 2015
From: ethan at stoneleaf.us (Ethan Furman)
Date: Thu, 12 Feb 2015 17:40:23 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <54DD45AB.8050400@stoneleaf.us>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <87twyrm2zx.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CAOvn4qgRm1H+CCaY=H_FsGSgMAWq1E7U_KXbAWvzi2YfF45Y1A@mail.gmail.com>
 <CALFfu7B_JnZTBhW5Ym8ZZP4hbp7JBJspN-ZOj9EMU6MmhToE8A@mail.gmail.com>
 <54DD45AB.8050400@stoneleaf.us>
Message-ID: <54DD5607.7080901@stoneleaf.us>

On 02/12/2015 04:30 PM, Ethan Furman wrote:
> On 02/12/2015 04:26 PM, Eric Snow wrote:
>> On Thu, Feb 12, 2015 at 11:21 AM, Thomas Kluyver <thomas at kluyver.me.uk> wrote:
>>> Or perhaps even a classmethod:
>>>
>>> dict.merged(a, b, c)
>>
>> A dict factory classmethod like this is the best proposal I've seen
>> thus far. *  It would be nice if the spelling were more succinct
>> (that's where syntax is helpful).  Imagine:
>>
>>   some_func(**dict.merged(a, b, c))
> 
> That looks an awful lot like
> 
>     some_func(**chainmap(a, b, c))

or maybe that should be

     some_func(**chainmap(c, b, a))

?

Whatever we choose, if we choose anything, should keep the rightmost wins behavior.

--
~Ethan~

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: OpenPGP digital signature
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150212/0ac69225/attachment.sig>

From steve at pearwood.info  Fri Feb 13 02:48:05 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 13 Feb 2015 12:48:05 +1100
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CAN3CYHxsSxuUX+WXkc3jtdSugEFEFm_H39mgF73ddF-U9Ktexw@mail.gmail.com>
References: <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info> <54DCDB8A.9030602@oddbird.net>
 <20150212184332.GG2498@ando.pearwood.info>
 <CACac1F-yTQGec_kcyyObTtbrnTyfTMQkDLC2676wQJWR1CpM3w@mail.gmail.com>
 <868B43AF-2144-47DD-A2C2-45C6920A714A@stufft.io>
 <CACac1F_9v9tF-saVD_MHsrGfWRx6fyxxc+o2tff5HXRob5hxKQ@mail.gmail.com>
 <CAN3CYHxsSxuUX+WXkc3jtdSugEFEFm_H39mgF73ddF-U9Ktexw@mail.gmail.com>
Message-ID: <20150213014805.GH2498@ando.pearwood.info>

On Fri, Feb 13, 2015 at 08:30:09AM +1100, Alexander Heger wrote:
> I had started a previous thread on this, asking why there was no
> addition defined for dictionaries, along the line of what Peter argued
> a novice would naively expect.

A programmer is a novice for about 1% of their lifespan as a programmer. 
Python should be novice-friendly. It shouldn't be designed around what 
novices expect.

I wish that the people blithely declaring that novices expect this and 
novices expect that would actually spend some time on the tutor mailing 
list. I do spend time on the tutor mailing list, and what I have found 
is this:

- novices don't care about dicts much at all
- and updating them even less
- novices rarely try things in the interactive interpreter to see
  what they do (one of the things which separates novice from junior 
  is the confidence to just do it and see what happens)
- on the rare occasion they search for information, they find it very 
  hard to search for operators (that applies to everyone, I think)
- apart from the mathematical use of operators, which they learn in 
  school, they don't tend to think of using operators much.

I don't think that "novices would naively expect this" is correct, and 
even if it were, I don't think the language should be designed around 
what novices naively expect.


> ... some deja-vu on the discussion ...
> yes, the arguments were then as now
> 
> 1) whether to use + or |, other operators that suggest asymmetric
> (non-commutative) operation were suggested as well
> - in my opinion dictionaries are not sets, so the use of | to indicate
> set-like behaviour brings no real advantage ... + seems the most
> natural for a novice and is clear enough

I was a novice once, and I was surprised that Python used + for 
concatenation. Why would people expect + for something that isn't 
addition? I don't know where this idea comes from that it is natural and 
obvious, I've had to teach beginners that they can "add" strings or 
lists to concatenate them.

As far as I was, and still am, concerned, & is the obvious and most 
natural operator for concatenation. [1, 2]+[3, 4] should return [4, 6], 
and sum(bunch of lists) should be a meaningless operation, like 
sum(bunch of HTTP servers). Or sum(bunch of dicts).


> 2) obviously, there were the same questions whether in case of key
> collisions the elements should be added - i.e., apply the + operator
> to the elements; 

No, that's not the argument. The argument is not that "we used the + 
operator to merge two dicts, so merging should + the values". The 
argument is that in the event of a duplicate key, adding the values 
is sometimes a common and useful thing to do.

Take Chris A's shopping list analogy. You need a loaf of bread, so you 
put it on your shopping list (actually a dict): {'bread': 1}. I need a 
loaf of bread too, so I do the same. Merging your dict and my dict 
better have two loaves of bread, otherwise one of us is going to miss 
out.
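The shopping-list behaviour is exactly what collections.Counter 
already provides: its + operator merges and adds the values.

```python
from collections import Counter

yours = Counter({'bread': 1})
mine = Counter({'bread': 1, 'milk': 2})

# Duplicate keys get their values added, so nobody misses out on bread:
assert yours + mine == Counter({'bread': 2, 'milk': 2})
```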


-- 
Steve

From python at mrabarnett.plus.com  Fri Feb 13 02:55:50 2015
From: python at mrabarnett.plus.com (MRAB)
Date: Fri, 13 Feb 2015 01:55:50 +0000
Subject: [Python-ideas] Gettext syntax (was Re: Allow "assigning" to ...)
In-Reply-To: <mbjesi$lb5$1@ger.gmane.org>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <20150211164615.1fe33cab@fsol>
 <CADiSq7cTrFzhy30zjn0Wz6wacqSQH_1iyw==UJomCL6b8xr9_Q@mail.gmail.com>
 <mbh6u0$fr7$1@ger.gmane.org> <mbjesi$lb5$1@ger.gmane.org>
Message-ID: <54DD59A6.60803@mrabarnett.plus.com>

On 2015-02-13 00:01, Terry Reedy wrote:
> On 2/11/2015 10:33 PM, Terry Reedy wrote:
>
>> A possible solution would be to give 3.x str a __pos__ or __neg__
>> method, so +'python' or -'python' would mean the translation of 'python'
>> (the snake).  This would be even *less* intrusive and easier to write
>> than _('python').
>>
>> Since () has equal precedence with . while + and - are lower, we would
>> have the following equivalences for mixed expressions with '.':
>>
>> _(snake.format()) == -snake.format()  # format, then translate
>>                 # or -(snake.lower())
>> _(snake).format() == (-snake).format()  # translate, then format
>
> The core idea of this proposal is to add a single prefix symbol as an
> alternative to _(...).  Summarizing the alternate proposals:
>
> Zero Piraeus suggests that ~ (.__invert__ (normally bitwise)) would be
> better.  I agree.
>
> M.-A. Lemburg points out that neither of
>     label = +'I like' + +'Python'
>     label = -'I like' + -'Python'
> 'reads well'.  I do not know how common such concatenations are,
> but I agree that they do not look good.
>
> He then suggests a new string prefix, such as i'Internationalized
> String'.  ('_' could also be used as a prefix.)  The compiler could map
> this to, for instance, sys.i18n('Internationalized String'), where initially
>     sys.i18n = lambda s: s
> The above code would then become
>     label = i'I like' + i'Python'  # or
>     label = _'I like' + _'Python'
> I like this even better except that it raises the issue of interaction
> with the existing r and b (and u) prefixes.
>
I think we can forget about any interaction with the b prefix. If
you're internationalising, you're using Unicode (aren't you?).

And as for the u prefix, it's really only used for compatibility with
Python 2, which won't have the i prefix anyway.

That leaves us with just i and ir.

> Ron Adam suggests a str._ property, leading to
>     label = 'I like'._ + 'Python'._
> I prefer 1 char to 2.  On the other hand, a 'suffix' makes a certain
> sense for indicating translation.  Current string prefixes affect the
> translation of literal to object. To properly process a string literal
> containing '\'s, for instance, both compilers and people need to know
> whether it is to be interpreted 'raw' or 'cooked'. Both also need to
> know whether to read it as text or bytes.  Translation is an
> afterthought, currently performed later.
>
>
> Barry Warsaw pointed out that existing tools can go through a file, written
> in whatever language, and extract literal strings inside _(...).  For
> this, distinctive awkwardness is an advantage.  A new syntax would work
> better if there were a script to translate back to the old.
>


From ericsnowcurrently at gmail.com  Fri Feb 13 03:18:07 2015
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Thu, 12 Feb 2015 19:18:07 -0700
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <54DD5607.7080901@stoneleaf.us>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <87twyrm2zx.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CAOvn4qgRm1H+CCaY=H_FsGSgMAWq1E7U_KXbAWvzi2YfF45Y1A@mail.gmail.com>
 <CALFfu7B_JnZTBhW5Ym8ZZP4hbp7JBJspN-ZOj9EMU6MmhToE8A@mail.gmail.com>
 <54DD45AB.8050400@stoneleaf.us> <54DD5607.7080901@stoneleaf.us>
Message-ID: <CALFfu7AG2Kow+VtigHW90fkMRurMQfhbjE6JfvLAKF6a=GHdYg@mail.gmail.com>

On Feb 12, 2015 6:41 PM, "Ethan Furman" <ethan at stoneleaf.us> wrote:
>
> On 02/12/2015 04:30 PM, Ethan Furman wrote:
> > On 02/12/2015 04:26 PM, Eric Snow wrote:
> >> On Thu, Feb 12, 2015 at 11:21 AM, Thomas Kluyver <thomas at kluyver.me.uk>
wrote:
> >>> Or perhaps even a classmethod:
> >>>
> >>> dict.merged(a, b, c)
> >>
> >> A dict factory classmethod like this is the best proposal I've seen
> >> thus far. *  It would be nice if the spelling were more succinct
> >> (that's where syntax is helpful).  Imagine:
> >>
> >>   some_func(**dict.merged(a, b, c))
> >
> > That looks an awful lot like
> >
> >     some_func(**chainmap(a, b, c))
>
> or maybe that should be
>
>      some_func(**chainmap(c, b, a))
>
> ?

Right.  With chainmap leftmost wins.

-eric

>
> Whatever we choose, if we choose anything, should keep the rightmost wins
behavior.
>
> --
> ~Ethan~
>
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150212/4582e228/attachment.html>
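Eric's point about ordering can be checked quickly with the real 
collections.ChainMap (available since Python 3.3):

```python
from collections import ChainMap

a, b = {'k': 'a'}, {'k': 'b', 'other': 1}

# ChainMap searches its maps left to right, so the leftmost wins:
assert ChainMap(a, b)['k'] == 'a'

# To keep "rightmost wins" (the behaviour proposed for d1 + d2),
# pass the mappings in reverse order:
assert dict(ChainMap(b, a)) == {'k': 'b', 'other': 1}
```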

From ericsnowcurrently at gmail.com  Fri Feb 13 03:21:56 2015
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Thu, 12 Feb 2015 19:21:56 -0700
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <54DD525B.9050009@mrabarnett.plus.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <87twyrm2zx.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CAOvn4qgRm1H+CCaY=H_FsGSgMAWq1E7U_KXbAWvzi2YfF45Y1A@mail.gmail.com>
 <CALFfu7B_JnZTBhW5Ym8ZZP4hbp7JBJspN-ZOj9EMU6MmhToE8A@mail.gmail.com>
 <54DD45AB.8050400@stoneleaf.us>
 <54DD525B.9050009@mrabarnett.plus.com>
Message-ID: <CALFfu7DDQPu3vuKVAkcKWh0khUGoFiCVYsGa=FRHwSrKKss6JA@mail.gmail.com>

On Feb 12, 2015 6:25 PM, "MRAB" <python at mrabarnett.plus.com> wrote:
> How about making d1 | d2 return an iterator?
>
> You could then merge dicts with no intermediate dict:
>
>     merged = dict(a | b | c)
>
>     some_func(**a | b | c)
>

I like that!

-eric
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150212/e9002746/attachment.html>

From steve at pearwood.info  Fri Feb 13 03:24:05 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 13 Feb 2015 13:24:05 +1100
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <1586802961.2472584.1423786004303.JavaMail.yahoo@mail.yahoo.com>
References: <CADQLQrUwRMdrLTJg7nEgtdDJ9wZzi4UjpM_9Uu6WxgeF-4K-8w@mail.gmail.com>
 <1586802961.2472584.1423786004303.JavaMail.yahoo@mail.yahoo.com>
Message-ID: <20150213022405.GI2498@ando.pearwood.info>

On Fri, Feb 13, 2015 at 12:06:44AM +0000, Andrew Barnert wrote:
> On Thursday, February 12, 2015 1:33 PM, Nathan Schneider <neatnate at gmail.com> wrote:
> 
> 
> > A reminder about PEP 448: Additional Unpacking Generalizations 
> > (https://www.python.org/dev/peps/pep-0448/), which claims that "it 
> > vastly simplifies types of 'addition' such as combining
> > dictionaries, and does so in an unambiguous and well-defined way". 
> > The spelling for combining two dicts would be: {**d1, **d2}

Very nice! That should be extensible to multiple arguments without the 
pathologically slow performance of repeated addition:

{**d1, **d2, **d3, **d4}

and you can select which dict you want to win in the event of clashes 
by changing the order.
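Assuming PEP 448 lands as specified (it targets 3.5), the clash behaviour is easy to check at the interpreter:

```python
d1 = {'eggs': 2, 'spam': 1}
d2 = {'eggs': 12}

# Later unpackings win on duplicate keys, just like later keys in a
# dict display.
merged = {**d1, **d2}

# Reversing the order reverses which operand wins.
reversed_merge = {**d2, **d1}
```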


> I like that this makes all of the bikeshedding questions obvious: it's 
> a dict display, so the result is clearly a dict rather than a 
> type(d1), it's clearly going to follow the same ordering rules for 
> duplicate keys as a dict display (d2 beats d1), and so on.
> 
> And it's nice that it's just a special case of a more general 
> improvement.
> 
> However, it is more verbose (and more full of symbols), and it doesn't 
> give you an obvious way to, say, merge two OrderedDicts into an 
> OrderedDict.

Here's a more general proposal. The dict constructor currently takes 
either a single mapping or a single iterable of (key,value) pairs. The 
update method takes a single mapping, or a single iterable of (k,v) 
pairs, AND any arbitrary keyword arguments.

We should generalise both of these to take *one or more* mappings and/or 
iterables, and solve the most common forms of copy-and-update. That 
avoids the (likely) pathologically slow behaviour of repeated addition, 
avoids any confusion over operators, and avoids having += duplicate the 
update method.

Then, merging in place uses:

d1.update(d2, d3, d4, d5)

and copy-and-merge uses:

dict(d1, d2, d3, d4, d5)  # like d1+d2+d3+d4+d5

where the d's can be any mapping, and you can optionally include keyword 
arguments as well.

You don't have to use dict, you can use any Mapping which uses the same 
constructor semantics. That means subclasses of dict ought to work 
(unless the subclass does something silly) and even OrderedDict ought to 
work (modulo the fact that regular dicts and keyword args are 
unordered).

Here's a proof of concept:

class Dict(dict):
    def __init__(self, *mappings, **kwargs):
        assert self == {}
        Dict.update(self, *mappings, **kwargs)

    def update(self, *mappings, **kwargs):
        for mapping in (mappings + (kwargs,)):
            if hasattr(mapping, 'keys'):
                for k in mapping:
                    self[k] = mapping[k]
            else:
                for (k,v) in mapping:
                    self[k] = v
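Restating the proof of concept as a self-contained sketch: later positional mappings win over earlier ones, and keyword arguments win over everything.

```python
class Dict(dict):
    # Steven's proof of concept, repeated so this sketch runs standalone.
    def __init__(self, *mappings, **kwargs):
        assert self == {}
        Dict.update(self, *mappings, **kwargs)

    def update(self, *mappings, **kwargs):
        for mapping in (mappings + (kwargs,)):
            if hasattr(mapping, 'keys'):
                for k in mapping:
                    self[k] = mapping[k]
            else:
                for (k, v) in mapping:
                    self[k] = v

# Mix a mapping, an iterable of pairs, and keyword arguments:
d = Dict({'a': 1, 'b': 2}, [('b', 3)], c=4)
```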



-- 
Steve

From ericsnowcurrently at gmail.com  Fri Feb 13 03:25:24 2015
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Thu, 12 Feb 2015 19:25:24 -0700
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CALFfu7B_JnZTBhW5Ym8ZZP4hbp7JBJspN-ZOj9EMU6MmhToE8A@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <87twyrm2zx.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CAOvn4qgRm1H+CCaY=H_FsGSgMAWq1E7U_KXbAWvzi2YfF45Y1A@mail.gmail.com>
 <CALFfu7B_JnZTBhW5Ym8ZZP4hbp7JBJspN-ZOj9EMU6MmhToE8A@mail.gmail.com>
Message-ID: <CALFfu7AEGkQD4J6JfKYmfkMjPBXpy68mDqwsqkOEyyWjnPqvCQ@mail.gmail.com>

On Feb 12, 2015 5:26 PM, "Eric Snow" <ericsnowcurrently at gmail.com> wrote:
>
> On Thu, Feb 12, 2015 at 11:21 AM, Thomas Kluyver <thomas at kluyver.me.uk>
wrote:
> > Or perhaps even a classmethod:
> >
> > dict.merged(a, b, c)
>
> A dict factory classmethod like this is the best proposal I've seen
> thus far. *  It would be nice if the spelling were more succinct
> (that's where syntax is helpful).  Imagine:
>
>   some_func(**dict.merged(a, b, c))

Or just make "dict(a, b, c)" work.

-eric

From abarnert at yahoo.com  Fri Feb 13 03:28:46 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Thu, 12 Feb 2015 18:28:46 -0800
Subject: [Python-ideas] Gettext syntax (was Re: Allow "assigning" to ...)
In-Reply-To: <mbjkks$cec$1@ger.gmane.org>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <20150211164615.1fe33cab@fsol>
 <CADiSq7cTrFzhy30zjn0Wz6wacqSQH_1iyw==UJomCL6b8xr9_Q@mail.gmail.com>
 <mbh6u0$fr7$1@ger.gmane.org> <20150212110929.1cc16d6c@anarchist.wooz.org>
 <mbjkks$cec$1@ger.gmane.org>
Message-ID: <3BEA29A7-04D6-4A47-B5CA-880348824EFB@yahoo.com>

On Feb 12, 2015, at 17:39, Terry Reedy <tjreedy at udel.edu> wrote:

> I18n of existing code would be easier if Tools/i18n had a stringmark.py script that would either mark all strings with _() (or alternative) or interactively let one say yes or no to each.

One nice thing about the existing idiom is that you can use tools designed for perl or C and they do the right thing. Except, of course, that they don't handle triple quotes.

At a job around 10-12 years ago, we used this horrible commercial Windows tool for internationalization that did this, among other things. When I wanted to write some new server code in Python, management objected that it wouldn't work with our existing i18n tools. I was able to whip up a pre- and post-processor to deal with the triple quotes, and integrate it into the toolchain, in half a day. If I couldn't have done that, I probably wouldn't have been able to use Python.

By the way the injecting _ into builtins thing was actually a (tiny) problem. In C, only files that include the header have _, so the rest can be skipped; in Python, they all get _ without an import, so you have to assume they all need to be handled. (I solved that by actually importing gettext in all the modules even though it wasn't necessary, so the preprocessor could turn that import into the include the tool expected.)

Hopefully things aren't quite as bad as they were back then, but there must still be some nice gettext-style tools that either work, or can be hacked around easily, with Python gettext with the _ idiom, and I have no idea what fraction of them would be lost if we used a different idiom.

Of course there's be nothing stopping anyone from using the old _ idiom even if we added a new one, so don't take this as a vote against coming up with something better.

From steve at pearwood.info  Fri Feb 13 03:45:22 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 13 Feb 2015 13:45:22 +1100
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CALFfu7AEGkQD4J6JfKYmfkMjPBXpy68mDqwsqkOEyyWjnPqvCQ@mail.gmail.com>
References: <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <87twyrm2zx.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CAOvn4qgRm1H+CCaY=H_FsGSgMAWq1E7U_KXbAWvzi2YfF45Y1A@mail.gmail.com>
 <CALFfu7B_JnZTBhW5Ym8ZZP4hbp7JBJspN-ZOj9EMU6MmhToE8A@mail.gmail.com>
 <CALFfu7AEGkQD4J6JfKYmfkMjPBXpy68mDqwsqkOEyyWjnPqvCQ@mail.gmail.com>
Message-ID: <20150213024521.GJ2498@ando.pearwood.info>

On Thu, Feb 12, 2015 at 07:25:24PM -0700, Eric Snow wrote:

> Or just make "dict(a, b, c)" work.

I've come to the same conclusion.

Pros:

- No more arguments about which operator is more intuitive, whether 
  commutativity or associativity is more important, etc. It's just a 
  short, simple function call.

- Left-to-right semantics are obvious to anyone who reads left-to-right.

- No new methods or built-in functions needed.

- Want a subclass? Just call it instead of dict: MyDict(a, b, c). This 
  should even work with OrderedDict, modulo the usual 
  dicts-are-unordered issues.

- Like dict.update, this can support iterables of (key,value) 
  pairs, and optional keyword args.

- dict.update should also be extended to support multiple mappings.

- No pathologically slow performance from repeated addition.

- Although the names are slightly different, we have symmetry between 
  update-in-place using the update method, and copy-and-update using the 
  dict constructor.

- Surely this change is minor enough that it doesn't need a PEP? It just
  needs a patch and approval from a senior developer with commit 
  privileges.


Cons:

- dict(a,b,c,d) takes 6 characters more to write than a+b+c+d.

- Doesn't support the less common merge semantics to deal with 
  duplicate keys, such as adding the values. But then neither would
  a + operator.

- The implementation for immutable mappings is not obvious. We can't 
  literally define the semantics in terms of update, since 
  Mapping.update doesn't exist:


py> from collections import Mapping, MutableMapping
py> Mapping.update
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: type object 'Mapping' has no attribute 'update'
py> MutableMapping.update
<function MutableMapping.update at 0xb7c1353c>





-- 
Steven

From tjreedy at udel.edu  Fri Feb 13 04:05:04 2015
From: tjreedy at udel.edu (Terry Reedy)
Date: Thu, 12 Feb 2015 22:05:04 -0500
Subject: [Python-ideas] Gettext syntax (was Re: Allow "assigning" to ...)
In-Reply-To: <mbja6q$aat$1@ger.gmane.org>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <20150211164615.1fe33cab@fsol>
 <CADiSq7cTrFzhy30zjn0Wz6wacqSQH_1iyw==UJomCL6b8xr9_Q@mail.gmail.com>
 <mbh6u0$fr7$1@ger.gmane.org> <85y4o3dcg2.fsf@benfinney.id.au>
 <20150212105352.6bf02477@anarchist.wooz.org> <mbja6q$aat$1@ger.gmane.org>
Message-ID: <mbjpl6$mkf$1@ger.gmane.org>

On 2/12/2015 5:41 PM, Terry Reedy wrote:

> Of course, one can learn to forgo the '_' convenience and test or
> experiment with explicit assignments.
>
>  >>> s = _('cat')
>  >>> s
> 'gato'  # for instance

As pointed out to me privately, this rebinds _ to the value of s.
So instead

 >>> print(s)
gato

-- 
Terry Jan Reedy


From abarnert at yahoo.com  Fri Feb 13 04:40:23 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Thu, 12 Feb 2015 19:40:23 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <20150213022405.GI2498@ando.pearwood.info>
References: <CADQLQrUwRMdrLTJg7nEgtdDJ9wZzi4UjpM_9Uu6WxgeF-4K-8w@mail.gmail.com>
 <1586802961.2472584.1423786004303.JavaMail.yahoo@mail.yahoo.com>
 <20150213022405.GI2498@ando.pearwood.info>
Message-ID: <75F9426A-116A-475F-A22C-D5FE8E4A522A@yahoo.com>

On Feb 12, 2015, at 18:24, Steven D'Aprano <steve at pearwood.info> wrote:

> On Fri, Feb 13, 2015 at 12:06:44AM +0000, Andrew Barnert wrote:
>> On Thursday, February 12, 2015 1:33 PM, Nathan Schneider <neatnate at gmail.com> wrote:
>> 
>> 
>>> A reminder about PEP 448: Additional Unpacking Generalizations 
>>> (https://www.python.org/dev/peps/pep-0448/), which claims that "it 
>>> vastly simplifies types of 'addition' such as combining
>>> dictionaries, and does so in an unambiguous and well-defined way". 
>>> The spelling for combining two dicts would be: {**d1, **d2}
> 
> Very nice! That should be extensible to multiple arguments without the 
> pathologically slow performance of repeated addition:
> 
> {**d1, **d2, **d3, **d4}
> 
> and you can select which dict you want to win in the event of clashes 
> by changing the order.
> 
> 
>> I like that this makes all of the bikeshedding questions obvious: it's 
>> a dict display, so the result is clearly a dict rather than a 
>> type(d1), it's clearly going to follow the same ordering rules for 
>> duplicate keys as a dict display (d2 beats d1), and so on.
>> 
>> And it's nice that it's just a special case of a more general 
>> improvement.
>> 
>> However, it is more verbose (and more full of symbols), and it doesn't 
>> give you an obvious way to, say, merge two OrderedDicts into an 
>> OrderedDict.
> 
> Here's a more general proposal. The dict constructor currently takes 
> either a single mapping or a single iterable of (key,value) pairs. The 
> update method takes a single mapping, or a single iterable of (k,v) 
> pairs, AND any arbitrary keyword arguments.

The constructor also takes any arbitrary keyword items.

(In fact, if you look at the source, dict_init just calls the same dict_update_common function that update calls. So it's as if you defined __init__(self, *args, **kwargs) and update(self, *args, **kwargs) to both just call self.__update_common(*args, **kwargs).)

> We should generalise both of these to take *one or more* mappings and/or 
> iterables, and solve the most common forms of copy-and-update.

I like this idea, but it's not perfect--see below.

> That 
> avoids the (likely) pathologically slow behaviour of repeated addition, 
> avoids any confusion over operators and having += duplicating the 
> update method.
> 
> Then, merging in place uses:
> 
> d1.update(d2, d3, d4, d5)
> 
> and copy-and-merge uses:
> 
> dict(d1, d2, d3, d4, d5)  # like d1+d2+d3+d4+d5
> 
> where the d's can be any mapping, and you can optionally include keyword 
> arguments as well.
> 
> You don't have to use dict, you can use any Mapping which uses the same 
> constructor semantics.

Except that every existing Mapping that anyone has developed so far doesn't use the same constructor semantics, and very few of them will automatically change to do so just because dict does. Basically, unless it subclasses or delegates to dict and either doesn't override __init__, or does so with *args, **kwargs and passes those along untouched, it won't work. And that's not very common.

For example, despite what you say below, OrderedDict (as currently written) will not work; it never passes the __init__ arguments to dict. Or UserDict; it passes them to MutableMapping instead, and only after checking that there's only 0 or 1 positional arguments. Or any of three third-party sorted dicts I looked at.

I think defaultdict actually will work (its __init__ slot pulls the first argument, then passes the rest to dict blindly, and its update is inherited from dict), as will the Python-equivalent recipe (which I'd guess is available on PyPI as a backport for 2.4, but I haven't looked).

So, this definitely is not as general a solution as adding a new method to dict and Mapping would be; it will only help dict and a handful of other classes.

Still, I like this idea.

It defines a nice idiom for dict-like constructors to follow--and classes written to that idiom will actually work out of the box in 3.4, and even 2.7. That's pretty cool.

And for people who want to use this with plain old dicts in 2.7 and 3.4, all they need to do is use your trivial dict subclass. (Of course that still won't let them call update with multiple args on a dict created as a literal or passed in from third-party code, but it will let them do the copy-and-update the same way as in 3.5, and I think that's more important than the update-with-multiple-args.)

It might be worth changing MutableMapping.update to handle the new signature, but that raises the question of whether that's a backward-incompatible change. And, because not all MutableMapping classes delegate __init__ to update the way OrderedDict does, it still doesn't solve every type (or even most types).

> That means subclasses of dict ought to work 
> (unless the subclass does something silly) and even OrderedDict ought to 
> work (modulo the fact that regular dicts and keyword args are 
> unordered).

I suppose it depends what you mean by "something silly", but I think it's pretty common for subclasses to define their own __init__, and sometimes update, and not pass the args through blindly.

More importantly, I don't think dict subclasses are nearly as common as mapping-protocol classes that delegate to or don't even touch a dict (whether subclasses of Mapping or not), so I don't think it would matter all that much if dict subclasses worked.

And again, OrderedDict will not work.

> Here's a proof of concept:
> 
> class Dict(dict):
>    def __init__(self, *mappings, **kwargs):
>        assert self == {}
>        Dict.update(self, *mappings, **kwargs)

If you want to ensure that overriding update doesn't affect __init__ (I'm not _sure_ that you do--yes, UserDict does so, but UserDict is trying to make sure it has exactly the same behavior as dict when subclassed, not trying to improve the behavior of dict), why not have both __init__ and update delegate to a private __update method (as UserDict does)? I'm not sure it makes much difference, but it seems like a more explicit way of saying "this will not use overridden behavior".

>    def update(self, *mappings, **kwargs):
>        for mapping in (mappings + (kwargs,)):
>            if hasattr(mapping, 'keys'):
>                for k in mapping:
>                    self[k] = mapping[k]
>            else:
>                for (k,v) in mapping:
>                    self[k] = v

The current dict constructor/update function iterates over mapping.keys(), not directly over mapping; that means it works with "hybrid" types that iterate like sequences (or something else) but also have keys/values/items. And of course it has special fast-path code for dealing with real dict objects. (Also see MutableMapping.update; it likewise iterates over mapping.keys() if hasattr(mapping, 'keys'), and it has a fast path that iterates directly over mapping only for real Mapping classes.)
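The keys() preference is observable from pure Python. The Hybrid class below is an invented example of such a "hybrid" type:

```python
class Hybrid:
    """Iterates like a sequence, but also offers keys()/__getitem__."""

    def __iter__(self):
        # Plain iteration would yield strings, not (key, value) pairs.
        return iter(['not', 'key', 'value', 'pairs'])

    def keys(self):
        return ['a', 'b']

    def __getitem__(self, key):
        return {'a': 1, 'b': 2}[key]

# dict() detects the keys() method and treats Hybrid as a mapping,
# never falling back to plain iteration.
d = dict(Hybrid())
```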

I think this would be simpler, more correct, and also probably much more efficient, for a pure-Python wrapper class:

    def update(self, *mappings, **kwargs):
        for mapping in (mappings + (kwargs,)):
            dict.update(self, mapping)

And for the real C definition, you're basically just moving the guts of dict_update_common (which are trivial) into a loop over the args instead of an if over a single arg.


From chris.barker at noaa.gov  Fri Feb 13 04:43:36 2015
From: chris.barker at noaa.gov (Chris Barker - NOAA Federal)
Date: Thu, 12 Feb 2015 19:43:36 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <20150213022405.GI2498@ando.pearwood.info>
References: <CADQLQrUwRMdrLTJg7nEgtdDJ9wZzi4UjpM_9Uu6WxgeF-4K-8w@mail.gmail.com>
 <1586802961.2472584.1423786004303.JavaMail.yahoo@mail.yahoo.com>
 <20150213022405.GI2498@ando.pearwood.info>
Message-ID: <7262978700952906911@unknownmsgid>

> avoids any confusion over operators and having += duplicating the
> update method.

+= duplicates the extend method on lists.

And it's really redundant for numbers, too:

x += y

x = x + y

So plenty of precedent.

And my experience with newbies (been teaching intro to python for a
few years) is that they grab onto + for concatenating strings really
quickly. And I have a hard time getting them to use other methods for
building up strings.

Dicts in general come later, but are key to python. But I don't know
that newbies expect + to work for dicts or not -- updating a dict is
simply a lot less common.

-Chris

From abarnert at yahoo.com  Fri Feb 13 04:52:09 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Thu, 12 Feb 2015 19:52:09 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <20150213024521.GJ2498@ando.pearwood.info>
References: <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <87twyrm2zx.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CAOvn4qgRm1H+CCaY=H_FsGSgMAWq1E7U_KXbAWvzi2YfF45Y1A@mail.gmail.com>
 <CALFfu7B_JnZTBhW5Ym8ZZP4hbp7JBJspN-ZOj9EMU6MmhToE8A@mail.gmail.com>
 <CALFfu7AEGkQD4J6JfKYmfkMjPBXpy68mDqwsqkOEyyWjnPqvCQ@mail.gmail.com>
 <20150213024521.GJ2498@ando.pearwood.info>
Message-ID: <88C2AD85-4F6E-4137-9698-E517DC19C3DD@yahoo.com>

On Feb 12, 2015, at 18:45, Steven D'Aprano <steve at pearwood.info> wrote:

> On Thu, Feb 12, 2015 at 07:25:24PM -0700, Eric Snow wrote:
> 
>> Or just make "dict(a, b, c)" work.
> 
> I've come to the same conclusion.
> 
> Pros:
> 
> - No more arguments about which operator is more intuitive, whether 
>  commutativity or associativity is more important, etc. It's just a 
>  short, simple function call.
> 
> - Left-to-right semantics are obvious to anyone who reads left-to-right.
> 
> - No new methods or built-in functions needed.
> 
> - Want a subclass? Just call it instead of dict: MyDict(a, b, c). This 
>  should even work with OrderedDict, modulo the usual 
>  dicts-are-unordered issues.

Except that it doesn't work for OrderedDict. Or UserDict, or blist.sorteddict, or anything else, until those are changed.

> - Like dict.update, this can support iterables of (key,value) 
>  pairs, and optional keyword args.

Which is already true of dict.__init__.
It has the exact same signature--in fact, the exact same implementation--as dict.update.

> - dict.update should also be extended to support multiple mappings.
> 
> - No pathologically slow performance from repeated addition.
> 
> - Although the names are slightly different, we have symmetry between 
>  update-in-place using the update method, and copy-and-update using the 
>  dict constructor.
> 
> - Surely this change is minor enough that it doesn't need a PEP? It just
>  needs a patch and approval from a senior developer with commit 
>  privileges.

I'm not sure about this. If you want to change MutableMapping.update too, you'll be potentially breaking all kinds of existing classes that claim to be a MutableMapping and override update with a method with a now-incorrect signature.

> Cons:
> 
> - dict(a,b,c,d) takes 6 characters more to write than a+b+c+d.
> 
> - Doesn't support the less common merge semantics to deal with 
>  duplicate keys, such as adding the values. But then neither would
>  a + operator.
> 
> - The implementation for immutable mappings is not obvious. We can't 
>  literally define the semantics in terms of update, since 
>  Mapping.update doesn't exist:

Of course, since update is a mutating method.

More importantly, neither Mapping nor MutableMapping defines anything at all about its constructor behavior (and neither provides anything as mixin behavior--if you don't define __init__, you get object.__init__, which takes no params and does nothing).

So this doesn't actually change anything at all except dict. All it can do for other mappings is to inspire them to adopt the new 3.5 dict construction behavior.

And the issue with immutable Mapping types isn't that serious--or, rather, it's no more so with this change than it is today. The obvious way to implement MutableMapping.__init__ to be dict-like is to delegate to update (but again, remember that you already have to do so explicitly); there is no obvious way to implement Mapping.__init__, so you have to work something out that makes sense for your specific immutable type. The exact same thing is true after the proposed change.

> py> from collections import Mapping, MutableMapping
> py> Mapping.update
> Traceback (most recent call last):
>  File "<stdin>", line 1, in <module>
> AttributeError: type object 'Mapping' has no attribute 'update'
> py> MutableMapping.update
> <function MutableMapping.update at 0xb7c1353c>

From lkb.teichmann at gmail.com  Fri Feb 13 05:39:22 2015
From: lkb.teichmann at gmail.com (Martin Teichmann)
Date: Fri, 13 Feb 2015 05:39:22 +0100
Subject: [Python-ideas] A (meta)class algebra
In-Reply-To: <CAGRr6BEdLeoyH7QK=e4pYz8nNvxpKBHuyjBwVoh3CnHUAWd8yg@mail.gmail.com>
References: <CAK9R32TLOj=bEmSqxCrS_FmbUPFX276JEfbg6Djmte32KpSBHA@mail.gmail.com>
 <54DCF2AC.4020104@stoneleaf.us>
 <CAGRr6BEdLeoyH7QK=e4pYz8nNvxpKBHuyjBwVoh3CnHUAWd8yg@mail.gmail.com>
Message-ID: <CAK9R32Sq5Rg=E1hoAYfCbsLU5Drrp2EScNXNRjHsEih2b6Z7VQ@mail.gmail.com>

Hi Anthony, Hi list,

> Create the required inheriting metaclass on the fly:
>
> def multi_meta(*clss):
>     """A helper for inheriting from classes with different metaclasses.
>
>     Usage:
>         class C(multi_meta(A, B)):
>             "A class inheriting from classes A, B with different
> metaclasses."
>     """

That's actually exactly the code I am trying to avoid. This way, everyone
starts writing their own metaclass mixers, which all make strong
assumptions about the metaclasses being mixed.

Now there should be only one way of doing things in Python; unfortunately
I'm not Dutch, so I don't see it. Maybe it would be a good idea to have
such a mixer in the standard library. It would then cache already generated
metaclasses. It would also ask the metaclasses on how they like to
be mixed.
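Such a cached mixer could look roughly like this. The name multi_meta, the caching strategy, and the naming scheme are all assumptions, not a concrete proposal:

```python
import functools

@functools.lru_cache(maxsize=None)   # cache, so repeated mixes reuse one class
def multi_meta(*metaclasses):
    """Hypothetical mixer: build a metaclass inheriting from all given ones."""
    name = '_'.join(m.__name__ for m in metaclasses)
    return type(name, metaclasses, {})

class MetaA(type):
    pass

class MetaB(type):
    pass

class A(metaclass=MetaA):
    pass

class B(metaclass=MetaB):
    pass

# Without the mixer, "class C(A, B)" raises a metaclass conflict TypeError;
# with it, the combined metaclass satisfies both bases.
class C(A, B, metaclass=multi_meta(MetaA, MetaB)):
    pass
```

Thanks to the cache, mixing the same metaclasses twice returns the identical class object, so isinstance checks stay consistent across modules.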

Before long, you will read recommendations like
"just always write class A(mixer(B, C)), because, well, you never know
whether B or C have a metaclass". This is exactly why I would like
to put it up one more level, make it part of type.

Greetings

Martin

From stephen at xemacs.org  Fri Feb 13 05:40:48 2015
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Fri, 13 Feb 2015 13:40:48 +0900
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <868B43AF-2144-47DD-A2C2-45C6920A714A@stufft.io>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <54DCDB8A.9030602@oddbird.net>
 <20150212184332.GG2498@ando.pearwood.info>
 <CACac1F-yTQGec_kcyyObTtbrnTyfTMQkDLC2676wQJWR1CpM3w@mail.gmail.com>
 <868B43AF-2144-47DD-A2C2-45C6920A714A@stufft.io>
Message-ID: <87iof6mnjj.fsf@uwakimon.sk.tsukuba.ac.jp>

Donald Stufft writes:

 > Python often gets little improvements that on their own are not
 > major enhancements but when looked at cumulatively they add up to a
 > nicer to use language. An example would be set literals, set(["a",
 > "b"]) wasn't confusing nor was it particularly hard to use, however
 > being able to type {"a", "b"} is nice, slightly easier, and just
 > makes the language just a little bit better.

Yes, yes, and yes.

 > Similarly doing:
 > 
 >     new_dict = dict1.copy()
 >     new_dict.update(dict2)
 > 
 > Isn't confusing or particularly hard to use, however being able to
 > type that as new_dict = dict1 + dict2 is more succinct, cleaner,
 > and just a little bit nicer.

Yes, no, and no.

Succinct?  Yes.  Just count characters or lines (but both are
deprecated practices in judging Python style).

Cleaner?  No.  The syntax is cleaner (way fewer explicit operations),
but the semantics are muddier for me.  At one time or another, at
least four different interpretations of "dict addition" have been
proposed already:

1.  item of left operand wins
    Ie, "add new keys and their values".
    This is the one I think of as "add", and it's most analogous to
    the obvious algorithm for "add-as-in-union" for sets.

2.  item of right operand wins
    Ie, "add new values, inserting new keys as needed".
    I think of this as "update", its per-item semantics is "replace",
    but it seems to be the favorite for getting "+" syntax.  Eh?

3.  keywise addition
    This is collections.Counter, although there are several plausible
    treatments of missing values (error, additive identity).

4.  keys duplication is an error
    Strictly speaking nobody has proposed this, but it is implicit in
    many of the posts that give an example of adding dictionaries with
    no duplicate keys and say, "it's obvious what d1 + d2 should be".
    Then there's the variant where duplicate keys with the same value
    is OK.

And in the "adding shopping dicts" example you could plausibly argue
for two more:

5.  keywise max of values
    So the first person to the refrigerator always has enough eggs to
    bake her cake, even if nobody else does.

6.  keywise min of values
    If you are walking to the store.

Well, maybe those two aren't very plausible.<wink />
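The first three interpretations can be sketched concretely. The function names are invented for illustration; interpretation 3 already exists in the stdlib as collections.Counter:

```python
from collections import Counter

d1 = {'eggs': 2, 'flour': 1}
d2 = {'eggs': 10, 'sugar': 3}

def merge_left_wins(*dicts):
    """Interpretation 1: an existing key keeps the leftmost value."""
    out = {}
    for d in dicts:
        for k, v in d.items():
            out.setdefault(k, v)
    return out

def merge_right_wins(*dicts):
    """Interpretation 2: later dicts overwrite earlier ones (update order)."""
    out = {}
    for d in dicts:
        out.update(d)
    return out

# Interpretation 3: keywise addition, with missing keys treated as 0.
added = Counter(d1) + Counter(d2)
```

Since the same `+` spelling could plausibly mean any of these, the reader has to know which one was picked; a named method or function carries that information for free.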

Sure, I could internalize any given definition of "dict.__add__", but
I probably won't soon.  To make my own code readable (including
"clearly correct semantics in the application domain") to myself, I'll
use .add() and .update() methods, a merged() function, or where
there's a domain concept that is more precise, that name.

And finally, no.  I don't think that's nicer.  For me personally, it
clearly violates TOOWTDI.  For every new user it's definitely "one
more thing to learn".  And I don't see that the syntax itself is so
much nicer than

    def some_name_that_expresses_domain_semantics(*dicts):
        new_dict = {}
        for d in dicts:
            new_dict.update(d)
        return new_dict

    new_dict = some_name_that_expresses_domain_semantics(d1, d2, d3)

as to justify adding syntax where there is no obvious cross-domain
abstraction.


From lkb.teichmann at gmail.com  Fri Feb 13 05:43:47 2015
From: lkb.teichmann at gmail.com (Martin Teichmann)
Date: Fri, 13 Feb 2015 05:43:47 +0100
Subject: [Python-ideas] A (meta)class algebra
Message-ID: <CAK9R32TcAcaeAWAChTkM5qkw+k7OzkMMg+pt=2LW-w4+vZMs4g@mail.gmail.com>

Hi Thomas, Hi list,

> 1. More magic around combining metaclasses would make it harder to follow
> what's going on. Explicit (this has a metaclass which is a combination of
> two other metaclasses) is better than implicit (this has a metaclass which
> can combine itself with other metaclasses found from the MRO).

Well I disagree. If you follow this rule, everyone who uses a metaclass
needs to know how to combine it with another metaclass. This
leads to code like (I'm citing IPython.qt.console.console_widget):

class ConsoleWidget(MetaQObjectHasTraits('NewBase',
(LoggingConfigurable, QtGui.QWidget), {})):

which, in my opinion, is hard to read and grasp. Everyone has to
be taught how to use those metaclass mixers.

> 2. I *want* it to be hard for people to write multiple-inheriting code with
> multiple metaclasses. That code is most likely going to be a nightmare for
> anyone else to understand, so I want the person creating it to stop and
> think if they can do it a different way, like using composition instead of
> inheritance.

I'm completely with you arguing that inheritance is overrated and
composition should be used more often. Yet, I actually think that
the above example is actually not such a bad idea, why should a
QWidget not be LoggingConfigurable, and have traits? Whatever
that might mean in detail, I think most programmers do get
what's going on. It's just that

class ConsoleWidget(LoggingConfigurable, QtGui.QWidget):

is much more readable. Is there magic going on in the background?
Well, to some point, yes. But it is explicit magic, the authors of
the metaclasses have to explicitly write an __add__ method to
do the metaclass combination. In this regard, it's actually more
explicit than the metaclass mixing function as above and
also mentioned by Anthony (how do I respond to two mails on a
mailing list at the same time?), which typically stuffs together
metaclasses without thinking too much about them.

I also don't see how this is harder to refactor. If everyone has
to explicitly combine two metaclasses when needed, you will have
to change all that code if the details of combining the metaclasses
change. In my version, one would just have to touch the metaclass
code.

This becomes even worse if you want to add a metaclass to a class.
Then you have to change every class that inherits from it. Now you
say that code with multiple metaclasses is a bad thing in general,
but I don't agree. There are very simple examples where it makes
a lot of sense. Say, you want to write a class with traits that
implements an abstract base class. Wouldn't it be cool if you could
write

    class Spam(SpamABC, SomethingWithTraits):

instead of writing

    class Spam(SomethingWithTraits):
       ...
    SpamABC.register(Spam)

?
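The kind of automatic combination I have in mind could be sketched
roughly like this (all names here are made up for illustration, not
part of any concrete proposal):

```python
# Hypothetical sketch: metaclasses that can combine themselves by
# deriving a new metaclass on the fly.
class TraitsMeta(type):
    @classmethod
    def combine(mcls, other):
        # Build a combined metaclass inheriting from both.
        return type(mcls.__name__ + other.__name__, (mcls, other), {})

class RegisteringMeta(type):
    pass

Combined = TraitsMeta.combine(RegisteringMeta)

class Spam(metaclass=Combined):
    pass

# Spam's metaclass is an instance of both original metaclasses.
assert isinstance(Spam, TraitsMeta)
assert isinstance(Spam, RegisteringMeta)
```

The point is only that the combining logic lives with the metaclass
authors, not with every user of the classes.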

Greetings

Martin

From stephen at xemacs.org  Fri Feb 13 06:05:15 2015
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Fri, 13 Feb 2015 14:05:15 +0900
Subject: [Python-ideas] Gettext syntax (was Re: Allow "assigning" to ...)
In-Reply-To: <mbjkks$cec$1@ger.gmane.org>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <20150211164615.1fe33cab@fsol>
 <CADiSq7cTrFzhy30zjn0Wz6wacqSQH_1iyw==UJomCL6b8xr9_Q@mail.gmail.com>
 <mbh6u0$fr7$1@ger.gmane.org>
 <20150212110929.1cc16d6c@anarchist.wooz.org>
 <mbjkks$cec$1@ger.gmane.org>
Message-ID: <87h9uqmmes.fsf@uwakimon.sk.tsukuba.ac.jp>

Terry Reedy writes:

 > I18n of existing code would be easier if Tools/i18n had a
 > stringmark.py script that would either mark all strings with _()
 > (or alternative)

This loses badly on your menu example, because you'll end up marking
all the command names.  More generally, I think "mark all" is almost
never going to work as desired in Python because there are so many
places where it's nontrivial to distinguish UI strings needing I18N
from "symbols" (eg, for hasattr() or as internal hash keys), since
Python doesn't have a symbol or bareword type.

 > or interactively let one say yes or no to each.

That's a good idea, although I'd say adding this capability to IDLE
itself would be better yet.  Eg, a command mark-this-string, and a
quick keyboard macro "mark-this-string; 2 * forward-next-string" would
be great in the menu example.


From chris.barker at noaa.gov  Fri Feb 13 05:22:01 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Thu, 12 Feb 2015 20:22:01 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <20150213024521.GJ2498@ando.pearwood.info>
References: <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <87twyrm2zx.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CAOvn4qgRm1H+CCaY=H_FsGSgMAWq1E7U_KXbAWvzi2YfF45Y1A@mail.gmail.com>
 <CALFfu7B_JnZTBhW5Ym8ZZP4hbp7JBJspN-ZOj9EMU6MmhToE8A@mail.gmail.com>
 <CALFfu7AEGkQD4J6JfKYmfkMjPBXpy68mDqwsqkOEyyWjnPqvCQ@mail.gmail.com>
 <20150213024521.GJ2498@ando.pearwood.info>
Message-ID: <CALGmxELbw0BttRAPBJysVASUgHVVHLd1ydMVsiihfF2cm2kfZg@mail.gmail.com>

On Thu, Feb 12, 2015 at 6:45 PM, Steven D'Aprano <steve at pearwood.info>
wrote:

> - dict(a,b,c,d) takes 6 characters more to write than a+b+c+d.
>

and dict(a,b)

is three times as many characters as:

a + b

- just sayin'

I'm curious what the aversion is to having an operator -- sure, it's not a
big deal, but then again there's very little cost, as well. I can't really
see a "trap" here. Sure there are multiple ways it could be defined, but
having it act like update() seems pretty explainable.

-Chris



-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150212/a0f83b42/attachment-0001.html>

From greg.ewing at canterbury.ac.nz  Fri Feb 13 06:46:19 2015
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Fri, 13 Feb 2015 18:46:19 +1300
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CALFfu7BS4nc-cRTPrZ2sAvN1S9=An673KE-+Xdtx_CVtergDuA@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <CALFfu7BS4nc-cRTPrZ2sAvN1S9=An673KE-+Xdtx_CVtergDuA@mail.gmail.com>
Message-ID: <54DD8FAB.5060405@canterbury.ac.nz>

Eric Snow wrote:
> However, the upside to
> the PEP 448 syntax is that merging could be done without requiring an
> additional intermediate dict.

It can also detect duplicate keywords and give you
a clear error message. The operator would at best
give a rather more generic error message and at
worst silently discard one of the duplicates.
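A quick sketch of the contrast, with made-up dicts:

```python
a = {'x': 1}
b = {'x': 2}

def f(**kwargs):
    return kwargs

# Call syntax with unpacking (PEP 448, Python 3.5+) detects the
# duplicate keyword and raises:
try:
    f(**a, **b)
except TypeError:
    duplicate_detected = True

# A merge-style operation silently keeps one of the values:
merged = {**a, **b}   # the right-hand value wins

assert duplicate_detected
assert merged == {'x': 2}
```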

-- 
Greg

From steve at pearwood.info  Fri Feb 13 07:21:00 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 13 Feb 2015 17:21:00 +1100
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <88C2AD85-4F6E-4137-9698-E517DC19C3DD@yahoo.com>
References: <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <87twyrm2zx.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CAOvn4qgRm1H+CCaY=H_FsGSgMAWq1E7U_KXbAWvzi2YfF45Y1A@mail.gmail.com>
 <CALFfu7B_JnZTBhW5Ym8ZZP4hbp7JBJspN-ZOj9EMU6MmhToE8A@mail.gmail.com>
 <CALFfu7AEGkQD4J6JfKYmfkMjPBXpy68mDqwsqkOEyyWjnPqvCQ@mail.gmail.com>
 <20150213024521.GJ2498@ando.pearwood.info>
 <88C2AD85-4F6E-4137-9698-E517DC19C3DD@yahoo.com>
Message-ID: <20150213062100.GM2498@ando.pearwood.info>

On Thu, Feb 12, 2015 at 07:52:09PM -0800, Andrew Barnert wrote:

> > - Surely this change is minor enough that it doesn't need a PEP? It just
> >  needs a patch and approval from a senior developer with commit 
> >  privileges.
> 
> I'm not sure about this. If you want to change MutableMapping.update 
> too, you'll be potentially breaking all kinds of existing classes that 
> claim to be a MutableMapping and override update with a method with a 
> now-incorrect signature.

Fair enough. It's a bigger change than I thought, but I still think it 
is worth doing.

-- 
Steve

From steve at pearwood.info  Fri Feb 13 07:19:05 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 13 Feb 2015 17:19:05 +1100
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <7262978700952906911@unknownmsgid>
References: <CADQLQrUwRMdrLTJg7nEgtdDJ9wZzi4UjpM_9Uu6WxgeF-4K-8w@mail.gmail.com>
 <1586802961.2472584.1423786004303.JavaMail.yahoo@mail.yahoo.com>
 <20150213022405.GI2498@ando.pearwood.info> <7262978700952906911@unknownmsgid>
Message-ID: <20150213061904.GL2498@ando.pearwood.info>

On Thu, Feb 12, 2015 at 07:43:36PM -0800, Chris Barker - NOAA Federal wrote:
> > avoids any confusion over operators and having += duplicating the
> > update method.
> 
> += duplicates the extend method on lists.

Yes it does, and that sometimes causes confusion when people wonder why 
alist += blist is not *quite* the same as alist = alist + blist. It also 
leads to a quite ugly and unfortunate language wart with tuples:

py> t = ([], None)
py> t[0] += [1]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'tuple' object does not support item assignment
py> t
([1], None)

Try explaining to novices why this is not a bug.
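The underlying difference, sketched briefly:

```python
alist = [1, 2]
alias = alist
alist += [3]             # in-place: extends the same list object
assert alias == [1, 2, 3]

blist = [1, 2]
alias2 = blist
blist = blist + [3]      # rebinding: builds a brand-new list
assert alias2 == [1, 2]
```

The tuple wart above falls out of this: the in-place extend succeeds
before the (forbidden) item assignment fails.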


> And it's really redundant for numbers, too:
> 
> x += y
> 
> x = x + y

Blame C for that. C defines so many ways to do more or less the same 
thing (x++ ++x x+=1 x=x+1) that a generation or three of programmers 
have come to consider it normal.


> So plenty of precedent.

Historical accident and backwards compatibility require that certain, 
hmmm, "dirty" is too strong a word, let's say slightly tarnished, design 
choices have to persist forever, or near enough to forever, but that's 
not a good reason for propagating the same choices into new 
functionality.

It is *unfortunate* that += works with lists and tuples because + works, 
not a feature to emulate. Python made the best of a bad deal with 
augmented assignments: a syntax which works fine in C doesn't *quite* 
work cleanly in Python, but demand for it led to it being supported. 
The consequence is that every generation of Python programmers now needs 
to learn for themselves that += on non-numeric types has surprising 
corner cases. Usually the hard way.


> And my experience with newbies (been teaching intro to python for a
> few years) is that they grab onto + for concatenating strings really
> quickly. And I have a hard time getting them to use other methods for
> building up strings.

Right. And that is a *bad thing*. They shouldn't be using + for 
concatenation except for the simplest cases.



-- 
Steve

From steve at pearwood.info  Fri Feb 13 07:30:03 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 13 Feb 2015 17:30:03 +1100
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <87iof6mnjj.fsf@uwakimon.sk.tsukuba.ac.jp>
References: <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info> <54DCDB8A.9030602@oddbird.net>
 <20150212184332.GG2498@ando.pearwood.info>
 <CACac1F-yTQGec_kcyyObTtbrnTyfTMQkDLC2676wQJWR1CpM3w@mail.gmail.com>
 <868B43AF-2144-47DD-A2C2-45C6920A714A@stufft.io>
 <87iof6mnjj.fsf@uwakimon.sk.tsukuba.ac.jp>
Message-ID: <20150213063003.GN2498@ando.pearwood.info>

On Fri, Feb 13, 2015 at 01:40:48PM +0900, Stephen J. Turnbull wrote:
> Donald Stufft writes:

[...]
>  > however being able to
>  > type that as new_dict = dict1 + dict2 is more succinct, cleaner,
>  > and just a little bit nicer.
> 
> Yes, no, and no.
> 
> Succint?  Yes.  Just count characters or lines (but both are
> deprecated practices in juding Python style).
> 
> Cleaner?  No.  The syntax is cleaner (way fewer explicit operations),
> but the semantics are muddier for me.  At one time or another, at
> least four different interpretations of "dict addition" have been
> proposed already:
> 
> 1.  item of left operand wins
>     Ie, "add new keys and their values".
>     This is the one I think of as "add", and it's most analogous to
>     the obvious algorithm for "add-as-in-union" for sets.
> 
> 2.  item of right operand wins
>     Ie, "add new values, inserting new keys as needed".
>     I think of this as "update", its per-item semantics is "replace",
>     but it seems to be the favorite for getting "+" syntax.  Eh?
> 
> 3.  keywise addition
>     This is collections.Counter, although there are several plausible
>     treatments of missing values (error, additive identity).
> 
> 4.  keys duplication is an error
>     Strictly speaking nobody has proposed this, but it is implicit in
>     many of the posts that give an example of adding dictionaries with
>     no duplicate keys and say, "it's obvious what d1 + d2 should be".
>     Then there's the variant where duplicate keys with the same value
>     is OK.

Actually, there have been a few posts where people have suggested that 
duplicated keys should raise an exception. Including one that suggested 
it would be good for novices but bad for experienced coders.


> And in the "adding shopping dicts" example you could plausibly argue
> for two more:
> 
> 5.  keywise max of values
>     So the first person to the refrigerator always has enough eggs to
>     bake her cake, even if nobody else does.

That's the multiset model.

> 6.  keywise min of values
>     If you are walking to the store.

One more, which was suggested on Stackoverflow:

{'spam': 2, 'eggs': 1} + {'ham': 1, 'eggs': 3}
=> {'ham': 1, 'spam': 2, 'eggs': (1, 3)}
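For concreteness, the two front-running interpretations differ only in
update order; sketched with plain functions rather than an operator:

```python
def add_left_wins(d1, d2):
    # Interpretation 1: items of the left operand win.
    result = dict(d2)
    result.update(d1)
    return result

def add_right_wins(d1, d2):
    # Interpretation 2: items of the right operand win (update-like).
    result = dict(d1)
    result.update(d2)
    return result

a = {'spam': 2, 'eggs': 1}
b = {'ham': 1, 'eggs': 3}
assert add_left_wins(a, b) == {'spam': 2, 'ham': 1, 'eggs': 1}
assert add_right_wins(a, b) == {'spam': 2, 'ham': 1, 'eggs': 3}
```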



-- 
Steve

From steve at pearwood.info  Fri Feb 13 07:56:35 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 13 Feb 2015 17:56:35 +1100
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CALGmxELbw0BttRAPBJysVASUgHVVHLd1ydMVsiihfF2cm2kfZg@mail.gmail.com>
References: <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <87twyrm2zx.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CAOvn4qgRm1H+CCaY=H_FsGSgMAWq1E7U_KXbAWvzi2YfF45Y1A@mail.gmail.com>
 <CALFfu7B_JnZTBhW5Ym8ZZP4hbp7JBJspN-ZOj9EMU6MmhToE8A@mail.gmail.com>
 <CALFfu7AEGkQD4J6JfKYmfkMjPBXpy68mDqwsqkOEyyWjnPqvCQ@mail.gmail.com>
 <20150213024521.GJ2498@ando.pearwood.info>
 <CALGmxELbw0BttRAPBJysVASUgHVVHLd1ydMVsiihfF2cm2kfZg@mail.gmail.com>
Message-ID: <20150213065635.GO2498@ando.pearwood.info>

On Thu, Feb 12, 2015 at 08:22:01PM -0800, Chris Barker wrote:
> On Thu, Feb 12, 2015 at 6:45 PM, Steven D'Aprano <steve at pearwood.info>
> wrote:
> 
> > - dict(a,b,c,d) takes 6 characters more to write than a+b+c+d.
> >
> 
> and dict(a,b)
> 
> is three times as many characters as:
> 
> a + b
> 
> - just sayin'

And the difference between:

    default_settings + global_settings + user_settings

versus:

    dict(default_settings, global_settings, user_settings)

is only four characters. If you're using a, b, c as variable names in 
production, you probably have bigger problems than an extra four 
characters :-)


> I'm curious what the aversion is to having an operator -- sure, it's not a
> big deal, but then again there's very little cost, as well. I can't really
> see a "trap" here. Sure there are multiple ways it could be defined, but
> having it act like update() seems pretty explainable.

Each of these in isolation is a little thing, but enough little things 
make a big thing:

Duplicating functionality ("more than one way to do it") is not always 
wrong, but it's a little smelly (as in a code smell, or in this case, an 
API smell).

Some people here find + to be a plausible operator. I don't. I react to 
using + for things which aren't addition in much the same way that I 
expect you would react if I suggested we used ** as the operator. "Why 
on earth would you want to use ** it's nothing like exponentiation!"

As the experience of lists and tuples show, extending a syntax 
originally designed by C programmers to make numeric addition easier 
into non-numeric contexts is fraught with odd corner cases, gotchas and 
language warts.

Operator overloading may be a code smell as often as it is a benefit. 
Google for "operator overloading considered harmful".

The + operator is a binary operator, which is a more limited interface 
than function/method calls. You can't write a.update(b, c, d, spam=23) 
using a single + call.

Operators are harder to search for than named functions. Especially + 
which is an operator that has specific meaning to most search engines.

Some languages may be able to optimize a + b + c + d to avoid making and 
throwing away the intermediate dicts, but in general Python cannot. So 
it is going to fall into the same trap as list addition does. While it 
is not common for this to lead to severe performance degradation, when 
it does happen the cost is *so severe* that we should think long and 
hard before adding any more O(N**2) booby traps into the language.
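A rough sketch of the difference in cost (illustrative only):

```python
from collections import ChainMap

settings = [{'a': 1}, {'b': 2}, {'a': 3}]

# "+"-style merging builds a fresh dict at every step, re-copying
# everything merged so far -- O(N**2) work in the total size.
merged = {}
for d in settings:
    merged = {**merged, **d}   # new dict each iteration

# A single accumulating update (or a ChainMap view) avoids the
# intermediate copies.
merged2 = {}
for d in settings:
    merged2.update(d)
merged3 = dict(ChainMap(*reversed(settings)))  # later dicts win

assert merged == merged2 == merged3 == {'a': 3, 'b': 2}
```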



-- 
Steve

From abarnert at yahoo.com  Fri Feb 13 08:28:07 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Thu, 12 Feb 2015 23:28:07 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <20150213065635.GO2498@ando.pearwood.info>
References: <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <87twyrm2zx.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CAOvn4qgRm1H+CCaY=H_FsGSgMAWq1E7U_KXbAWvzi2YfF45Y1A@mail.gmail.com>
 <CALFfu7B_JnZTBhW5Ym8ZZP4hbp7JBJspN-ZOj9EMU6MmhToE8A@mail.gmail.com>
 <CALFfu7AEGkQD4J6JfKYmfkMjPBXpy68mDqwsqkOEyyWjnPqvCQ@mail.gmail.com>
 <20150213024521.GJ2498@ando.pearwood.info>
 <CALGmxELbw0BttRAPBJysVASUgHVVHLd1ydMVsiihfF2cm2kfZg@mail.gmail.com>
 <20150213065635.GO2498@ando.pearwood.info>
Message-ID: <594DE08B-457A-4EF1-8574-10EE49996F5D@yahoo.com>

On Feb 12, 2015, at 22:56, Steven D'Aprano <steve at pearwood.info> wrote:

> Some people here find + to be a plausible operator. I don't. I react to 
> using + for things which aren't addition in much the same way that I 
> expect you would react if I suggested we used ** as the operator. "Why 
> on earth would you want to use ** it's nothing like exponentiation!"

I'm not sure ** was the best example, because it also means both mapping unpacking and kwargs, both of which are _more_ relevant here than addition, not less. And ** for exponentiation is itself a little weird; you're just used to it at this point, if you don't find it weird anymore, that kind of argues against your point. Also, people have already suggested ridiculous things like << and ^ here, so you don't need to invent other ridiculous possibilities in the first place...

But anyway, I take your point, and I agree that either constructor and update with multiple args, or expanded unpacking, are better solutions than adding + if the problems can be worked out.

From wichert at wiggy.net  Fri Feb 13 09:22:24 2015
From: wichert at wiggy.net (Wichert Akkerman)
Date: Fri, 13 Feb 2015 09:22:24 +0100
Subject: [Python-ideas] Gettext syntax (was Re: Allow "assigning" to ...)
In-Reply-To: <mbjesi$lb5$1@ger.gmane.org>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <20150211164615.1fe33cab@fsol>
 <CADiSq7cTrFzhy30zjn0Wz6wacqSQH_1iyw==UJomCL6b8xr9_Q@mail.gmail.com>
 <mbh6u0$fr7$1@ger.gmane.org> <mbjesi$lb5$1@ger.gmane.org>
Message-ID: <CC7084D7-DE2A-415D-8AA4-8B4CE93553F8@wiggy.net>


> On 13 Feb 2015, at 01:01, Terry Reedy <tjreedy at udel.edu> wrote:
> 
> On 2/11/2015 10:33 PM, Terry Reedy wrote:
> 
>> A possible solution would be to give 3.x str a __pos__ or __neg__
>> method, so +'python' or -'python' would mean the translation of 'python'
>> (the snake).  This would be even *less* intrusive and easier to write
>> than _('python').
>> 
>> Since () has equal precedence with . while + and - are lower, we would
>> have the following equivalences for mixed expressions with '.':
>> 
>> _(snake.format()) == -snake.format()  # format, then translate
>>                # or -(snake.lower())
>> _(snake).format() == (-snake).format()  # translate, then format
> 
> The core idea of this proposal is to add a single prefix symbol as an alternative to _(...).  Summarizing the alternate proposals:
> 
> Zero Piraeus suggests that ~ (.__invert__ (normally bitwise)) would be better.  I agree.
> 
> M.-A. Lemburg points out that neither of
>  label = +'I like' + +'Python'
>  label = -'I like' + -'Python'
> do not 'read well'.  I do not know how common such concatenations are, but I agree that they do not look good.

Please note that concatenating translatable strings like that is a really bad practice, and will lead to impossible situations. If you translate "I like" and "Python" separately, and then concatenate the results you are likely to get something that is quite awful, if not impossible, in many languages.

There are also other areas in which Python i18n support is currently lacking, and which would only get worse with this proposal: support for message contexts and plural forms. Both are critical for any non-trivial application. For what it's worth, I am toying with the idea of spending PyCon sprint time on that topic (unless the pyramid folks manage to grab all my attention again!).

Wichert.

From encukou at gmail.com  Fri Feb 13 10:26:41 2015
From: encukou at gmail.com (Petr Viktorin)
Date: Fri, 13 Feb 2015 10:26:41 +0100
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <88C2AD85-4F6E-4137-9698-E517DC19C3DD@yahoo.com>
References: <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <87twyrm2zx.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CAOvn4qgRm1H+CCaY=H_FsGSgMAWq1E7U_KXbAWvzi2YfF45Y1A@mail.gmail.com>
 <CALFfu7B_JnZTBhW5Ym8ZZP4hbp7JBJspN-ZOj9EMU6MmhToE8A@mail.gmail.com>
 <CALFfu7AEGkQD4J6JfKYmfkMjPBXpy68mDqwsqkOEyyWjnPqvCQ@mail.gmail.com>
 <20150213024521.GJ2498@ando.pearwood.info>
 <88C2AD85-4F6E-4137-9698-E517DC19C3DD@yahoo.com>
Message-ID: <CA+=+wqD=YBvLG84=PLP48-N1b0Z8QmJNiBs_XL4uh7U3pk1yLw@mail.gmail.com>

On Fri, Feb 13, 2015 at 4:52 AM, Andrew Barnert
<abarnert at yahoo.com.dmarc.invalid> wrote:
> On Feb 12, 2015, at 18:45, Steven D'Aprano <steve at pearwood.info> wrote:
>
>> On Thu, Feb 12, 2015 at 07:25:24PM -0700, Eric Snow wrote:
>>
>>> Or just make "dict(a, b, c)" work.
>>
>> I've come to the same conclusion.

>> - Surely this change is minor enough that it doesn't need a PEP? It just
>>  needs a patch and approval from a senior developer with commit
>>  privileges.
>
> I'm not sure about this. If you want to change MutableMapping.update too, you'll be potentially breaking all kinds of existing classes that claim to be a MutableMapping and override update with a method with a now-incorrect signature.

Is there a mechanism to evolve ABCs? Maybe there should be, something like:

- For a few Python versions, say that the new behavior is preferred
but can't be relied upon. Update the MutableMapping default
implementation (this is backwards compatible).
- Possibly, later, have MutableMapping warn when a class with the old
update signature is registered. (With the signature improvements in
py3, this should be possible for 99% cases.) Or just rely on linters
to implement a check.
- Finally switch over completely, so callers can rely on the new
MutableMapping.update
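The signature check in the second step could conceivably look
something like this (a rough sketch only; `accepts_multiple_args` and
the example classes are made up for illustration):

```python
import inspect

def accepts_multiple_args(func):
    # True if the method takes *args after self, i.e. it could accept
    # the new multi-mapping update call.
    params = list(inspect.signature(func).parameters.values())[1:]  # skip self
    return any(p.kind is inspect.Parameter.VAR_POSITIONAL for p in params)

class OldStyle:
    def update(self, other):
        pass

class NewStyle:
    def update(self, *others, **kwargs):
        pass

assert not accepts_multiple_args(OldStyle.update)
assert accepts_multiple_args(NewStyle.update)
```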

From abarnert at yahoo.com  Fri Feb 13 11:02:15 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Fri, 13 Feb 2015 02:02:15 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CA+=+wqD=YBvLG84=PLP48-N1b0Z8QmJNiBs_XL4uh7U3pk1yLw@mail.gmail.com>
References: <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <87twyrm2zx.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CAOvn4qgRm1H+CCaY=H_FsGSgMAWq1E7U_KXbAWvzi2YfF45Y1A@mail.gmail.com>
 <CALFfu7B_JnZTBhW5Ym8ZZP4hbp7JBJspN-ZOj9EMU6MmhToE8A@mail.gmail.com>
 <CALFfu7AEGkQD4J6JfKYmfkMjPBXpy68mDqwsqkOEyyWjnPqvCQ@mail.gmail.com>
 <20150213024521.GJ2498@ando.pearwood.info>
 <88C2AD85-4F6E-4137-9698-E517DC19C3DD@yahoo.com>
 <CA+=+wqD=YBvLG84=PLP48-N1b0Z8QmJNiBs_XL4uh7U3pk1yLw@mail.gmail.com>
Message-ID: <46A3A068-A2B5-46F8-A8C5-32BACC1EB55A@yahoo.com>

On Feb 13, 2015, at 1:26, Petr Viktorin <encukou at gmail.com> wrote:

> On Fri, Feb 13, 2015 at 4:52 AM, Andrew Barnert
> <abarnert at yahoo.com.dmarc.invalid> wrote:
>> On Feb 12, 2015, at 18:45, Steven D'Aprano <steve at pearwood.info> wrote:
>> 
>>> On Thu, Feb 12, 2015 at 07:25:24PM -0700, Eric Snow wrote:
>>> 
>>>> Or just make "dict(a, b, c)" work.
>>> 
>>> I've come to the same conclusion.
> 
>>> - Surely this change is minor enough that it doesn't need a PEP? It just
>>> needs a patch and approval from a senior developer with commit
>>> privileges.
>> 
>> I'm not sure about this. If you want to change MutableMapping.update too, you'll be potentially breaking all kinds of existing classes that claim to be a MutableMapping and override update with a method with a now-incorrect signature.
> 
> Is there a mechanism to evolve ABCs? Maybe there should be, something like:
> 
> - For a few Python versions, say that the new behavior is preferred
> but can't be relied upon. Update the MutableMapping default
> implementation (this is backwards compatible).
> - Possibly, later, have MutableMapping warn when a class with the old
> update signature is registered. (With the signature improvements in
> py3, this should be possible for 99% cases.) Or just rely on linters
> to implement a check.
> - Finally switch over completely, so callers can rely on the new
> MutableMapping.update

Well, currently, ABCs don't test signatures at all, just the presence of a non-abstract method. So you'd have to change that first, which seems like a pretty big change.

And the hard part is the design, not the coding (as, I suspect, with the abc module in the first place). What do you do to mark an @abstractmethod as "check signature against the new version, and, if not, warn if it's compatible with the previous version, raise otherwise"? For that matter, what is the exact rule for "compatible with the new version" given that the new version is essentially just (self, *args, **kwargs)? How do you specify that rule declaratively (or do we add a __check__(subclass_sig) method to abstract methods and make you do it imperatively?)? And so on.

It might be worth seeing what other languages/frameworks do to evolve signatures on their abc/interface/protocol types. I'm not sure there's a good answer to find (unless you want monstrosities like IXMLThingy4::ParseEx3 as in COM), but I wouldn't want to assume that without looking around first.

From stefan_ml at behnel.de  Fri Feb 13 11:11:59 2015
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Fri, 13 Feb 2015 11:11:59 +0100
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <54DD1A0B.9000202@canterbury.ac.nz>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <54DD1A0B.9000202@canterbury.ac.nz>
Message-ID: <mbkilf$mie$1@ger.gmane.org>

Greg Ewing schrieb am 12.02.2015 um 22:24:
> my main objection to an asymmetrical + for dicts
> is that it would be difficult to remember which way
> it went.
> 
> Analogy with bytes + sequence would suggest that the
> left operand wins. Analogy with dict.update would
> suggest that the right operand wins.

Do you really think that's an issue? Arithmetic expressions are always
evaluated from left to right. It thus seems obvious to me what gets created
first, and what gets added afterwards (and overwrites what's there).

Stefan



From mortenzdk at gmail.com  Fri Feb 13 11:19:58 2015
From: mortenzdk at gmail.com (Morten Z)
Date: Fri, 13 Feb 2015 11:19:58 +0100
Subject: [Python-ideas] Namespace creation with syntax short form
Message-ID: <CAOtYm-Hx+9K-w3PzxxOtoQcE9OQOWvg9mYVfc3diBnAWxAkYGA@mail.gmail.com>

Namespaces are widely used in Python, but it appears that creating a
namespace has no short form, like [] for list or {} for dict; it
requires the lengthy types.SimpleNamespace, with a prior import of
types.

So an idea is to make a short form with:

    ns = (answer= 42)

Which resembles types.SimpleNamespace(answer=42), but leaves out a lot
of typing.

Syntax for an empty namespace should then be determined.

Best regards,
MortenZdk
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150213/d6e59670/attachment.html>

From steve at pearwood.info  Fri Feb 13 11:42:00 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 13 Feb 2015 21:42:00 +1100
Subject: [Python-ideas] Namespace creation with syntax short form
In-Reply-To: <CAOtYm-Hx+9K-w3PzxxOtoQcE9OQOWvg9mYVfc3diBnAWxAkYGA@mail.gmail.com>
References: <CAOtYm-Hx+9K-w3PzxxOtoQcE9OQOWvg9mYVfc3diBnAWxAkYGA@mail.gmail.com>
Message-ID: <20150213104159.GQ2498@ando.pearwood.info>

On Fri, Feb 13, 2015 at 11:19:58AM +0100, Morten Z wrote:
> Namespaces are widely used in Python, but it appears that creating a
> namespace
> has no short form, like [] for list or {} for dict, but requires the lengthy
> types.SimpleNamespace, with prior import of types.

from types import SimpleNamespace as NS
ns = NS(answer=42)

Python did without a SimpleNamespace type for 20+ years. I don't think 
it is important enough to deserve dedicated syntax, especially yet 
another overloading of parentheses. Not everything needs to be syntax.


-- 
Steve

From encukou at gmail.com  Fri Feb 13 11:55:37 2015
From: encukou at gmail.com (Petr Viktorin)
Date: Fri, 13 Feb 2015 11:55:37 +0100
Subject: [Python-ideas] A (meta)class algebra
In-Reply-To: <CAK9R32TcAcaeAWAChTkM5qkw+k7OzkMMg+pt=2LW-w4+vZMs4g@mail.gmail.com>
References: <CAK9R32TcAcaeAWAChTkM5qkw+k7OzkMMg+pt=2LW-w4+vZMs4g@mail.gmail.com>
Message-ID: <CA+=+wqCoPo8WH9SqOzfqps6Jr66zKs+d_nqXOCujkpha7Ln3_w@mail.gmail.com>

On Fri, Feb 13, 2015 at 5:43 AM, Martin Teichmann
<lkb.teichmann at gmail.com> wrote:
> Hi Thomas, Hi list,
>
>> 1. More magic around combining metaclasses would make it harder to follow
>> what's going on. Explicit (this has a metaclass which is a combination of
>> two other metaclasses) is better than implicit (this has a metaclass which
>> can combine itself with other metaclasses found from the MRO).
>
> Well I disagree. If you follow this rule, everyone who uses a metaclass
> needs to know how to combine it with another metaclass. This
> leads to code like (I'm citing IPython.qt.console.console_widget):
>
> class ConsoleWidget(MetaQObjectHasTraits('NewBase',
> (LoggingConfigurable, QtGui.QWidget), {})):
>
> which, in my opinion, is hard to read and grasp. Everyone has to
> be taught how to use those metaclass mixers.
>
>> 2. I *want* it to be hard for people to write multiple-inheriting code with
>> multiple metaclasses. That code is most likely going to be a nightmare for
>> anyone else to understand, so I want the person creating it to stop and
>> think if they can do it a different way, like using composition instead of
>> inheritance.
>
> I'm completely with you arguing that inheritance is overrated and
> composition should be used more often. Yet, I actually think that
> the above example is actually not such a bad idea, why should a
> QWidget not be LoggingConfigurable, and have traits? Whatever
> that might mean in detail, I think most programmers do get
> what's going on. It's just that
>
> class ConsoleWidget(LoggingConfigurable, QtGui.QWidget):
>
> is much more readable. Is there magic going on in the background?
> Well, to some point, yes. But it is explicit magic, the authors of
> the metaclasses have to explicitly write an __add__ method to
> do the metaclass combination. In this regard, it's actually more
> explicit than the metaclass mixing function as above and
> also mentioned by Anthony (how do I respond to two mails on a
> mailing list at the same time?), which typically stuffs together
> metaclasses without thinking too much about them.
>
> I also don't see how this is harder to refactor. If everyone has
> to explicitly combine two metaclasses if needed, you will have
> to change all that code if the details of combining the metaclasses
> change. In my version, one would just have to touch the metaclass
> code.
>
> This becomes even worse if you want to add a metaclass to a class.
> Then you have to change every class that inherits from it. Now you
> say that code with multiple metaclasses is a bad thing in general,
> but I don't agree. There are very simple examples where it makes
> a lot of sense. Say, you want to write a class with traits that
> implements an abstract base class. Wouldn't it be cool if you could
> write

This is the one thing keeping metaclasses from being truly general:
using them limits inheritance.

The problem is that metaclasses are too powerful.
They can change the C-level type; you can't automatically combine two
metaclasses that do this.
They can change the class statement's namespace (via __prepare__).
Again, you can't automatically combine these.
In these cases you really need knowledge of *both* metaclasses to
determine if you're mixing them correctly. It can't really be done in
a method on one of the classes. (And btw, __add__ is a terrible name
for the method. If it's meant to be called implicitly by Python the
"+" sugar is unnecessary. Something like __metaclass_combine__ would
be a better name.)

So, I argue that automatically combining metaclasses makes sense if at
least one of the metaclasses is "simple": it only mutates the class's
attributes after the class is created. In other words, it overrides
__init__ but not __new__ or __prepare__. Such metaclasses are very
useful: registrars, ORM tables, etc. The problem that limits their
adoption is that they can't be freely combined.
Similar things can be done by decorators, the problem being that the
decorator needs to be repeated on every subclass. (But this may not be
a huge problem: in many "Registrar" cases, explicit is better than
implicit.)

Anyway, perhaps "simple" metaclasses can be combined automatically? Or
there could be an instantiation hook to enable these classes without
limiting subclassing?
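To make the conflict concrete, here is a minimal runnable sketch (all names invented for illustration): two independent "simple" metaclasses cannot be mixed automatically, but a trivial combined subclass resolves the conflict by hand.

```python
class RegistrarMeta(type):
    """Simple metaclass: records every class it creates."""
    registry = []

    def __init__(cls, name, bases, ns):
        super().__init__(name, bases, ns)
        RegistrarMeta.registry.append(cls)


class TaggingMeta(type):
    """Simple metaclass: adds an attribute after class creation."""
    def __init__(cls, name, bases, ns):
        super().__init__(name, bases, ns)
        cls.tag = name.lower()


class A(metaclass=RegistrarMeta):
    pass

class B(metaclass=TaggingMeta):
    pass

try:
    # Neither metaclass is a subclass of the other, so Python cannot
    # pick a winner and raises "metaclass conflict".
    class C(A, B):
        pass
except TypeError as exc:
    print("conflict:", exc)

# The manual fix everyone has to write today:
class CombinedMeta(RegistrarMeta, TaggingMeta):
    pass

class C(A, B, metaclass=CombinedMeta):
    pass

assert C.tag == "c"
assert C in RegistrarMeta.registry
```

Both metaclasses here only mutate attributes in `__init__`, which is exactly the "simple" case where automatic combination would be safe.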

From encukou at gmail.com  Fri Feb 13 12:08:46 2015
From: encukou at gmail.com (Petr Viktorin)
Date: Fri, 13 Feb 2015 12:08:46 +0100
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <46A3A068-A2B5-46F8-A8C5-32BACC1EB55A@yahoo.com>
References: <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <87twyrm2zx.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CAOvn4qgRm1H+CCaY=H_FsGSgMAWq1E7U_KXbAWvzi2YfF45Y1A@mail.gmail.com>
 <CALFfu7B_JnZTBhW5Ym8ZZP4hbp7JBJspN-ZOj9EMU6MmhToE8A@mail.gmail.com>
 <CALFfu7AEGkQD4J6JfKYmfkMjPBXpy68mDqwsqkOEyyWjnPqvCQ@mail.gmail.com>
 <20150213024521.GJ2498@ando.pearwood.info>
 <88C2AD85-4F6E-4137-9698-E517DC19C3DD@yahoo.com>
 <CA+=+wqD=YBvLG84=PLP48-N1b0Z8QmJNiBs_XL4uh7U3pk1yLw@mail.gmail.com>
 <46A3A068-A2B5-46F8-A8C5-32BACC1EB55A@yahoo.com>
Message-ID: <CA+=+wqAjhyj=Ud+jhKR4FRKmdo1HkHU-rB7gy1K3ud8hWEwtNA@mail.gmail.com>

On Fri, Feb 13, 2015 at 11:02 AM, Andrew Barnert <abarnert at yahoo.com> wrote:
> On Feb 13, 2015, at 1:26, Petr Viktorin <encukou at gmail.com> wrote:
>
>> On Fri, Feb 13, 2015 at 4:52 AM, Andrew Barnert
>> <abarnert at yahoo.com.dmarc.invalid> wrote:
>>> On Feb 12, 2015, at 18:45, Steven D'Aprano <steve at pearwood.info> wrote:
>>>
>>>> On Thu, Feb 12, 2015 at 07:25:24PM -0700, Eric Snow wrote:
>>>>
>>>>> Or just make "dict(a, b, c)" work.
>>>>
>>>> I've come to the same conclusion.
>>
>>>> - Surely this change is minor enough that it doesn't need a PEP? It just
>>>> needs a patch and approval from a senior developer with commit
>>>> privileges.
>>>
>>> I'm not sure about this. If you want to change MutableMapping.update too, you'll be potentially breaking all kinds of existing classes that claim to be a MutableMapping and override update with a method with a now-incorrect signature.
>>
>> Is there a mechanism to evolve ABCs? Maybe there should be, something like:
>>
>> - For a few Python versions, say that the new behavior is preferred
>> but can't be relied upon. Update the MutableMapping default
>> implementation (this is backwards compatible).
>> - Possibly, later, have MutableMapping warn when a class with the old
>> update signature is registered. (With the signature improvements in
>> py3, this should be possible for 99% cases.) Or just rely on linters
>> to implement a check.
>> - Finally switch over completely, so callers can rely on the new
>> MutableMapping.update
>
> Well, currently, ABCs don't test signatures at all, just the presence of a non-abstract method. So you'd have to change that first, which seems like a pretty big change.
>
> And the hard part is the design, not the coding (as, I suspect, with the abc module in the first place). What do you do to mark an @abstractmethod as "check signature against the new version, and, if not, warn if it's compatible with the previous version, raise otherwise"? For that matter, what is the exact rule for "compatible with the new version" given that the new version is essentially just (self, *args, **kwargs)? How do you specify that rule declaratively (or so we add a __check__(subclass_sig) method to abstract methods and make you do it imperatively)? And so on.
>
> It might be worth seeing what other languages/frameworks do to evolve signatures on their abc/interface/protocol types. I'm not sure there's a good answer to find (unless you want monstrosities like IXMLThingy4::ParseEx3 as in COM), but I wouldn't want to assume that without looking around first.

Well, by "mechanism" I meant guidelines on how it should work, not a
general implementation.
I'd just special-case MutableMapping now, and if it later turns out to
be useful elsewhere, figure a way to do it declaratively. I'm always
wary of designing a general mechanism from one example.

I propose the exact rule for "compatible with the new version" to be
"the signature contains a VAR_POSITIONAL argument", with the check
being skipped if the signature is unavailable. It's simple, only has
false negatives if the signature is wrong, and false positives if
varargs do something else (for this case there should be a warning in
docs/release notes).
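The proposed rule is easy to express with the inspect module; a small sketch (helper and example names invented):

```python
import inspect


def update_signature_ok(method):
    """Proposed compatibility check: the signature must contain a
    VAR_POSITIONAL (*args) parameter; if no signature is available,
    the check is skipped (treated as compatible)."""
    try:
        sig = inspect.signature(method)
    except (ValueError, TypeError):
        return True  # signature unavailable: skip the check
    return any(p.kind is inspect.Parameter.VAR_POSITIONAL
               for p in sig.parameters.values())


# A new-style update accepts any number of sources:
def new_update(self, *sources, **kwpairs):
    pass

# An old-style update takes a single mapping:
def old_update(self, other=()):
    pass

assert update_signature_ok(new_update)
assert not update_signature_ok(old_update)
```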

From lkb.teichmann at gmail.com  Fri Feb 13 12:13:49 2015
From: lkb.teichmann at gmail.com (Martin Teichmann)
Date: Fri, 13 Feb 2015 12:13:49 +0100
Subject: [Python-ideas] A (meta)class algebra
In-Reply-To: <CA+=+wqCoPo8WH9SqOzfqps6Jr66zKs+d_nqXOCujkpha7Ln3_w@mail.gmail.com>
References: <CAK9R32TcAcaeAWAChTkM5qkw+k7OzkMMg+pt=2LW-w4+vZMs4g@mail.gmail.com>
 <CA+=+wqCoPo8WH9SqOzfqps6Jr66zKs+d_nqXOCujkpha7Ln3_w@mail.gmail.com>
Message-ID: <CAK9R32QgbyiGFCCKQNX_CfEjgAFXQaEkEAAEXDwHcm6gkDJE2A@mail.gmail.com>

Hi Petr, Hi list,

> This is the one thing keeping metaclasses from being truly general:
> using them limits inheritance.

I agree. That is why I am proposing a solution to this problem.

> The problem is that metaclasses are too powerful.
> They can change the C-level type; you can't automatically combine two
> metaclasses that do this.

This is exactly the problem my proposal solves. In my proposal,
metaclasses will NOT be combined automatically, but the authors
of the metaclasses have to explicitly write a method that does the
job. If that's not possible, the method is free to fail, like raising
an error (if one follows my idea with __add__, then it should
return NotImplemented)

> (And btw, __add__ is a terrible name
> for the method. If it's meant to be called implicitly by Python the
> "+" sugar is unnecessary. Something like __metaclass_combine__ would
> be a better name.)

I'm fine with any name. I just thought it useful that the entire
machinery for __add__ and __radd__ is already there.

> Anyway, perhaps "simple" metaclasses can be combined automatically? Or
> there could be an instantiation hook to enable these classes without
> limiting subclassing?

Well, my proposal is such a hook.

Greetings

Martin

From lkb.teichmann at gmail.com  Fri Feb 13 12:51:59 2015
From: lkb.teichmann at gmail.com (Martin Teichmann)
Date: Fri, 13 Feb 2015 12:51:59 +0100
Subject: [Python-ideas] A (meta)class algebra
Message-ID: <CAK9R32RL2vP1uF+BR7XUTaaVBekMyvnED=i63XAmwjthTKf4tw@mail.gmail.com>

Hi again,

let me elaborate a bit more on the changes I am proposing:
once my idea is in place, it would, for example, be possible to
have a metaclass that knows about other metaclasses.

Take the example of a metaclass like the one used by traits,
let's call it MetaTraits. Then one would be able to register
another class, for example MetaABC, as such:

    MetaTraits.registerSimpleClass(MetaABC)

This would mean that MetaTraits and MetaABC are compatible
to the point that they can be seamlessly combined. Another
metaclass, ComplicatedMeta, may be added like this:

    class CombinedMeta(ComplicatedMeta, MetaTraits):
        pass  # add some magic here

    MetaTraits.registerClass(CombinedMeta)

An implementation of MetaTraits follows:

class MetaTraits(type):
    # add the actual traits magic

    compatible_classes = {}

    @classmethod
    def __add__(cls, other):
        ret = super().__add__(other)
        if ret is not NotImplemented:
            return ret
        return cls.compatible_classes.get(other, NotImplemented)

    @classmethod
    def registerSimpleClass(cls, other):
        class Combined(cls, other):
            pass
        cls.compatible_classes[other] = Combined

    @classmethod
    def registerClass(cls, other):
        assert cls in other.__bases__
        for b in other.__bases__:
            if cls is not b:
                cls.compatible_classes[b] = other
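The registry part of the sketch above works on its own without the hypothetical __add__ hook (which would need interpreter support). A minimal runnable version, with names invented for this sketch:

```python
class MetaTraits(type):
    """Keeps a table of known-compatible metaclasses and a helper that
    builds (and remembers) the combined metaclass on demand."""
    compatible = {}

    @classmethod
    def register_simple(mcls, other):
        # Build the combined metaclass dynamically, as registerSimpleClass
        # in the proposal does with a class statement.
        combined = type("Combined", (mcls, other), {})
        mcls.compatible[other] = combined
        return combined


class MetaABC(type):
    pass


CombinedMeta = MetaTraits.register_simple(MetaABC)

# The combined metaclass is a subclass of both, so it satisfies
# Python's metaclass-determination rule for classes using either one.
assert issubclass(CombinedMeta, MetaTraits)
assert issubclass(CombinedMeta, MetaABC)
assert MetaTraits.compatible[MetaABC] is CombinedMeta
```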

From ncoghlan at gmail.com  Fri Feb 13 12:59:34 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 13 Feb 2015 21:59:34 +1000
Subject: [Python-ideas] A (meta)class algebra
In-Reply-To: <CA+=+wqCoPo8WH9SqOzfqps6Jr66zKs+d_nqXOCujkpha7Ln3_w@mail.gmail.com>
References: <CAK9R32TcAcaeAWAChTkM5qkw+k7OzkMMg+pt=2LW-w4+vZMs4g@mail.gmail.com>
 <CA+=+wqCoPo8WH9SqOzfqps6Jr66zKs+d_nqXOCujkpha7Ln3_w@mail.gmail.com>
Message-ID: <CADiSq7dBUzo4Yt9DT6B1KNjJ6s4azHw-Xv0ZYXgNATtfo5nK_A@mail.gmail.com>

On 13 February 2015 at 20:55, Petr Viktorin <encukou at gmail.com> wrote:
> On Fri, Feb 13, 2015 at 5:43 AM, Martin Teichmann
>> This becomes even worse if you want to add a metaclass to a class.
>> Then you have to change every class that inherits from it. Now you
>> say that code with multiple metaclasses is a bad thing in general,
>> but I don't agree. There are very simple examples where it makes
>> a lot of sense. Say, you want to write a class with traits that
>> implements an abstract base class. Wouldn't it be cool if you could
>> write
>
> This is the one thing keeping metaclasses from being truly general:
> using them limits inheritance.
>
> The problem is that metaclasses are too powerful.

FWIW, that's why http://legacy.python.org/dev/peps/pep-0422/ exists -
it aims to standardise Zope's notion of "class initialisers" so you
can have definition time behaviour that gets inherited by subclasses
(unlike explicit class decorators), without running the risk of
introducing metaclass conflicts, and without making the metaclass
system even more complicated.

Then the cases that can be safely combined with other metaclasses
would merely stop using a custom metaclass on 3.5+, and instead rely
entirely on having an appropriate base class.
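
For reference, the "class initialiser" idea eventually shipped in Python 3.6 as __init_subclass__ (PEP 487, which grew out of PEP 422): inherited definition-time behaviour on a plain base class, no custom metaclass and hence no conflict. A minimal sketch:

```python
class Registered:
    """Base class whose subclasses register themselves at definition
    time -- no metaclass involved, so it combines freely with others."""
    registry = []

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        Registered.registry.append(cls)


class Plugin(Registered):
    pass

assert Registered.registry == [Plugin]
```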

The last round of reviews (back before 3.4 was released) raised a
bunch of interesting questions/challenges, and then somebody (*cough*)
decided it would be a good idea to go spend more time helping with
PyPI ecosystem wrangling instead :)

Regards,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From __peter__ at web.de  Fri Feb 13 13:05:24 2015
From: __peter__ at web.de (Peter Otten)
Date: Fri, 13 Feb 2015 13:05:24 +0100
Subject: [Python-ideas] Namespace creation with syntax short form
References: <CAOtYm-Hx+9K-w3PzxxOtoQcE9OQOWvg9mYVfc3diBnAWxAkYGA@mail.gmail.com>
 <20150213104159.GQ2498@ando.pearwood.info>
Message-ID: <mbkpa6$63l$1@ger.gmane.org>

Steven D'Aprano wrote:

> On Fri, Feb 13, 2015 at 11:19:58AM +0100, Morten Z wrote:
>> Namespaces are widely used in Python, but it appears that creating a
>> namespace
>> has no short form, like [] for list or {} for dict, but requires the
>> lengthy types.SimpleNamespace, with prior import of types.
> 
> from types import SimpleNamespace as NS
> ns = NS(answer=42)
> 
> Python did without a SimpleNamespace type for 20+ years. I don't think
> it is important enough to deserve dedicated syntax, especially yet
> another overloading of parentheses. Not everything needs to be syntax.

If there were syntactic support for on-the-fly namespace "packing" and 
"unpacking" that could have a significant impact on coding style, similar to 
that of named arguments.

Functions often start out returning a tuple and are later modified to return 
a NamedTuple. For backward compatibility reasons you are either stuck with 
the original data or you need a workaround like os.stat_result which has 
more attributes than items. "Named (un)packing" could avoid that.

>>> def f():
...    return a:1, b:2, c:3
>>> f()
(a:1, b:2, c:3)
>>> :c, :b, :a = f() # order doesn't matter
>>> :a, :c = f()     # we don't care about b
>>> a                # by default bound name matches attribute name
1
>>> x:a, y:c = f()   # we want a different name
>>> x, y
(1, 3)
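
For comparison, everything the proposed syntax does above can already be spelled with types.SimpleNamespace today, just more verbosely:

```python
from types import SimpleNamespace


def f():
    # return a:1, b:2, c:3  (proposed)
    return SimpleNamespace(a=1, b=2, c=3)


ns = f()
a, c = ns.a, ns.c        # ":a, :c = f()" -- we don't care about b
x, y = ns.a, ns.c        # "x:a, y:c = f()" -- we want different names
assert (a, c) == (1, 3)
assert (x, y) == (1, 3)
```

The proposal's point is that attribute order doesn't matter and names are checked, which tuple unpacking cannot give you.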



From abarnert at yahoo.com  Fri Feb 13 13:09:10 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Fri, 13 Feb 2015 04:09:10 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CA+=+wqAjhyj=Ud+jhKR4FRKmdo1HkHU-rB7gy1K3ud8hWEwtNA@mail.gmail.com>
References: <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <87twyrm2zx.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CAOvn4qgRm1H+CCaY=H_FsGSgMAWq1E7U_KXbAWvzi2YfF45Y1A@mail.gmail.com>
 <CALFfu7B_JnZTBhW5Ym8ZZP4hbp7JBJspN-ZOj9EMU6MmhToE8A@mail.gmail.com>
 <CALFfu7AEGkQD4J6JfKYmfkMjPBXpy68mDqwsqkOEyyWjnPqvCQ@mail.gmail.com>
 <20150213024521.GJ2498@ando.pearwood.info>
 <88C2AD85-4F6E-4137-9698-E517DC19C3DD@yahoo.com>
 <CA+=+wqD=YBvLG84=PLP48-N1b0Z8QmJNiBs_XL4uh7U3pk1yLw@mail.gmail.com>
 <46A3A068-A2B5-46F8-A8C5-32BACC1EB55A@yahoo.com>
 <CA+=+wqAjhyj=Ud+jhKR4FRKmdo1HkHU-rB7gy1K3ud8hWEwtNA@mail.gmail.com>
Message-ID: <9325E147-9019-43FC-B1CB-E3BEF63E9ACB@yahoo.com>

On Feb 13, 2015, at 3:08, Petr Viktorin <encukou at gmail.com> wrote:

> On Fri, Feb 13, 2015 at 11:02 AM, Andrew Barnert <abarnert at yahoo.com> wrote:
>> On Feb 13, 2015, at 1:26, Petr Viktorin <encukou at gmail.com> wrote:
>> 
>>> On Fri, Feb 13, 2015 at 4:52 AM, Andrew Barnert
>>> <abarnert at yahoo.com.dmarc.invalid> wrote:
>>>> On Feb 12, 2015, at 18:45, Steven D'Aprano <steve at pearwood.info> wrote:
>>>> 
>>>>> On Thu, Feb 12, 2015 at 07:25:24PM -0700, Eric Snow wrote:
>>>>> 
>>>>>> Or just make "dict(a, b, c)" work.
>>>>> 
>>>>> I've come to the same conclusion.
>>> 
>>>>> - Surely this change is minor enough that it doesn't need a PEP? It just
>>>>> needs a patch and approval from a senior developer with commit
>>>>> privileges.
>>>> 
>>>> I'm not sure about this. If you want to change MutableMapping.update too, you'll be potentially breaking all kinds of existing classes that claim to be a MutableMapping and override update with a method with a now-incorrect signature.
>>> 
>>> Is there a mechanism to evolve ABCs? Maybe there should be, something like:
>>> 
>>> - For a few Python versions, say that the new behavior is preferred
>>> but can't be relied upon. Update the MutableMapping default
>>> implementation (this is backwards compatible).
>>> - Possibly, later, have MutableMapping warn when a class with the old
>>> update signature is registered. (With the signature improvements in
>>> py3, this should be possible for 99% cases.) Or just rely on linters
>>> to implement a check.
>>> - Finally switch over completely, so callers can rely on the new
>>> MutableMapping.update
>> 
>> Well, currently, ABCs don't test signatures at all, just the presence of a non-abstract method. So you'd have to change that first, which seems like a pretty big change.
>> 
>> And the hard part is the design, not the coding (as, I suspect, with the abc module in the first place). What do you do to mark an @abstractmethod as "check signature against the new version, and, if not, warn if it's compatible with the previous version, raise otherwise"? For that matter, what is the exact rule for "compatible with the new version" given that the new version is essentially just (self, *args, **kwargs)? How do you specify that rule declaratively (or so we add a __check__(subclass_sig) method to abstract methods and make you do it imperatively)? And so on.
>> 
>> It might be worth seeing what other languages/frameworks do to evolve signatures on their abc/interface/protocol types. I'm not sure there's a good answer to find (unless you want monstrosities like IXMLThingy4::ParseEx3 as in COM), but I wouldn't want to assume that without looking around first.
> 
> Well, by "mechanism" I meant guidelines on how it should work, not a
> general implementation.
> I'd just special-case MutableMapping now, and if it later turns out to
> be useful elsewhere, figure a way to do it declaratively. I'm always
> wary of designing a general mechanism from one example.

I guess this means you need a MutableMappingMeta that inherits ABCMeta and does the extra check on update after calling the super checks on abstract methods? Should be pretty simple (and it's not a problem that update isn't abstract), if a bit ugly.

> I propose the exact rule for "compatible with the new version" to be
> "the signature contains a VAR_POSITIONAL argument", with the check
> being skipped if the signature is unavailable. It's simple, only has
> false negatives if the signature is wrong, and false positives if
> varargs do something else (for this case there should be a warning in
> docs/release notes).

Yeah, if you're special-casing it and coding it imperatively, it's pretty simple.

But shouldn't it also require a var kw param? After all, the signature is supposed to be (self, *iterables_or_mappings, **kwpairs).
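
A hedged sketch of what such a MutableMappingMeta could look like (names and details invented here, not the actual collections.abc implementation): an ABCMeta subclass that warns at class-creation time when update() lacks a *args parameter.

```python
import inspect
import warnings
from abc import ABCMeta


class MMeta(ABCMeta):
    """Warn when a subclass defines update() without a VAR_POSITIONAL
    parameter, i.e. with the pre-change single-mapping signature."""
    def __init__(cls, name, bases, ns):
        super().__init__(name, bases, ns)
        upd = ns.get("update")
        if upd is None:
            return
        try:
            sig = inspect.signature(upd)
        except (ValueError, TypeError):
            return  # signature unavailable: skip the check
        if not any(p.kind is inspect.Parameter.VAR_POSITIONAL
                   for p in sig.parameters.values()):
            warnings.warn(name + ".update has a pre-change signature")


class Good(metaclass=MMeta):
    def update(self, *sources, **kwpairs):
        pass  # new-style signature: no warning


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")

    class Old(metaclass=MMeta):
        def update(self, other=()):
            pass  # old-style signature: triggers the warning

assert len(caught) == 1
```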

From abarnert at yahoo.com  Fri Feb 13 13:40:42 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Fri, 13 Feb 2015 04:40:42 -0800
Subject: [Python-ideas] Namespace creation with syntax short form
In-Reply-To: <mbkpa6$63l$1@ger.gmane.org>
References: <CAOtYm-Hx+9K-w3PzxxOtoQcE9OQOWvg9mYVfc3diBnAWxAkYGA@mail.gmail.com>
 <20150213104159.GQ2498@ando.pearwood.info> <mbkpa6$63l$1@ger.gmane.org>
Message-ID: <D920348D-B937-4738-9FC8-39C211B1A143@yahoo.com>

On Feb 13, 2015, at 4:05, Peter Otten <__peter__ at web.de> wrote:

> Steven D'Aprano wrote:
> 
>> On Fri, Feb 13, 2015 at 11:19:58AM +0100, Morten Z wrote:
>>> Namespaces are widely used in Python, but it appears that creating a
>>> namespace
>>> has no short form, like [] for list or {} for dict, but requires the
>>> lengthy types.SimpleNamespace, with prior import of types.
>> 
>> from types import SimpleNamespace as NS
>> ns = NS(answer=42)
>> 
>> Python did without a SimpleNamespace type for 20+ years. I don't think
>> it is important enough to deserve dedicated syntax, especially yet
>> another overloading of parentheses. Not everything needs to be syntax.
> 
> If there were syntactic support for on-the-fly namespace "packing" and 
> "unpacking" that could have a significant impact on coding style, similar to 
> that of named arguments.
> 
> Functions often start out returning a tuple and are later modified to return 
> a NamedTuple. For backward compatibility reasons you are either stuck with 
> the original data or you need a workaround like os.stat_result which has 
> more attributes than items.

There's really nothing terrible about that workaround, except that it's significantly harder to implement in Python (with namedtuple) than in C (with structseq).

More importantly, I don't see how named unpacking helps that. If your function originally returns a 3-tuple, and people write code that expects a 3-tuple, how do you change it to return a 3-tuple that's also a 4-namespace under your proposal?

Meanwhile:

> "Named (un)packing" could avoid that.
> 
>>>> def f():
> ...    return a:1, b:2, c:3
>>>> f()
> (a:1, b:2, c:3)

So what type is that? Is there a single type for all "named packing tuples" (like SimpleNamespace), or a separate type for each one (like namedtuple), or some kind of magic that makes issubclass(a, b) true if b has the same names as a plus optionally more names at the end, or ...?

How does this interact with existing unpacking? Can I extract the values as a tuple, e.g., "x, *y = f()" to get x=1 and y=(2, 3)? Or the key-value pairs with "**kw = f()"? Or maybe ::kw to get them as a namespace instead of a dict? Can I mix them, with "x, *y, z:c" to get 1, (2,), and 3 (and likewise with **kw or ::ns)?

Does this only work with magic namespaces, or with anything with attributes? For example, if I have a Point class, does "x:x = pt" do the same thing as "x = pt.x", or does that only work if I don't use a Point class and instead pass around (x:3, y:4) with no type information?

Since the parens are clearly optional, does that mean that "a:3" is a single-named namespace object, or do you need the comma as with tuples? (I think the former may make parsing ambiguous, but I haven't thought it through...)

>>>> :c, :b, :a = f() # order doesn't matter
>>>> :a, :c = f()     # we don't care about b
>>>> a                # by default bound name matches attribute name

Why do you need, or want, that last comment? With similar features, you have to be explicit; e.g., if you want to pass a local variable named a to a keyword parameter named a, you have to write a=a, not just =a. How is this case different?

> 1
>>>> x:a, y:c = f()   # we want a different name
>>>> x, y
> (1, 3)

From encukou at gmail.com  Fri Feb 13 13:50:55 2015
From: encukou at gmail.com (Petr Viktorin)
Date: Fri, 13 Feb 2015 13:50:55 +0100
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <9325E147-9019-43FC-B1CB-E3BEF63E9ACB@yahoo.com>
References: <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <87twyrm2zx.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CAOvn4qgRm1H+CCaY=H_FsGSgMAWq1E7U_KXbAWvzi2YfF45Y1A@mail.gmail.com>
 <CALFfu7B_JnZTBhW5Ym8ZZP4hbp7JBJspN-ZOj9EMU6MmhToE8A@mail.gmail.com>
 <CALFfu7AEGkQD4J6JfKYmfkMjPBXpy68mDqwsqkOEyyWjnPqvCQ@mail.gmail.com>
 <20150213024521.GJ2498@ando.pearwood.info>
 <88C2AD85-4F6E-4137-9698-E517DC19C3DD@yahoo.com>
 <CA+=+wqD=YBvLG84=PLP48-N1b0Z8QmJNiBs_XL4uh7U3pk1yLw@mail.gmail.com>
 <46A3A068-A2B5-46F8-A8C5-32BACC1EB55A@yahoo.com>
 <CA+=+wqAjhyj=Ud+jhKR4FRKmdo1HkHU-rB7gy1K3ud8hWEwtNA@mail.gmail.com>
 <9325E147-9019-43FC-B1CB-E3BEF63E9ACB@yahoo.com>
Message-ID: <CA+=+wqA6=ugZiZspDK1vyjex8dP2F4auY-frsRNfPVB791akvw@mail.gmail.com>

On Fri, Feb 13, 2015 at 1:09 PM, Andrew Barnert <abarnert at yahoo.com> wrote:
> On Feb 13, 2015, at 3:08, Petr Viktorin <encukou at gmail.com> wrote:
>
>> On Fri, Feb 13, 2015 at 11:02 AM, Andrew Barnert <abarnert at yahoo.com> wrote:
>>> On Feb 13, 2015, at 1:26, Petr Viktorin <encukou at gmail.com> wrote:
>>>
>>>> On Fri, Feb 13, 2015 at 4:52 AM, Andrew Barnert
>>>> <abarnert at yahoo.com.dmarc.invalid> wrote:
>>>>> On Feb 12, 2015, at 18:45, Steven D'Aprano <steve at pearwood.info> wrote:
>>>>>
>>>>>> On Thu, Feb 12, 2015 at 07:25:24PM -0700, Eric Snow wrote:
>>>>>>
>>>>>>> Or just make "dict(a, b, c)" work.
>>>>>>
>>>>>> I've come to the same conclusion.
>>>>
>>>>>> - Surely this change is minor enough that it doesn't need a PEP? It just
>>>>>> needs a patch and approval from a senior developer with commit
>>>>>> privileges.
>>>>>
>>>>> I'm not sure about this. If you want to change MutableMapping.update too, you'll be potentially breaking all kinds of existing classes that claim to be a MutableMapping and override update with a method with a now-incorrect signature.
>>>>
>>>> Is there a mechanism to evolve ABCs? Maybe there should be, something like:
>>>>
>>>> - For a few Python versions, say that the new behavior is preferred
>>>> but can't be relied upon. Update the MutableMapping default
>>>> implementation (this is backwards compatible).
>>>> - Possibly, later, have MutableMapping warn when a class with the old
>>>> update signature is registered. (With the signature improvements in
>>>> py3, this should be possible for 99% cases.) Or just rely on linters
>>>> to implement a check.
>>>> - Finally switch over completely, so callers can rely on the new
>>>> MutableMapping.update
>>>
>>> Well, currently, ABCs don't test signatures at all, just the presence of a non-abstract method. So you'd have to change that first, which seems like a pretty big change.
>>>
>>> And the hard part is the design, not the coding (as, I suspect, with the abc module in the first place). What do you do to mark an @abstractmethod as "check signature against the new version, and, if not, warn if it's compatible with the previous version, raise otherwise"? For that matter, what is the exact rule for "compatible with the new version" given that the new version is essentially just (self, *args, **kwargs)? How do you specify that rule declaratively (or so we add a __check__(subclass_sig) method to abstract methods and make you do it imperatively)? And so on.
>>>
>>> It might be worth seeing what other languages/frameworks do to evolve signatures on their abc/interface/protocol types. I'm not sure there's a good answer to find (unless you want monstrosities like IXMLThingy4::ParseEx3 as in COM), but I wouldn't want to assume that without looking around first.
>>
>> Well, by "mechanism" I meant guidelines on how it should work, not a
>> general implementation.
>> I'd just special-case MutableMapping now, and if it later turns out to
>> be useful elsewhere, figure a way to do it declaratively. I'm always
>> wary of designing a general mechanism from one example.
>
> I guess this means you need a MutableMappingMeta that inherits ABCMeta and does the extra check on update after calling the super checks on abstract methods? Should be pretty simple (and it's not a problem that update isn't abstract), if a bit ugly.

Yes. The ugliness makes me wonder if the warning is worth it.
I expect that popular libraries will change pretty fast after the new
update becomes preferred, and other code will just wait until this
hits them, not until they get a warning or until the old way is
officially dropped from the docs.
(FWIW, I don't see update() signature/mechanics documented on
https://docs.python.org/3/library/collections.abc.html. Is this
elsewhere?)

>> I propose the exact rule for "compatible with the new version" to be
>> "the signature contains a VAR_POSITIONAL argument", with the check
>> being skipped if the signature is unavailable. It's simple, only has
>> false negatives if the signature is wrong, and false positives if
>> varargs do something else (for this case there should be a warning in
>> docs/release notes).
>
> Yeah, if you're special-casing it and coding it imperatively, it's pretty simple.
>
> But shouldn't it also require a var kw param? After all, the signature is supposed to be (self, *iterables_or_mappings, **kwpairs).

That part doesn't change, so I don't see a point in checking it.
If we were doing full signature checking, then yes - but I think that
is best left to linter tools.

From encukou at gmail.com  Fri Feb 13 14:02:49 2015
From: encukou at gmail.com (Petr Viktorin)
Date: Fri, 13 Feb 2015 14:02:49 +0100
Subject: [Python-ideas] A (meta)class algebra
In-Reply-To: <CADiSq7dBUzo4Yt9DT6B1KNjJ6s4azHw-Xv0ZYXgNATtfo5nK_A@mail.gmail.com>
References: <CAK9R32TcAcaeAWAChTkM5qkw+k7OzkMMg+pt=2LW-w4+vZMs4g@mail.gmail.com>
 <CA+=+wqCoPo8WH9SqOzfqps6Jr66zKs+d_nqXOCujkpha7Ln3_w@mail.gmail.com>
 <CADiSq7dBUzo4Yt9DT6B1KNjJ6s4azHw-Xv0ZYXgNATtfo5nK_A@mail.gmail.com>
Message-ID: <CA+=+wqBGNGL9L6O2oO_NUnLW-+SmVaiyG1SATNxLayhK+w7JtQ@mail.gmail.com>

On Fri, Feb 13, 2015 at 12:59 PM, Nick Coghlan <ncoghlan at gmail.com> wrote:
> On 13 February 2015 at 20:55, Petr Viktorin <encukou at gmail.com> wrote:
>> On Fri, Feb 13, 2015 at 5:43 AM, Martin Teichmann
>>> This becomes even worse if you want to add a metaclass to a class.
>>> Then you have to change every class that inherits from it. Now you
>>> say that code with multiple metaclasses is a bad thing in general,
>>> but I don't agree. There are very simple examples where it makes
>>> a lot of sense. Say, you want to write a class with traits that
>>> implements an abstract base class. Wouldn't it be cool if you could
>>> write
>>
>> This is the one thing keeping metaclasses from being truly general:
>> using them limits inheritance.
>>
>> The problem is that metaclasses are too powerful.
>
> FWIW, that's why http://legacy.python.org/dev/peps/pep-0422/ exists -
> it aims to standardise Zope's notion of "class initialisers" so you
> can have definition time behaviour that gets inherited by subclasses
> (unlike explicit class decorators), without running the risk of
> introducing metaclass conflicts, and without making the metaclass
> system even more complicated.
>
> Then the cases that can be safely combined with other metaclasses
> would merely stop using a custom metaclass on 3.5+, and instead rely
> entirely on having an appropriate base class.
>
> The last round of reviews (back before 3.4 was released) raised a
> bunch of interesting questions/challenges, and then somebody (*cough*)
> decided it would be a good idea to go spend more time helping with
> PyPI ecosystem wrangling instead :)
>
> Regards,
> Nick.

Oh! I missed that PEP.

I think this is a much better way to go. It defines a simple thing,
rather than providing a way for a powerful thing to say "I'm simple!"
:)

Martin, would this proposal work for your needs?

From __peter__ at web.de  Fri Feb 13 14:20:48 2015
From: __peter__ at web.de (Peter Otten)
Date: Fri, 13 Feb 2015 14:20:48 +0100
Subject: [Python-ideas] Namespace creation with syntax short form
References: <CAOtYm-Hx+9K-w3PzxxOtoQcE9OQOWvg9mYVfc3diBnAWxAkYGA@mail.gmail.com>
 <20150213104159.GQ2498@ando.pearwood.info> <mbkpa6$63l$1@ger.gmane.org>
 <D920348D-B937-4738-9FC8-39C211B1A143@yahoo.com>
Message-ID: <mbktnl$mrl$1@ger.gmane.org>

Andrew Barnert wrote:

> On Feb 13, 2015, at 4:05, Peter Otten
> <__peter__ at web.de> wrote:
> 
>> Steven D'Aprano wrote:
>> 
>>> On Fri, Feb 13, 2015 at 11:19:58AM +0100, Morten Z wrote:
>>>> Namespaces are widely used in Python, but it appears that creating a
>>>> namespace
>>>> has no short form, like [] for list or {} for dict, but requires the
>>>> lengthy types.SimpleNamespace, with prior import of types.
>>> 
>>> from types import SimpleNamespace as NS
>>> ns = NS(answer=42)
>>> 
>>> Python did without a SimpleNamespace type for 20+ years. I don't think
>>> it is important enough to deserve dedicated syntax, especially yet
>>> another overloading of parentheses. Not everything needs to be syntax.
>> 
>> If there were syntactic support for on-the-fly namespace "packing" and
>> "unpacking" that could have a significant impact on coding style, similar
>> to that of named arguments.
>> 
>> Functions often start out returning a tuple and are later modified to
>> return a NamedTuple. For backward compatibility reasons you are either
>> stuck with the original data or you need a workaround like os.stat_result
>> which has more attributes than items.
> 
> There's really nothing terrible about that workaround, except that it's
> significantly harder to implement in Python (with namedtuple) than in C
> (with structseq).
> 
> More importantly, I don't see how named unpacking helps that. If your
> function originally returns a 3-tuple, and people write code that expects
> a 3-tuple, how do you change it to return a 3-tuple that's also a
> 4-namespace under your proposal?
> 
> Meanwhile:
> 
>> "Named (un)packing" could avoid that.
>> 
>>>>> def f():
>> ...    return a:1, b:2, c:3
>>>>> f()
>> (a:1, b:2, c:3)
> 
> So what type is that? Is there a single type for all "named packing
> tuples" (like SimpleNamespace), or a separate type for each one (like
> namedtuple), or some kind of magic that makes issubclass(a, b) true if b
> has the same names as a plus optionally more names at the end, or ...?

I'm a proponent of duck typing, so

a:1, b:2, c:3

has to create an object with three attributes "a", "b", and "c" and

:a, :b, :c = x

works on any x with three attributes "a", "b", and "c". Implementation-wise

> some kind of magic that makes issubclass(a, b) true if b
> has the same names as a plus optionally more names at the end, or ...?

would be OK, but as the names have no order there is also no "end".
In general this should start as restrictive as possible, e.g.
issubclass(type(a), type(b)) == a is b wouldn't bother me as far as 
usability is concerned.

> How does this interact with existing unpacking? Can I extract the values
> as a tuple, e.g., "x, *y = f()" to get x=1 and y=(2, 3)? Or the key-value
> pairs with "**kw = f()"? Or maybe ::kw to get them as a namespace instead
> of a dict? Can I mix them, with "x, *y, z:c" to get 1, (2,), and 3 (and
> likewise with **kw or ::ns)?

I'd rather not mix positional and named (un)packing.

> Does this only work with magic namespaces, or with anything with
> attributes? For example, if I have a Point class, "does x:x = pt" do the
> same thing as "x = pt.x", or does that only work if I don't use a Point
> class and instead pass around (x:3, y:4) with no type information?
> 
> Since the parens are clearly optional, does that mean that "a:3" is a
> single-named namespace object, or do you need the comma as with tuples? (I
> think the former may make parsing ambiguous, but I haven't thought it
> through...)

No comma, please, if at all possible.
 
>>>>> :c, :b, :a = f() # order doesn't matter
>>>>> :a, :c = f()     # we don't care about b
>>>>> a                # by default bound name matches attribute name
> 
> Why do you need, or want, that last comment? With similar features, you
> have to be explicit; e.g., if you want to pass a local variable named a to
> a keyword parameter named a, you have to write a=a, not just =a. How is
> this case different?

I expect the bound name to match that of the attribute ninety-nine 
percent of the time or so. Other than that -- no difference.
 
>> 1
>>>>> x:a, y:c = f()   # we want a different name
>>>>> x, y
>> (1, 3)
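For comparison, the closest spelling available today goes through
types.SimpleNamespace for the packing and operator.attrgetter for the
named unpacking (f and the variable names are just the examples from
above):

```python
from operator import attrgetter
from types import SimpleNamespace

def f():
    # proposed: return a:1, b:2, c:3
    return SimpleNamespace(a=1, b=2, c=3)

# proposed: :c, :b, :a = f()   -- order doesn't matter
c, b, a = attrgetter('c', 'b', 'a')(f())

# proposed: x:a, y:c = f()     -- bind the attributes to different names
x, y = attrgetter('a', 'c')(f())

print(a, b, c)  # 1 2 3
print(x, y)     # 1 3
```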



From lkb.teichmann at gmail.com  Fri Feb 13 14:27:23 2015
From: lkb.teichmann at gmail.com (Martin Teichmann)
Date: Fri, 13 Feb 2015 14:27:23 +0100
Subject: [Python-ideas] A (meta)class algebra
In-Reply-To: <CA+=+wqBGNGL9L6O2oO_NUnLW-+SmVaiyG1SATNxLayhK+w7JtQ@mail.gmail.com>
References: <CAK9R32TcAcaeAWAChTkM5qkw+k7OzkMMg+pt=2LW-w4+vZMs4g@mail.gmail.com>
 <CA+=+wqCoPo8WH9SqOzfqps6Jr66zKs+d_nqXOCujkpha7Ln3_w@mail.gmail.com>
 <CADiSq7dBUzo4Yt9DT6B1KNjJ6s4azHw-Xv0ZYXgNATtfo5nK_A@mail.gmail.com>
 <CA+=+wqBGNGL9L6O2oO_NUnLW-+SmVaiyG1SATNxLayhK+w7JtQ@mail.gmail.com>
Message-ID: <CAK9R32TO3EDtcB4K6oYp3SPqHbVkyErA9svULLSCuPP5Y2E6eA@mail.gmail.com>

Hi,

me too, I missed that PEP, and yes, it would certainly work
for all my needs.

In short words, the PEP reads: forget all that metaclass stuff,
here comes the real solution. Maybe we can even abandon
metaclasses altogether in Python 4; I think no one would shed
a tear over them.

But if the metaclass fans should win the battle, I would like
my proposal included sometime.

Greetings

Martin

From steve at pearwood.info  Fri Feb 13 14:38:49 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Sat, 14 Feb 2015 00:38:49 +1100
Subject: [Python-ideas] Namespace creation with syntax short form
In-Reply-To: <mbktnl$mrl$1@ger.gmane.org>
References: <CAOtYm-Hx+9K-w3PzxxOtoQcE9OQOWvg9mYVfc3diBnAWxAkYGA@mail.gmail.com>
 <20150213104159.GQ2498@ando.pearwood.info> <mbkpa6$63l$1@ger.gmane.org>
 <D920348D-B937-4738-9FC8-39C211B1A143@yahoo.com> <mbktnl$mrl$1@ger.gmane.org>
Message-ID: <20150213133849.GR2498@ando.pearwood.info>

On Fri, Feb 13, 2015 at 02:20:48PM +0100, Peter Otten wrote:

> a:1, b:2, c:3
> 
> has to create an object with three attributes "a", "b", and "c" and

This is ambiguous. Is it one object with three attributes, or a tuple of 
three namespace objects, each with one attribute?

 
> :a, :b, :c = x
> 
> works on any x with three attributes "a", "b", and "c". Implementationwise

I don't like the way that the colon jumps around from after the name to 
before the name. I don't like the way attributes are sometimes written 
with a dot and sometimes a colon:

x = a:1, b:2
process(x.a)
process(x:b)  # oops.


I think it is awfully mysterious syntax and beginners are going to have 
a terrible time working out when they should use dot and when they 
should use colon.



-- 
Steven

From __peter__ at web.de  Fri Feb 13 15:00:58 2015
From: __peter__ at web.de (Peter Otten)
Date: Fri, 13 Feb 2015 15:00:58 +0100
Subject: [Python-ideas] Namespace creation with syntax short form
References: <CAOtYm-Hx+9K-w3PzxxOtoQcE9OQOWvg9mYVfc3diBnAWxAkYGA@mail.gmail.com>
 <20150213104159.GQ2498@ando.pearwood.info> <mbkpa6$63l$1@ger.gmane.org>
 <D920348D-B937-4738-9FC8-39C211B1A143@yahoo.com> <mbktnl$mrl$1@ger.gmane.org>
 <20150213133849.GR2498@ando.pearwood.info>
Message-ID: <mbl02r$3p2$1@ger.gmane.org>

Steven D'Aprano wrote:

> On Fri, Feb 13, 2015 at 02:20:48PM +0100, Peter Otten wrote:
> 
>> a:1, b:2, c:3
>> 
>> has to create an object with three attributes "a", "b", and "c" and
> 
> This is ambiguous. Is it one object with three attributes, or a tuple of
> three namespace objects, each with one attribute?
> 
>  
>> :a, :b, :c = x
>> 
>> works on any x with three attributes "a", "b", and "c".
>> Implementation-wise
> 
> I don't like the way that the colon jumps around from after the name to
> before the name. I don't like the way attributes are sometimes written
> with a dot and sometimes a colon:
> 
> x = a:1, b:2
> process(x.a)
> process(x:b)  # oops.
> 
> 
> I think it is awfully mysterious syntax and beginners are going to have
> a terrible time working out when they should use dot and when they
> should use colon.

I regret that I let Andrew lure me into a discussion of those details so 
soon. 

The point of my original answer was that if an acceptable syntax can be 
found real code can profit from it.

Personally, I would be OK with limiting this construct to return statements 
and the lhs of an assignment, but I doubt that this will gather the 
necessary support.


From ericsnowcurrently at gmail.com  Fri Feb 13 16:03:22 2015
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Fri, 13 Feb 2015 08:03:22 -0700
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <20150213062100.GM2498@ando.pearwood.info>
References: <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <87twyrm2zx.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CAOvn4qgRm1H+CCaY=H_FsGSgMAWq1E7U_KXbAWvzi2YfF45Y1A@mail.gmail.com>
 <CALFfu7B_JnZTBhW5Ym8ZZP4hbp7JBJspN-ZOj9EMU6MmhToE8A@mail.gmail.com>
 <CALFfu7AEGkQD4J6JfKYmfkMjPBXpy68mDqwsqkOEyyWjnPqvCQ@mail.gmail.com>
 <20150213024521.GJ2498@ando.pearwood.info>
 <88C2AD85-4F6E-4137-9698-E517DC19C3DD@yahoo.com>
 <20150213062100.GM2498@ando.pearwood.info>
Message-ID: <CALFfu7Ax+_TFC1py5E1_jJZG=VPZcASkNuS-Te4LFqoSPGBF4Q@mail.gmail.com>

On Thu, Feb 12, 2015 at 11:21 PM, Steven D'Aprano <steve at pearwood.info> wrote:
> On Thu, Feb 12, 2015 at 07:52:09PM -0800, Andrew Barnert wrote:
>
>> > - Surely this change is minor enough that it doesn't need a PEP? It just
>> >  needs a patch and approval from a senior developer with commit
>> >  privileges.
>>
>> I'm not sure about this. If you want to change MutableMapping.update
>> too, you'll be potentially breaking all kinds of existing classes that
>> claim to be a MutableMapping and override update with a method with a
>> now-incorrect signature.
>
> Fair enough. It's a bigger change than I thought, but I still think it
> is worth doing.

Is there any need to change MutableMapping.update, at least for now?
Why not just focus on the signature of dict() and dict.update()?
dict.update() would continue to be compatible with a
single-positional-arg-signature MutableMapping.update so that
shouldn't be too big a deal.
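For concreteness, the dict()/dict.update() semantics under discussion
boil down to something like this pure-Python helper (merged is a
hypothetical name, not a proposed spelling):

```python
def merged(*dicts, **kwargs):
    """Merge mappings left to right; later keys win, as with dict.update()."""
    out = {}
    for d in dicts:
        out.update(d)
    out.update(kwargs)
    return out

print(merged({'x': 1, 'y': 2}, {'x': 3}, z=4))  # {'x': 3, 'y': 2, 'z': 4}
```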

-eric

From encukou at gmail.com  Fri Feb 13 16:27:12 2015
From: encukou at gmail.com (Petr Viktorin)
Date: Fri, 13 Feb 2015 16:27:12 +0100
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CALFfu7Ax+_TFC1py5E1_jJZG=VPZcASkNuS-Te4LFqoSPGBF4Q@mail.gmail.com>
References: <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <87twyrm2zx.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CAOvn4qgRm1H+CCaY=H_FsGSgMAWq1E7U_KXbAWvzi2YfF45Y1A@mail.gmail.com>
 <CALFfu7B_JnZTBhW5Ym8ZZP4hbp7JBJspN-ZOj9EMU6MmhToE8A@mail.gmail.com>
 <CALFfu7AEGkQD4J6JfKYmfkMjPBXpy68mDqwsqkOEyyWjnPqvCQ@mail.gmail.com>
 <20150213024521.GJ2498@ando.pearwood.info>
 <88C2AD85-4F6E-4137-9698-E517DC19C3DD@yahoo.com>
 <20150213062100.GM2498@ando.pearwood.info>
 <CALFfu7Ax+_TFC1py5E1_jJZG=VPZcASkNuS-Te4LFqoSPGBF4Q@mail.gmail.com>
Message-ID: <CA+=+wqD7L5h_wBic57j4KQuzxBXqV5CzE+hy6=P+odpyjJDGAw@mail.gmail.com>

On Fri, Feb 13, 2015 at 4:03 PM, Eric Snow <ericsnowcurrently at gmail.com> wrote:
> On Thu, Feb 12, 2015 at 11:21 PM, Steven D'Aprano <steve at pearwood.info> wrote:
>> On Thu, Feb 12, 2015 at 07:52:09PM -0800, Andrew Barnert wrote:
>>
>>> > - Surely this change is minor enough that it doesn't need a PEP? It just
>>> >  needs a patch and approval from a senior developer with commit
>>> >  privileges.
>>>
>>> I'm not sure about this. If you want to change MutableMapping.update
>>> too, you'll be potentially breaking all kinds of existing classes that
>>> claim to be a MutableMapping and override update with a method with a
>>> now-incorrect signature.
>>
>> Fair enough. It's a bigger change than I thought, but I still think it
>> is worth doing.
>
> Is there any need to change MutableMapping.update, at least for now?
> Why not just focus on the signature of dict() and dict.update()?
> dict.update() would continue to be compatible with a
> single-positional-arg-signature MutableMapping.update so that
> shouldn't be too big a deal.

At this point this would just be a doc change (actually, a doc
addition): specifying semantics for multiple positional arguments, and
saying that the new signature is preferred.
I'm not proposing to add the warning now.

From barry at python.org  Fri Feb 13 17:26:10 2015
From: barry at python.org (Barry Warsaw)
Date: Fri, 13 Feb 2015 11:26:10 -0500
Subject: [Python-ideas] Gettext syntax (was Re: Allow "assigning" to ...)
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <20150211164615.1fe33cab@fsol>
 <CADiSq7cTrFzhy30zjn0Wz6wacqSQH_1iyw==UJomCL6b8xr9_Q@mail.gmail.com>
 <mbh6u0$fr7$1@ger.gmane.org>
 <20150212110929.1cc16d6c@anarchist.wooz.org>
 <mbjkks$cec$1@ger.gmane.org>
Message-ID: <20150213112610.0ddafe47@anarchist.wooz.org>

On Feb 12, 2015, at 08:39 PM, Terry Reedy wrote:

>But I don't like turning code like this (idlelib/Binding.py)
>
>(_'file', [
>    ('_New File', '<<open-new-window>>'),
>    ('_Open...', '<<open-window-from-file>>'),
>    ('Open _Module...', '<<open-module>>'),
>    ('Class _Browser', '<<open-class-browser>>'),
>    ('_Path Browser', '<<open-path-browser>>'),
>    ... and so on for another 59 lines
>
>loaded with tuple parens and _Hotkey markers into the following with () and _
>doing double duty and doubled in number.
>
>  (_('file'), [
>    (_('_New File'), '<<open-new-window>>'),
>    (_('_Open...'), '<<open-window-from-file>>'),
>    (_('Open _Module...'), '<<open-module>>'),
>    (_('Class _Browser'), '<<open-class-browser>>'),
>    (_('_Path Browser'), '<<open-path-browser>>'),
>    ... and so on for another 59 lines

I agree it looks cluttered, but I'm not sure how you could improve things.
You have to be able to mark some strings as translatable, while other strings
are not.  E.g. I never translate log output because that is typically not
consumed by the end user, and if it were me-as-developer trying to parse
Russian log output to find a bug, I wouldn't get very far. :)  Plus, there are
plenty of strings in a typical program that also aren't for consumption by the
end user.

>> Tools/i18n/pygettext.py certainly can.
>
>Are there any tests of this script, especially for 3.x, other than the two
>examples at the end?

Probably not.  Since GNU xgettext learned how to parse Python, our
script hasn't seen much love.

>I18n of existing code would be easier if Tools/i18n had a stringmark.py
>script that would either mark all strings with _() (or alternative) or
>interactively let one say yes or no to each.

I'm not sure what you're asking for; do you want something you could point at
a Python file and it would rewrite it after interactively asking for
translatability or not?  Hmm, maybe that would be cool.  I look forward to the
contribution. <wink>

Cheers,
-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: OpenPGP digital signature
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150213/0a939966/attachment.sig>

From chris.barker at noaa.gov  Fri Feb 13 17:55:36 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Fri, 13 Feb 2015 08:55:36 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <20150213065635.GO2498@ando.pearwood.info>
References: <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <87twyrm2zx.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CAOvn4qgRm1H+CCaY=H_FsGSgMAWq1E7U_KXbAWvzi2YfF45Y1A@mail.gmail.com>
 <CALFfu7B_JnZTBhW5Ym8ZZP4hbp7JBJspN-ZOj9EMU6MmhToE8A@mail.gmail.com>
 <CALFfu7AEGkQD4J6JfKYmfkMjPBXpy68mDqwsqkOEyyWjnPqvCQ@mail.gmail.com>
 <20150213024521.GJ2498@ando.pearwood.info>
 <CALGmxELbw0BttRAPBJysVASUgHVVHLd1ydMVsiihfF2cm2kfZg@mail.gmail.com>
 <20150213065635.GO2498@ando.pearwood.info>
Message-ID: <CALGmxELUoATTLamXggBn4dk-45U5mrr83qRjKf46d5oNKv=r7Q@mail.gmail.com>

On Thu, Feb 12, 2015 at 10:56 PM, Steven D'Aprano <steve at pearwood.info>
wrote:

> And the difference between:
>
>     default_settings + global_settings + user_settings
>
> versus:
>
>     dict(default_settings, global_settings, user_settings)
>
> is only four characters. If you're using a, b, c as variable names in
> production, you probably have bigger problems than an extra four
> characters :-)


and the difference between:

  a + b

and

  a.sum(b)

is only six characters.

I _think_ your point is that the math symbols are for math, so you're not
suggesting that we'd all be better off without any infix operators at all.
But that ship has sailed -- Python overloads infix operators. And it does
it for common usage for built-ins.

So adding + and += for dicts fits right in with all this.


> Some people here find + to be a plausible operator. I don't. I react to
> using + for things which aren't addition in much the same way that I
> expect you would react if I suggested we used ** as the operator. "Why
> on earth would you want to use ** it's nothing like exponentiation!"
>

merging two dicts certainly is _something_ like addition. I'd agree about
not re-using other arbitrary operators just because they exist.


> Some languages may be able to optimize a + b + c + d to avoid making and
> throwing away the intermediate dicts, but in general Python cannot.



> So
> it is going to fall into the same trap as list addition does. While it
> is not common for this to lead to severe performance degradation, when
> it does happen the cost is *so severe* that we should think long and
> hard before adding any more O(N**2) booby traps into the language.


well, it seems there are various proposals in this thread to create a
shorthand for "merging two dicts into a new third dict" -- wouldn't those
create the same booby traps?

And the booby traps aren't in code like:

a + b + c + d -- folks don't generally put hundreds of items in a line like
that. The traps come when you do it in a loop:

start = {}
for d in a_bunch_of_dicts:
    start = start + d

( or sum() ) -- of course, in the above loop, if you had +=, you'd be fine.
So it may be that adding + for mutables is not the same trap, as long as
you add the +=, too. (can't put += in a comprehension, though)


But this:
"Some languages may be able to optimize a + b + c + d "

Got me thinking -- this is actually a really core performance problem for
numpy. In that case, the operators are really being used for math, so you
can't argue that they shouldn't be supported for that reason -- and we
really want readable code:

y = a*x**2 + b*x + c

really reads well, but it does create a lot of temporaries that kill
performance for large arrays. You can optimize that by hand by doing
something like:

y = x**2
y *= a
y += b*x
y += c

which really reads poorly!
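The cost is easy to see with a toy stand-in for an array type that counts
how many full-size buffers each spelling allocates (Arr is purely
illustrative, not numpy):

```python
class Arr:
    """Toy array: each construction models allocating a full-size buffer."""
    allocations = 0

    def __init__(self, data):
        Arr.allocations += 1
        self.data = list(data)

    def __add__(self, other):
        return Arr(x + y for x, y in zip(self.data, other.data))

    def __iadd__(self, other):
        for i, y in enumerate(other.data):
            self.data[i] += y
        return self

a, b, c = Arr([1.0, 2.0]), Arr([3.0, 4.0]), Arr([5.0, 6.0])

Arr.allocations = 0
result = a + b + c          # builds (a+b), then (a+b)+c
print(Arr.allocations)      # 2 -- two throwaway temporaries

Arr.allocations = 0
acc = Arr(a.data)           # one working buffer...
acc += b                    # ...updated in place
acc += c
print(Arr.allocations)      # 1
```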

So I've thought for years that we should have a "numpython" interpreter
that would parse out each expression, check its types, and if they were all
numpy arrays, generate an optimized version that avoided temporaries, maybe
even did nifty things like do the operations in cache-friendly blocks,
multi thread them, whatever. (there is a package called numexpr that does
all these things already).


But maybe cPython itself could do an optimization step like that -- examine
an entire expression for types, and if they all support the right
operations, re-structure it in a more optimized way.

Granted, doing this in the general case would be pretty impossible, but if
the hooks are there, then individual type-based optimizations could be done
-- like the current interpreter has for adding strings.

OK -- gotten pretty OT here....

-Chris


-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150213/b6e45228/attachment-0001.html>

From antony.lee at berkeley.edu  Fri Feb 13 18:27:56 2015
From: antony.lee at berkeley.edu (Antony Lee)
Date: Fri, 13 Feb 2015 09:27:56 -0800
Subject: [Python-ideas] A (meta)class algebra
In-Reply-To: <CAK9R32TO3EDtcB4K6oYp3SPqHbVkyErA9svULLSCuPP5Y2E6eA@mail.gmail.com>
References: <CAK9R32TcAcaeAWAChTkM5qkw+k7OzkMMg+pt=2LW-w4+vZMs4g@mail.gmail.com>
 <CA+=+wqCoPo8WH9SqOzfqps6Jr66zKs+d_nqXOCujkpha7Ln3_w@mail.gmail.com>
 <CADiSq7dBUzo4Yt9DT6B1KNjJ6s4azHw-Xv0ZYXgNATtfo5nK_A@mail.gmail.com>
 <CA+=+wqBGNGL9L6O2oO_NUnLW-+SmVaiyG1SATNxLayhK+w7JtQ@mail.gmail.com>
 <CAK9R32TO3EDtcB4K6oYp3SPqHbVkyErA9svULLSCuPP5Y2E6eA@mail.gmail.com>
Message-ID: <CAGRr6BE+DMOyNOUC0DzdyhfP5zb3tk1voKxU9N0qYW=5oNxPwg@mail.gmail.com>

While PEP422 is somewhat useful for heavy users of metaclasses (actually
not necessarily so heavy; as mentioned in the thread you'll hit the problem
as soon as you want to make e.g. a QObject also instance of another
metaclass), I don't like it too much because it feels really ad hoc (adding
yet another layer to the already complex class initialization machinery),
whereas dynamically creating the sub-metaclass (inheriting from all the
required classes) seems to be the "right" solution.  In a sense, using an
explicit and ad hoc mixer ("__add__") would be like saying that every class
should know when it is part of a multiple inheritance chain, whereas the
implicit ("multi_meta") mixer just relies on Python's MRO to do the right
thing.  Of course, best would be that the "multi_meta" step be directly
implemented at the language level, and I agree that having to add it
manually isn't too beautiful.

__init_class__ could then simply be implemented as part of a normal
metaclass, from which you could inherit without worrying about other
metaclasses.

Antony

2015-02-13 5:27 GMT-08:00 Martin Teichmann <lkb.teichmann at gmail.com>:

> Hi,
>
> me too, I missed that PEP, and yes, it would certainly work
> for all my needs.
>
> In short words, the PEP reads: forget all that metaclass stuff,
> here comes the real solution. Maybe we can even abandon
> metaclasses altogether in Python 4; I think no one would shed
> a tear over them.
>
> But if the metaclass fans should win the battle, I would like
> my proposal included sometime.
>
> Greetings
>
> Martin
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>

From guido at python.org  Fri Feb 13 18:35:59 2015
From: guido at python.org (Guido van Rossum)
Date: Fri, 13 Feb 2015 09:35:59 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CALGmxEJggLB4steNVPF060Ou-BV6PxhSuA_dbBHOHWivhfe=8Q@mail.gmail.com>
References: <CALGmxELKJ7Lyahk__97+ANf_A+jf_cw3d1X0vt1j9NbyEgNs0Q@mail.gmail.com>
 <CAPJVwBkiU8zbYfTwmoZL80mAZ5guJGcgziZdTOQL8JuA_3s4WA@mail.gmail.com>
 <CADiSq7eokB6tFTAt8J710NPKuvzdgu7Cg4ewg0fgZ4czDxNtpg@mail.gmail.com>
 <20150125130326.GG20517@ando.pearwood.info>
 <CALGmxEJ+GxY8vjJ-ciS00WwhcFNd1xd1A+6XTiZm9aSozjSUow@mail.gmail.com>
 <20150126063932.GI20517@ando.pearwood.info>
 <CACac1F9LRN+hRv066DxTsjhwwFYCx0tvngsO46hBRGkc-=_zug@mail.gmail.com>
 <20150127030800.GQ20517@ando.pearwood.info>
 <CAJS_LCX_KpDsfF1zaYiLaDnP9uYNZrUDysr=6Vh6PRajwR=9hg@mail.gmail.com>
 <CALGmxEJggLB4steNVPF060Ou-BV6PxhSuA_dbBHOHWivhfe=8Q@mail.gmail.com>
Message-ID: <CAP7+vJKHtYkzGaWkNBEmRMgGQR=KLvTWOn14oosGVaJ5bB5+fw@mail.gmail.com>

Is the PEP ready for pronouncement now?

-- 
--Guido van Rossum (python.org/~guido)

From chris.barker at noaa.gov  Fri Feb 13 17:02:56 2015
From: chris.barker at noaa.gov (Chris Barker - NOAA Federal)
Date: Fri, 13 Feb 2015 08:02:56 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <20150213061904.GL2498@ando.pearwood.info>
References: <CADQLQrUwRMdrLTJg7nEgtdDJ9wZzi4UjpM_9Uu6WxgeF-4K-8w@mail.gmail.com>
 <1586802961.2472584.1423786004303.JavaMail.yahoo@mail.yahoo.com>
 <20150213022405.GI2498@ando.pearwood.info> <7262978700952906911@unknownmsgid>
 <20150213061904.GL2498@ando.pearwood.info>
Message-ID: <3781943608761054828@unknownmsgid>

> On Feb 12, 2015, at 10:24 PM, Steven D'Aprano <steve at pearwood.info> wrote:
>> += duplicates the extend method on lists.
>
> Yes it does, and that sometimes causes confusion when people wonder why
> alist += blist is not *quite* the same as alist = alist + blist.

Actually, that's the primary motivator for += and friends -- to
support in-place operations on mutables. Notably numpy arrays.

> It also
> leads to a quite ugly and unfortunate language wart with tuples:
>
> py> t = ([], None)
> py> t[0] += [1]
> Traceback (most recent call last):
>  File "<stdin>", line 1, in <module>
> TypeError: 'tuple' object does not support item assignment
> py> t
> ([1], None)
>
> Try explaining to novices why this is not a bug.

I'm going to have to think about that a fair bit myself -- so yes,
really confusing.
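Expanding the augmented assignment into its two steps makes the behaviour
less mysterious: the list is mutated first, and only the subsequent tuple
item assignment fails.

```python
t = ([], None)
item = t[0]
item += [1]      # step 1: the list is mutated in place -- this succeeds
try:
    t[0] = item  # step 2: rebinding the tuple slot -- this raises
except TypeError as e:
    print(e)     # 'tuple' object does not support item assignment
print(t)         # ([1], None) -- the mutation from step 1 already happened
```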

But the "problem" here is that augmented assignment shouldn't work on
immutables at all.

But then we wouldn't have the too-appealing-to-resist syntactic sugar
for incrementing integers.

But are you saying that augmented assignment was simply a mistake
altogether, and therefore no new use of it should be added at all (
and you'd deprecate it if you could)?

In which case, there's nothing to discuss about this case.

-Chris

From brett at python.org  Fri Feb 13 18:38:02 2015
From: brett at python.org (Brett Cannon)
Date: Fri, 13 Feb 2015 17:38:02 +0000
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
Message-ID: <CAP1=2W6DpO0_TEcbFomSuzm8xG+Cx7s3su6KUREsd6W+RBFJmQ@mail.gmail.com>

On Wed Feb 11 2015 at 2:21:59 AM Ian Lee <ianlee1521 at gmail.com> wrote:

> I mentioned this on the python-dev list [1] originally as a +1 to someone
> else suggesting the idea [2]. It also came up in a response to my post that
> I can't seem to find in the archives, so I've quoted it below [3].
>
> As the subject says, the idea would be to add a "+" and "+=" operator to
> dict that would provide the following behavior:
>
> >>> {'x': 1, 'y': 2} + {'z': 3}
> {'x': 1, 'y': 2, 'z': 3}
>
> With the only potentially non obvious case I can see then is when there
> are duplicate keys, in which case the syntax could just be defined that
> last setter wins, e.g.:
>
> >>> {'x': 1, 'y': 2} + {'x': 3}
> {'x': 3, 'y': 2}
>
> Which is analogous to the example:
>
> >>> new_dict = dict1.copy()
> >>> new_dict.update(dict2)
>
> With "+=" then essentially ending up being an alias for
> ``dict.update(...)``.
>
> I'd be happy to champion this as a PEP if the feedback / public opinion
> heads in that direction.
>

Unless Guido wants to speak up and give some historical context, I think a
PEP that finally settled this idea would be good, even if it just ends up
being a historical document as to why dicts don't have an __add__ method.
Obviously there is a good amount of support both for and against the idea.
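For what it's worth, the proposed semantics are easy to prototype today
with a small dict subclass (AddableDict is just an illustrative name, not
a proposal):

```python
class AddableDict(dict):
    """dict with '+' as copy-and-update and '+=' as update."""

    def __add__(self, other):
        new = AddableDict(self)
        new.update(other)   # last setter wins, as with dict.update()
        return new

    def __iadd__(self, other):
        self.update(other)
        return self

d = AddableDict({'x': 1, 'y': 2}) + {'x': 3}
print(d)   # {'x': 3, 'y': 2}

d += {'z': 4}
print(d)   # {'x': 3, 'y': 2, 'z': 4}
```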

-Brett


>
>
> [1] https://mail.python.org/pipermail/python-dev/2015-February/138150.html
> [2] https://mail.python.org/pipermail/python-dev/2015-February/138116.html
> [3] John Wong --
>
>> Well looking at just list
>> a + b yields new list
>> a += b yields modified a
>> then there is also .extend in list. etc.
>> so do we want to follow list's footstep? I like + because + is more
>> natural to read. Maybe this needs to be a separate thread. I am actually
>> amazed to remember dict + dict is not possible... there must be a reason
>> (performance??) for this...
>
>
> Cheers,
>
> ~ Ian Lee
>  _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/

From chris.barker at noaa.gov  Fri Feb 13 19:10:04 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Fri, 13 Feb 2015 10:10:04 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CAP7+vJKHtYkzGaWkNBEmRMgGQR=KLvTWOn14oosGVaJ5bB5+fw@mail.gmail.com>
References: <CALGmxELKJ7Lyahk__97+ANf_A+jf_cw3d1X0vt1j9NbyEgNs0Q@mail.gmail.com>
 <CAPJVwBkiU8zbYfTwmoZL80mAZ5guJGcgziZdTOQL8JuA_3s4WA@mail.gmail.com>
 <CADiSq7eokB6tFTAt8J710NPKuvzdgu7Cg4ewg0fgZ4czDxNtpg@mail.gmail.com>
 <20150125130326.GG20517@ando.pearwood.info>
 <CALGmxEJ+GxY8vjJ-ciS00WwhcFNd1xd1A+6XTiZm9aSozjSUow@mail.gmail.com>
 <20150126063932.GI20517@ando.pearwood.info>
 <CACac1F9LRN+hRv066DxTsjhwwFYCx0tvngsO46hBRGkc-=_zug@mail.gmail.com>
 <20150127030800.GQ20517@ando.pearwood.info>
 <CAJS_LCX_KpDsfF1zaYiLaDnP9uYNZrUDysr=6Vh6PRajwR=9hg@mail.gmail.com>
 <CALGmxEJggLB4steNVPF060Ou-BV6PxhSuA_dbBHOHWivhfe=8Q@mail.gmail.com>
 <CAP7+vJKHtYkzGaWkNBEmRMgGQR=KLvTWOn14oosGVaJ5bB5+fw@mail.gmail.com>
Message-ID: <CALGmxE+eeSoEqfGrfjwPdXtdciXXOTtibs6VnoXQdxAPmnBGDw@mail.gmail.com>

On Fri, Feb 13, 2015 at 9:35 AM, Guido van Rossum <guido at python.org> wrote:

> Is the PEP ready for pronouncement now?
>

I think so -- I've addressed (one way or another) everything brought up
here.

The proposed implementation needs a bit more work, but that's, well,
implementation. ;-)

-Chris



-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

From ericsnowcurrently at gmail.com  Fri Feb 13 19:18:22 2015
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Fri, 13 Feb 2015 11:18:22 -0700
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CALGmxELUoATTLamXggBn4dk-45U5mrr83qRjKf46d5oNKv=r7Q@mail.gmail.com>
References: <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <87twyrm2zx.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CAOvn4qgRm1H+CCaY=H_FsGSgMAWq1E7U_KXbAWvzi2YfF45Y1A@mail.gmail.com>
 <CALFfu7B_JnZTBhW5Ym8ZZP4hbp7JBJspN-ZOj9EMU6MmhToE8A@mail.gmail.com>
 <CALFfu7AEGkQD4J6JfKYmfkMjPBXpy68mDqwsqkOEyyWjnPqvCQ@mail.gmail.com>
 <20150213024521.GJ2498@ando.pearwood.info>
 <CALGmxELbw0BttRAPBJysVASUgHVVHLd1ydMVsiihfF2cm2kfZg@mail.gmail.com>
 <20150213065635.GO2498@ando.pearwood.info>
 <CALGmxELUoATTLamXggBn4dk-45U5mrr83qRjKf46d5oNKv=r7Q@mail.gmail.com>
Message-ID: <CALFfu7DSX6G7Racn99V+uiUNc0e49+ssogNBy6ZxvJL2Z_4j6w@mail.gmail.com>

On Fri, Feb 13, 2015 at 9:55 AM, Chris Barker <chris.barker at noaa.gov> wrote:
> But this:
> "Some languages may be able to optimize a + b + c + d "
>
> Got me thinking -- this is actually a really core performance problem for
> numpy. In that case, the operators are really being used for math, so you
> can't argue that they shouldn't be supported for that reason -- and we
> really want readable code:
>
> y = a*x**2 + b*x + c
>
> really reads well, but it does create a lot of temporaries that kill
> performance for large arrays. You can optimize that by hand by doing
> something like:
>
> y = x**2
> y *= a
> y += b*x
> y += c
>
> which really reads poorly!
>
> So I've thought for years that we should have a "numpython" interpreter that
> would parse out each expression, check its types, and if they were all numpy
> arrays, generate an optimized version that avoided temporaries, maybe even
> did nifty things like do the operations in cache-friendly blocks, multi
> thread them, whatever. (There is a package called numexpr that does all
> these things already.)

This should be pretty do-able with an import hook (or similarly a REPL
hook) without the need to roll your own interpreter.  There is already
prior art. [1][2][3][4]  I've been meaning for a while to write a
library to make this easier and mitigate the gotchas.  One of the
trickiest is that it can skew tracebacks (a la macros or compiler
optimizations in GDB).
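(For the curious, a minimal sketch of the kind of AST-level rewriting such
an import hook would perform -- here just folding constant additions; the
class name is illustrative, not any existing library:)

```python
import ast

class ConstantFolder(ast.NodeTransformer):
    """Fold additions of literal constants at the AST level."""
    def visit_BinOp(self, node):
        self.generic_visit(node)  # rewrite children first
        if (isinstance(node.op, ast.Add)
                and isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)):
            return ast.copy_location(
                ast.Constant(node.left.value + node.right.value), node)
        return node

tree = ast.parse("x = 1 + 2 + 3")
tree = ast.fix_missing_locations(ConstantFolder().visit(tree))
ns = {}
exec(compile(tree, "<folded>", "exec"), ns)
print(ns["x"])  # 6
```

An import hook would run a transformer like this over each module's AST
before compiling it; the traceback skew mentioned above comes from exactly
this kind of rewriting, since the executed code no longer matches the source.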

>
>
> But maybe cPython itself could do an optimization step like that -- examine
> an entire expression for types, and if they all support the right
> operations, re-structure it in a more optimized way.

It's tricky in a dynamic language like Python to do a lot of
optimization at compile time, particularly in the face of operator
overloading.  The problem is that there is no guarantee of a name's
type (other than `object`) nor of the behavior of the type's methods
(including operators).  So optimizations like the one you are
suggesting rest on assumptions that cannot be checked until run-time
(to ensure the optimization is applied safely).

The stinky part is that the vast majority of the time, perhaps even
always, your optimization could be applied at compile-time.  But
because Python is so flexible, the possibility is always there
that someone did something that breaks your assumptions.  So run-time
optimization is your only recourse.

To some extent PyPy is state-of-the-art when it comes to
optimizations, so it may be worth taking a look if the problem is
tractable for Python in general.  PEP 484 ("Type Hints") may help in a
number of ways too.  As for CPython, it has a peephole optimizer for
compile-time, and there has been some effort to optimize at the AST
level.  However, as already noted you would need the optimization to
happen at run-time.

The only solution that comes to my mind is that the compiler (and/or
optimizers) could leave a hint (emit an extra byte code, etc.) that
subsequent code is likely to be able to have some optimization
applied.  Then the interpreter could do whatever check it needs to
make sure and then apply the optimized code.  Alternately the compiler
could generate an explicit branch for the potential optimization (so
that the interpreter wouldn't need to deal with it).  I suppose there
are a number of possibilities along that line, but I'm not an expert
on the compiler and interpreter.
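(A hedged sketch, at the Python level, of the guard-then-specialize idea
above; purely illustrative, since the real mechanism would live inside
the interpreter, and `add3` is a made-up name:)

```python
def add3(a, b, c):
    # Fast path: only safe when all operands are exactly float --
    # a subclass could override __add__ and change the semantics,
    # which is exactly the run-time check discussed above.
    if type(a) is float and type(b) is float and type(c) is float:
        # Here an interpreter could dispatch to a fused, specialized
        # implementation that avoids intermediate objects.
        return a + b + c
    # Generic path: ordinary left-to-right evaluation.
    return (a + b) + c

print(add3(1.0, 2.0, 3.0))  # 6.0
print(add3("x", "y", "z"))  # xyz
```

The point is only that the guard is cheap and the fallback preserves the
language's normal semantics when the assumption fails.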

>
> Granted, doing this in the general case would be pretty impossible, but if
> the hooks are there, then individual type-based optimizations could be done
> -- like the current interpreter has for adding strings.

Hmm. That is another approach too.  You pay a cost for not baking it
into the compiler/interpreter, but you also get more flexibility and
portability.

-eric

[1] Peter Wang did a lightning talk at PyCon (in Santa Clara).
[2] I believe Numba uses an import hook to do its thing.
[3] Paul Tagliamonte's hy: https://github.com/hylang/hy
[4] Li Haoyi's macropy: https://github.com/lihaoyi/macropy

From ericsnowcurrently at gmail.com  Fri Feb 13 19:19:42 2015
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Fri, 13 Feb 2015 11:19:42 -0700
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CAP1=2W6DpO0_TEcbFomSuzm8xG+Cx7s3su6KUREsd6W+RBFJmQ@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <CAP1=2W6DpO0_TEcbFomSuzm8xG+Cx7s3su6KUREsd6W+RBFJmQ@mail.gmail.com>
Message-ID: <CALFfu7C7MzCpzyd2qtQ0qvkG+jt4N9e_SYKUiezEgcufgGvrwA@mail.gmail.com>

On Fri, Feb 13, 2015 at 10:38 AM, Brett Cannon <brett at python.org> wrote:
> Unless Guido wants to speak up and give some historical context, I think a
> PEP that finally settled this idea would be good, even if it just ends up
> being a historical document as to why dicts don't have an __add__ method.
> Obviously there is a good amount of support both for and against the idea.

+1

-eric

From thomas at kluyver.me.uk  Fri Feb 13 19:27:15 2015
From: thomas at kluyver.me.uk (Thomas Kluyver)
Date: Fri, 13 Feb 2015 10:27:15 -0800
Subject: [Python-ideas] A (meta)class algebra
In-Reply-To: <CAK9R32TcAcaeAWAChTkM5qkw+k7OzkMMg+pt=2LW-w4+vZMs4g@mail.gmail.com>
References: <CAK9R32TcAcaeAWAChTkM5qkw+k7OzkMMg+pt=2LW-w4+vZMs4g@mail.gmail.com>
Message-ID: <CAOvn4qisLrgKLRLqw6pDuBRuEqVmiVx82XhQTm=c-N-MvPXtvQ@mail.gmail.com>

On 12 February 2015 at 20:43, Martin Teichmann <lkb.teichmann at gmail.com>
wrote:

> class ConsoleWidget(MetaQObjectHasTraits('NewBase',
> (LoggingConfigurable, QtGui.QWidget), {})):
>
> which, in my opinion, is hard to read and grasp. Everyone has to
> be taught how to use those metaclass mixers.
>

Yes, it's hard to read, but it indicates that there's something complex
going on. Your proposal is cleaner, but to my mind it's just sweeping under
the carpet the fact that multiple metaclasses are in use. To fully
understand the code, you would still need to get a headache understanding
how multiple metaclasses interact, but you would first have to notice that
one metaclass is able to combine itself with another, and that there is
another one to combine with.


> I'm completely with you arguing that inheritance is overrated and
> composition should be used more often. Yet, I actually think that
> the above example is actually not such a bad idea, why should a
> QWidget not be LoggingConfigurable, and have traits?
>

It's not in itself a ridiculous idea, but it's bad practice precisely
because multiple inheritance is tricky. If inheritance means 'is a', it's
hard to be two completely different kinds of thing at the same time -
especially when both of those things rely on advanced features of Python.
If I were writing that code from scratch, I would try very hard to make it
such that Configurable objects *have* QObjects, rather than being QObjects.
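(As an illustration, a toy sketch of that has-a arrangement, with stand-in
classes since the real Qt/traitlets names aren't needed for the point:)

```python
class Widget:
    """Stand-in for something like QtGui.QWidget."""
    def show(self):
        return "shown"

class ConsoleConfigurable:
    """A Configurable that *has* a widget instead of *being* one,
    so no metaclasses ever need to be combined."""
    def __init__(self):
        self.widget = Widget()

    def show(self):
        # Delegate explicitly to the owned widget.
        return self.widget.show()

console = ConsoleConfigurable()
print(console.show())  # shown
```

The cost is a little explicit delegation; the benefit is that each class
keeps a single, comprehensible metaclass.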

Moreover, I don't think you can overcome this difficulty by adding any new
features to inheritance or metaclasses in Python. I doubt you could
overcome it even if you could redesign multiple inheritance with a blank
slate, but perhaps that is possible.

Given that I think the intersection of multiple inheritance and metaclasses
will always be horrible to understand, I think anything that makes it
easier to get into that situation is a bad idea. I'd support something that
makes it easier to understand such situations when they do exist, but I
think your proposal would make it harder, not easier.

Thomas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150213/487d1e30/attachment-0001.html>

From python at 2sn.net  Fri Feb 13 21:47:17 2015
From: python at 2sn.net (Alexander Heger)
Date: Sat, 14 Feb 2015 07:47:17 +1100
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <20150213014805.GH2498@ando.pearwood.info>
References: <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <54DCDB8A.9030602@oddbird.net>
 <20150212184332.GG2498@ando.pearwood.info>
 <CACac1F-yTQGec_kcyyObTtbrnTyfTMQkDLC2676wQJWR1CpM3w@mail.gmail.com>
 <868B43AF-2144-47DD-A2C2-45C6920A714A@stufft.io>
 <CACac1F_9v9tF-saVD_MHsrGfWRx6fyxxc+o2tff5HXRob5hxKQ@mail.gmail.com>
 <CAN3CYHxsSxuUX+WXkc3jtdSugEFEFm_H39mgF73ddF-U9Ktexw@mail.gmail.com>
 <20150213014805.GH2498@ando.pearwood.info>
Message-ID: <CAN3CYHxm2xvNJ=SnE-9G2uX5GxQfe-Bdp3tvk_iab1SgkRj-Ng@mail.gmail.com>

>> I had started a previous thread on this, asking why there was no
>> addition defined for dictionaries, along the line of what Peter argued
>> a novice would naively expect.
> [...]
> I don't think that "novices would naively expect this" is correct, and
> even if it were, I don't think the language should be designed around
> what novices naively expect.

Well, I had at least hoped this would work while not being a novice,
because it would be nice.

> As far as I was, and still am, concerned, & is the obvious and most
> natural operator for concatenation. [1, 2]+[3, 4] should return [4, 6],
> and sum(bunch of lists) should be a meaningless operation, like
> sum(bunch of HTTP servers). Or sum(bunch of dicts).

This only works if the items *can* be added.  Dicts should not make
such an assumption.  & is not a more natural operator: why would you
not then just expect that [1, 2] & [3, 4] returns [1 & 2, 3 & 4] ==
[0, 0]?  The same would be true for any operator you pick.

You just define that combining dicts, with + or whatever operator is
used, applies the update operation.  This is how dicts behave.
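(Spelled out, the behaviour being argued for here is just the existing
copy-and-update idiom; a rough sketch, with `merged` as a hypothetical
name:)

```python
def merged(d1, d2):
    """Combine two dicts with dict.update semantics:
    on a duplicate key, the right-hand value wins."""
    out = d1.copy()
    out.update(d2)
    return out

print(merged({'a': 1, 'b': 2}, {'b': 3, 'c': 4}))
# {'a': 1, 'b': 3, 'c': 4}
```

Note that neither input is mutated, which is what distinguishes this from
calling d1.update(d2) directly.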

> No, that's not the argument. The argument is not that "we used the +
> operator to merge two dicts, so merging should + the values". The
> argument is that in the event of a duplicate key, adding the values
> is sometimes a common and useful thing to do.

... and sometimes it is not; in general it will fail.  Replacing always works.

-Alexander

From python at 2sn.net  Fri Feb 13 22:11:01 2015
From: python at 2sn.net (Alexander Heger)
Date: Sat, 14 Feb 2015 08:11:01 +1100
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <20150213065635.GO2498@ando.pearwood.info>
References: <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <87twyrm2zx.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CAOvn4qgRm1H+CCaY=H_FsGSgMAWq1E7U_KXbAWvzi2YfF45Y1A@mail.gmail.com>
 <CALFfu7B_JnZTBhW5Ym8ZZP4hbp7JBJspN-ZOj9EMU6MmhToE8A@mail.gmail.com>
 <CALFfu7AEGkQD4J6JfKYmfkMjPBXpy68mDqwsqkOEyyWjnPqvCQ@mail.gmail.com>
 <20150213024521.GJ2498@ando.pearwood.info>
 <CALGmxELbw0BttRAPBJysVASUgHVVHLd1ydMVsiihfF2cm2kfZg@mail.gmail.com>
 <20150213065635.GO2498@ando.pearwood.info>
Message-ID: <CAN3CYHwXz_NCrSt1C0e7LOJUvBtepiBK8yyLcK=UG2p+ECj7bw@mail.gmail.com>

> Some languages may be able to optimize a + b + c + d to avoid making and
> throwing away the intermediate dicts, but in general Python cannot. So
> it is going to fall into the same trap as list addition does. While it
> is not common for this to lead to severe performance degradation, when
> it does happen the cost is *so severe* that we should think long and
> hard before adding any more O(N**2) booby traps into the language.

You can still have both: make the + operator work for simple cases and
warn people to use the function interface for more time-critical or
complicated cases.

And sometimes you may want to have a particular grouping, e.g., a + ((b +
c) + d), especially if you have overridden __add__ for custom behaviour
(with the default update on standard dicts this would give the same
result, I think).

-Alexander

From guido at python.org  Fri Feb 13 22:55:37 2015
From: guido at python.org (Guido van Rossum)
Date: Fri, 13 Feb 2015 13:55:37 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CALGmxE+eeSoEqfGrfjwPdXtdciXXOTtibs6VnoXQdxAPmnBGDw@mail.gmail.com>
References: <CALGmxELKJ7Lyahk__97+ANf_A+jf_cw3d1X0vt1j9NbyEgNs0Q@mail.gmail.com>
 <CAPJVwBkiU8zbYfTwmoZL80mAZ5guJGcgziZdTOQL8JuA_3s4WA@mail.gmail.com>
 <CADiSq7eokB6tFTAt8J710NPKuvzdgu7Cg4ewg0fgZ4czDxNtpg@mail.gmail.com>
 <20150125130326.GG20517@ando.pearwood.info>
 <CALGmxEJ+GxY8vjJ-ciS00WwhcFNd1xd1A+6XTiZm9aSozjSUow@mail.gmail.com>
 <20150126063932.GI20517@ando.pearwood.info>
 <CACac1F9LRN+hRv066DxTsjhwwFYCx0tvngsO46hBRGkc-=_zug@mail.gmail.com>
 <20150127030800.GQ20517@ando.pearwood.info>
 <CAJS_LCX_KpDsfF1zaYiLaDnP9uYNZrUDysr=6Vh6PRajwR=9hg@mail.gmail.com>
 <CALGmxEJggLB4steNVPF060Ou-BV6PxhSuA_dbBHOHWivhfe=8Q@mail.gmail.com>
 <CAP7+vJKHtYkzGaWkNBEmRMgGQR=KLvTWOn14oosGVaJ5bB5+fw@mail.gmail.com>
 <CALGmxE+eeSoEqfGrfjwPdXtdciXXOTtibs6VnoXQdxAPmnBGDw@mail.gmail.com>
Message-ID: <CAP7+vJK0=DgruPpx6eJGKnB4fQOhmkJUN3Mt-egRR+Z=WCA8aA@mail.gmail.com>

I see no roadblocks but have run out of time to review the PEP one more
time. I'm going on vacation for a week or so, maybe I'll find time, if not
I'll start reviewing this around Feb 23.

On Fri, Feb 13, 2015 at 10:10 AM, Chris Barker <chris.barker at noaa.gov>
wrote:

> On Fri, Feb 13, 2015 at 9:35 AM, Guido van Rossum <guido at python.org>
> wrote:
>
>> Is the PEP ready for pronouncement now?
>>
>
> I think so -- I've addressed (one way or another) everything brought up
> here.
>
> The proposed implementation needs a bit more work, but that's, well,
> implementation. ;-)
>
> -Chris
>
>
>
> --
>
> Christopher Barker, Ph.D.
> Oceanographer
>
> Emergency Response Division
> NOAA/NOS/OR&R            (206) 526-6959   voice
> 7600 Sand Point Way NE   (206) 526-6329   fax
> Seattle, WA  98115       (206) 526-6317   main reception
>
> Chris.Barker at noaa.gov
>



-- 
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150213/2f8ca523/attachment.html>

From greg.ewing at canterbury.ac.nz  Fri Feb 13 23:12:10 2015
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Sat, 14 Feb 2015 11:12:10 +1300
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <mbkilf$mie$1@ger.gmane.org>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <54DD1A0B.9000202@canterbury.ac.nz> <mbkilf$mie$1@ger.gmane.org>
Message-ID: <54DE76BA.6060607@canterbury.ac.nz>

Stefan Behnel wrote:
> Arithmetic expressions are always
> evaluated from left to right. It thus seems obvious to me what gets created
> first, and what gets added afterwards (and overwrites what's there).

I'm okay with "added afterwards", but the "overwrites"
part is *not* obvious to me.

Suppose you're given two shopping lists with instructions
to get everything that's on either list, but not double
up on anything. You don't have a spare piece of paper
to make a merged list, so you do it this way: Go through
the first list and get everything on it, then go
through the second list and get everything on that.

Now one list contains "Bread (preferably white)" and
the other contains "Bread (preferably wholemeal)".
You haven't been told what to do in case of a conflict,
it's left to your judgement.

What do you do when you encounter the second entry for
bread? Do you put the one you have back and grab the
other one, or do you just think "I've already got
bread" and move on? Which is more obvious?

-- 
Greg

From greg.ewing at canterbury.ac.nz  Fri Feb 13 23:17:58 2015
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Sat, 14 Feb 2015 11:17:58 +1300
Subject: [Python-ideas] Namespace creation with syntax short form
In-Reply-To: <CAOtYm-Hx+9K-w3PzxxOtoQcE9OQOWvg9mYVfc3diBnAWxAkYGA@mail.gmail.com>
References: <CAOtYm-Hx+9K-w3PzxxOtoQcE9OQOWvg9mYVfc3diBnAWxAkYGA@mail.gmail.com>
Message-ID: <54DE7816.50202@canterbury.ac.nz>

Morten Z wrote:
> Namespaces are widely used in Python, but it appears that creating a
> namespace
> has no short form,

Most of the time people create dedicated classes for their
namespaces, though. It's very rare to see the use of an
ad-hoc namespace such as SimpleNamespace provides.

It seems to me like a feature that novice programmers
think they want, until they learn that there are better
ways of going about it.

-- 
Greg

From abarnert at yahoo.com  Fri Feb 13 23:16:57 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Fri, 13 Feb 2015 14:16:57 -0800
Subject: [Python-ideas] Namespace creation with syntax short form
In-Reply-To: <mbktnl$mrl$1@ger.gmane.org>
References: <CAOtYm-Hx+9K-w3PzxxOtoQcE9OQOWvg9mYVfc3diBnAWxAkYGA@mail.gmail.com>
 <20150213104159.GQ2498@ando.pearwood.info> <mbkpa6$63l$1@ger.gmane.org>
 <D920348D-B937-4738-9FC8-39C211B1A143@yahoo.com> <mbktnl$mrl$1@ger.gmane.org>
Message-ID: <0F99BA73-4206-4134-A2C2-E10A9A6B6777@yahoo.com>

On Feb 13, 2015, at 5:20, Peter Otten <__peter__ at web.de> wrote:

> Andrew Barnert wrote:
> 
>> On Feb 13, 2015, at 4:05, Peter Otten
>> <__peter__ at web.de> wrote:
>> 
>>> Steven D'Aprano wrote:
>>> 
>>>> On Fri, Feb 13, 2015 at 11:19:58AM +0100, Morten Z wrote:
>>>>> Namespaces are widely used in Python, but it appears that creating a
>>>>> namespace
>>>>> has no short form, like [] for list or {} for dict, but requires the
>>>>> lengthy types.SimpleNamespace, with prior import of types.
>>>> 
>>>> from types import SimpleNamespace as NS
>>>> ns = NS(answer=42)
>>>> 
>>>> Python did without a SimpleNamespace type for 20+ years. I don't think
>>>> it is important enough to deserve dedicated syntax, especially yet
>>>> another overloading of parentheses. Not everything needs to be syntax.
>>> 
>>> If there were syntactic support for on-the-fly namespace "packing" and
>>> "unpacking" that could have a significant impact on coding style, similar
>>> to that of named arguments.
>>> 
>>> Functions often start out returning a tuple and are later modified to
>>> return a NamedTuple. For backward compatibility reasons you are either
>>> stuck with the original data or you need a workaround like os.stat_result
>>> which has more attributes than items.
>> 
>> There's really nothing terrible about that workaround, except that it's
>> significantly harder to implement in Python (with namedtuple) than in C
>> (with structseq).
>> 
>> More importantly, I don't see how named unpacking helps that. If your
>> function originally returns a 3-tuple, and people write code that expects
>> a 3-tuple, how do you change it to return a 3-tuple that's also a
>> 4-namespace under your proposal?
>> 
>> Meanwhile:
>> 
>>> "Named (un)packing" could avoid that.
>>> 
>>>>>> def f():
>>> ...    return a:1, b:2, c:3
>>>>>> f()
>>> (a:1, b:2, c:3)
>> 
>> So what type is that? Is there a single type for all "named packing
>> tuples" (like SimpleNamespace), or a separate type for each one (like
>> namedtuple), or some kind of magic that makes issubclass(a, b) true if b
>> has the same names as a plus optionally more names at the end, or ...?
> 
> I'm an adept of duck typing, so 
> 
> a:1, b:2, c:3
> 
> has to create an object with three attributes "a", "b", and "c" and
> 
> :a, :b, :c = x
> 
> works on any x with three attributes "a", "b", and "c".

That doesn't answer either of my questions. Again:

How does this help when you started with a 3-tuple and realize you need a 4th value, a la stat? Since this is your motivating example, if the proposal doesn't help that example, what's the point?

What type is this? Just saying "I'm an adept of duck typing" doesn't answer the question. Python objects have types, and this is important.

> Implementationwise
> 
>> some kind of magic that makes issubclass(a, b) true if b
>> has the same names as a plus optionally more names at the end, or ...?
> 
> would be OK, but as there is no order of the names there is also no "end".

OK, that makes this seem even _less_ useful for the stat case.

The reasons to use namedtuple/structseq in the first place for stat are (a) so existing code written to the original tuple interface can continue to treat the value as a tuple while newer code can access the new attributes that don't fit; and/or (b) because the order has some intrinsic/historical/whatever meaning so it sometimes makes sense to extract it by position. You're proposing something as an improvement for that case, but it fails both ways.

> In general this should start as restrictive as possible, e. g. 
> issubclass(type(a), type(b)) == a is b wouldn't bother me as far as 
> usability is concerned.

OK, so a new type for each namespace object?

>> How does this interact with existing unpacking? Can I extract the values
>> as a tuple, e.g., "x, *y = f()" to get x=1 and y=(2, 3)? Or the key-value
>> pairs with "**kw = f()"? Or maybe ::kw to get them as a namespace instead
>> of a dict? Can I mix them, with "x, *y, z:c" to get 1, (2,), and 3 (and
>> likewise with **kw or ::ns)?
> 
> I'd rather not mix positional and and named (un)packing.
> 
>> Does this only work with magic namespaces, or with anything with
>> attributes? For example, if I have a Point class, "does x:x = pt" do the
>> same thing as "x = pt.x", or does that only work if I don't use a Point
>> class and instead pass around (x:3, y:4) with no type information?

This is a really important question, and you've skipped it.

I think you're tying unpacking and packing together too closely in your head--like people who expect that the **kw object you pass as an argument to a function call must end up as the **kw parameter value, or that parameters with default values have anything to do with keyword arguments. In your case, assignment targets and expressions are partially parallel syntax and similar semantics, but the distinction is important.

For example, "a, b = 2, 3" does not create a tuple "(a, b)" to assign to. And you can of course do "a[0], b.c = x" and expect that to work. So, you have to consider how your unpacking fits into that. Look at the grammar for assignment targets, and work out where the colons will go. And then, what are the semantics? If you don't know what they should be, that's why it's important to be able to answer questions like "can I use namespace unpacking for any object with the right attributes, or only for magic namespace objects?" If the answer is the former, the semantics can be dead simple: namespace unpacking uses getattr. If not, you need to invent some new mechanism, and it needs to make sense. For that matter, that question is important in its own right. With the former answer, namespace unpacking is perfectly useful on its own--e.g., to extract a couple of attributes out of a stat result. So you can argue for two related but separate proposals on their own merits, giving you a wider range of use cases both to argue for each proposal itself better and to think through the details.

And likewise, the thing in the return statement is a kind of expression--in particular, a display expression for some type. That means it has a type and a value that you can ask questions about.

I know you said "I regret that I let Andrew lure me into a discussion of those details so soon. The point of my original answer was that if an acceptable syntax can be 
found real code can profit from it." But your entire proposal is about syntactic sugar, and if you don't actually have an acceptable syntax and aren't willing to answer the questions to work toward one, you don't have a proposal.

>> Since the parens are clearly optional, does that mean that "a:3" is a
>> single-named namespace object, or do you need the comma as with tuples? (I
>> think the former may make parsing ambiguous, but I haven't thought it
>> through...)
> 
> No comma, please, if at all possible.

As pointed out, that immediately makes a:1, b:2 ambiguous as a single 2-namespace or a 2-tuple of 1-namespaces. How do you resolve that? And again, I'm pretty sure that's not the only place this would be ambiguous. (It may be worth looking at how YAML deals with its pretty similar ambiguities, even though it obviously has a different grammar from Python expressions.)

>>>>>> :c, :b, :a = f() # order doesn't matter
>>>>>> :a, :c = f()     # we don't care about b
>>>>>> a                # by default bound name matches attribute name
>> 
>> Why do you need, or want, that last comment? With similar features, you
>> have to be explicit; e.g., if you want to pass a local variable named a to
>> a keyword parameter named a, you have to write a=a, not just =a. How is
>> this case different?
> 
> I expect the bound name to match that of the attribute in ninety-nine 
> percent or so. Other than that -- no difference.

First, I don't think that expectation is true. Look at iterable unpacking. It's certainly _common_ that you return x, y from a function and the result gets assigned to locals named x and y, but it's nowhere near 99%. This is where having real use cases--and being able to answer questions about, e.g., whether you can unpack any object this way--would be handy. I'm pretty sure that if I wanted to get the times out of a stat(infile), I'd want the locals to be named something like inatime, inmtime, inctime, not st_atime, st_mtime, and st_ctime.

(Sorry for harping on stat, but since it's the one real-life use case you've come up with that isn't a toy example, it's the most obvious thing to look at. If you think there's something special about stat that makes it not relevant in general, please explain what it is and give a better use case.)
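(For what it's worth, the renaming case is already expressible today with
operator.attrgetter, which is roughly the baseline any new unpacking
syntax has to beat -- a sketch:)

```python
import os
from operator import attrgetter

st = os.stat(".")
# Unpack selected attributes under different local names.
inatime, inmtime, inctime = attrgetter(
    "st_atime", "st_mtime", "st_ctime")(st)
print(inatime <= inmtime or inatime > inmtime)  # True; just floats
```

This works on *any* object with the right attributes, which is one answer
to the "magic namespaces or anything with attributes?" question above.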

>>> 1
>>>>>> x:a, y:c = f()   # we want a different name
>>>>>> x, y
>>> (1, 3)
> 
> 
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/

From abarnert at yahoo.com  Fri Feb 13 23:24:33 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Fri, 13 Feb 2015 14:24:33 -0800
Subject: [Python-ideas] A (meta)class algebra
In-Reply-To: <CAK9R32TO3EDtcB4K6oYp3SPqHbVkyErA9svULLSCuPP5Y2E6eA@mail.gmail.com>
References: <CAK9R32TcAcaeAWAChTkM5qkw+k7OzkMMg+pt=2LW-w4+vZMs4g@mail.gmail.com>
 <CA+=+wqCoPo8WH9SqOzfqps6Jr66zKs+d_nqXOCujkpha7Ln3_w@mail.gmail.com>
 <CADiSq7dBUzo4Yt9DT6B1KNjJ6s4azHw-Xv0ZYXgNATtfo5nK_A@mail.gmail.com>
 <CA+=+wqBGNGL9L6O2oO_NUnLW-+SmVaiyG1SATNxLayhK+w7JtQ@mail.gmail.com>
 <CAK9R32TO3EDtcB4K6oYp3SPqHbVkyErA9svULLSCuPP5Y2E6eA@mail.gmail.com>
Message-ID: <C8E45D07-203F-4A71-B2F4-F2902452E4F1@yahoo.com>

On Feb 13, 2015, at 5:27, Martin Teichmann <lkb.teichmann at gmail.com> wrote:

> Hi,
> 
> me too, I missed that PEP, and yes, it would certainly work
> for all my needs.
> 
> In short words, the PEP reads: forget all that metaclass stuff,
> here comes the real solution. Maybe we can even abandon
> metaclasses altogether in Python 4, I think noone would shed
> a tear over them.
> 
> But if the metaclass fans should win the battle, I would like
> my proposal included sometimes.

That sounds reasonable in principle. Even if that PEP got accepted for Python 3.5, there would still be an awful lot of now-unnecessary metaclasses out there for a long time. For example, PyQt isn't going to drop 3.4 support, and, even if it did, it wouldn't be rewritten overnight. All it does is let you write a 3.5 application that works more easily with PyQt by not having to write your own metaclasses and toss them into the mix. If you still need or want to use metaclasses, you still need to mix them somehow.

But I think it also gives weight to Barry's point: the problem is that both you and PyQt are using something much more powerful and flexible than needed, which forces readers of your code to think through how your metaclasses interact before they can understand your classes. Trying to sweep that under the rug may not really help beyond the "shut up and code" perspective that gets you the first 80% of the way, until you have to debug something confusing near a deadline and suddenly have to figure out something very complicated...

From rob.cliffe at btinternet.com  Fri Feb 13 23:28:47 2015
From: rob.cliffe at btinternet.com (Rob Cliffe)
Date: Fri, 13 Feb 2015 22:28:47 +0000
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <87iof6mnjj.fsf@uwakimon.sk.tsukuba.ac.jp>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info> <54DCDB8A.9030602@oddbird.net>
 <20150212184332.GG2498@ando.pearwood.info>
 <CACac1F-yTQGec_kcyyObTtbrnTyfTMQkDLC2676wQJWR1CpM3w@mail.gmail.com>
 <868B43AF-2144-47DD-A2C2-45C6920A714A@stufft.io>
 <87iof6mnjj.fsf@uwakimon.sk.tsukuba.ac.jp>
Message-ID: <54DE7A9F.6040704@btinternet.com>


On 13/02/2015 04:40, Stephen J. Turnbull wrote:
> Donald Stufft writes:
>
>   > Python often gets little improvements that on their own are not
>   > major enhancements but when looked at cumulatively they add up to a
>   > nicer to use language. An example would be set literals, set(["a",
>   > "b"]) wasn't confusing nor was it particularly hard to use, however
>   > being able to type {"a", "b"} is nice, slightly easier, and just
>   > makes the language jsut a little bit better.
>
> Yes, yes, and yes.
>
>   > Similarly doing:
>   >
>   >     new_dict = dict1.copy()
>   >     new_dict.update(dict2)
>   >
>   > Isn't confusing or particularly hard to use, however being able to
>   > type that as new_dict = dict1 + dict2 is more succinct, cleaner,
>   > and just a little bit nicer.
>
> Yes, no, and no.
>
> Succinct?  Yes.  Just count characters or lines (but both are
> deprecated practices in judging Python style).
>
> Cleaner?  No.  The syntax is cleaner (way fewer explicit operations),
> but the semantics are muddier for me.  At one time or another, at
> least four different interpretations of "dict addition" have been
> proposed already:
There are many features of the language where decisions have had to be
made one way or another.  Once the decision is made, it's unfair to say
the semantics are muddy because they could have been different.
Rob Cliffe

From chris.barker at noaa.gov  Sat Feb 14 00:12:29 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Fri, 13 Feb 2015 15:12:29 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CALFfu7DSX6G7Racn99V+uiUNc0e49+ssogNBy6ZxvJL2Z_4j6w@mail.gmail.com>
References: <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <87twyrm2zx.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CAOvn4qgRm1H+CCaY=H_FsGSgMAWq1E7U_KXbAWvzi2YfF45Y1A@mail.gmail.com>
 <CALFfu7B_JnZTBhW5Ym8ZZP4hbp7JBJspN-ZOj9EMU6MmhToE8A@mail.gmail.com>
 <CALFfu7AEGkQD4J6JfKYmfkMjPBXpy68mDqwsqkOEyyWjnPqvCQ@mail.gmail.com>
 <20150213024521.GJ2498@ando.pearwood.info>
 <CALGmxELbw0BttRAPBJysVASUgHVVHLd1ydMVsiihfF2cm2kfZg@mail.gmail.com>
 <20150213065635.GO2498@ando.pearwood.info>
 <CALGmxELUoATTLamXggBn4dk-45U5mrr83qRjKf46d5oNKv=r7Q@mail.gmail.com>
 <CALFfu7DSX6G7Racn99V+uiUNc0e49+ssogNBy6ZxvJL2Z_4j6w@mail.gmail.com>
Message-ID: <CALGmxEKSB0Q6qrNo7=x5ngd_vB+JC811uoaoCPKzE-7ajbssuw@mail.gmail.com>

On Fri, Feb 13, 2015 at 10:18 AM, Eric Snow <ericsnowcurrently at gmail.com>
wrote:

> > But maybe cPython itself could do an optimization step like that --
> examine
> > an entire expression for types, and if they all support the right
> > operations, re-structure it in a more optimized way.
>
> It's tricky in a dynamic language like Python to do a lot of
> optimization at compile time, particularly in the case in the face of
> operator overloading.


Exactly -- I was thinking this would be a run-time thing, which might make
for a major change, as I expect the compiler compiles expressions ahead of
time.

To some extent PyPy is state-of-the-art when it comes to
> optimizations, so it may be worth taking a look if the problem is
> tractable for Python in general.


That might be a way to go -- but it would probably mean essentially
reimplementing numpy in PyPy -- which PyPy may, in fact, already be doing!


> PEP 484 ("Type Hints") may help in a
> number of ways too.


Yes, though at least for now it seems very focused on the use case of
pre-run-time type checking (or compile time, too), so it'll be a while...


>   I suppose there
> are a number of possibilities along that line, but I'm not an expert
> on the compiler and interpreter.


way out of my depth here, too.

Anyway, way OT for this thread -- _maybe_ I'll be able to compress these
thoughts down to start a conversation here about it some day...

-Chris




-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

From Nikolaus at rath.org  Sat Feb 14 00:22:00 2015
From: Nikolaus at rath.org (Nikolaus Rath)
Date: Fri, 13 Feb 2015 15:22:00 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CAN3CYHxm2xvNJ=SnE-9G2uX5GxQfe-Bdp3tvk_iab1SgkRj-Ng@mail.gmail.com>
 (Alexander Heger's message of "Sat, 14 Feb 2015 07:47:17 +1100")
References: <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <54DCDB8A.9030602@oddbird.net>
 <20150212184332.GG2498@ando.pearwood.info>
 <CACac1F-yTQGec_kcyyObTtbrnTyfTMQkDLC2676wQJWR1CpM3w@mail.gmail.com>
 <868B43AF-2144-47DD-A2C2-45C6920A714A@stufft.io>
 <CACac1F_9v9tF-saVD_MHsrGfWRx6fyxxc+o2tff5HXRob5hxKQ@mail.gmail.com>
 <CAN3CYHxsSxuUX+WXkc3jtdSugEFEFm_H39mgF73ddF-U9Ktexw@mail.gmail.com>
 <20150213014805.GH2498@ando.pearwood.info>
 <CAN3CYHxm2xvNJ=SnE-9G2uX5GxQfe-Bdp3tvk_iab1SgkRj-Ng@mail.gmail.com>
Message-ID: <87zj8h8kiv.fsf@thinkpad.rath.org>

Alexander Heger <python-ePO413wvQzY at public.gmane.org> writes:
>> As far as I was, and still am, concerned, & is the obvious and most
>> natural operator for concatenation. [1, 2]+[3, 4] should return [4, 6],
>> and sum(bunch of lists) should be a meaningless operation, like
>> sum(bunch of HTTP servers). Or sum(bunch of dicts).
>
> This only works if the items *can* be added.  Dicts should not make
> such an assumption. & is not a more natural operator, because why
> would you then not just expect that [1, 2] & [3, 4] returns [1 & 2, 3
> & 4] == [0, 0]?  The same would be true for any operator you pick.

Which brings us back to the idea to introduce elementwise variants of
any operator:

[1,2] .+ [3,4] == [1+3, 2+4]
[1,2] + [3,4] == [1,2,3,4]
[1,2] .& [3,4] == [1 & 3, 2 & 4]
[1,2] & [3,4] == not (yet?) defined

As a regular numpy user, I'd be very happy about that too :-).
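For reference, here is what that distinction looks like today in plain Python; the `elementwise` helper is only an illustration of what a hypothetical ".+" or ".&" could mean, not a real proposal:

```python
# A sketch of what a hypothetical elementwise ".+" / ".&" could mean,
# spelled today as a plain helper function (the name is made up).
import operator

def elementwise(op, a, b):
    # Pair items up positionally and apply the binary operator.
    if len(a) != len(b):
        raise ValueError("sequences must have the same length")
    return [op(x, y) for x, y in zip(a, b)]

print(elementwise(operator.add, [1, 2], [3, 4]))   # [4, 6]
print([1, 2] + [3, 4])                             # [1, 2, 3, 4]  (concatenation)
print(elementwise(operator.and_, [1, 2], [3, 4]))  # [1, 0]  i.e. [1 & 3, 2 & 4]
```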


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

             "Time flies like an arrow, fruit flies like a banana."

From erik.m.bray at gmail.com  Sat Feb 14 00:29:53 2015
From: erik.m.bray at gmail.com (Erik Bray)
Date: Fri, 13 Feb 2015 18:29:53 -0500
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <87zj8h8kiv.fsf@thinkpad.rath.org>
References: <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <54DCDB8A.9030602@oddbird.net>
 <20150212184332.GG2498@ando.pearwood.info>
 <CACac1F-yTQGec_kcyyObTtbrnTyfTMQkDLC2676wQJWR1CpM3w@mail.gmail.com>
 <868B43AF-2144-47DD-A2C2-45C6920A714A@stufft.io>
 <CACac1F_9v9tF-saVD_MHsrGfWRx6fyxxc+o2tff5HXRob5hxKQ@mail.gmail.com>
 <CAN3CYHxsSxuUX+WXkc3jtdSugEFEFm_H39mgF73ddF-U9Ktexw@mail.gmail.com>
 <20150213014805.GH2498@ando.pearwood.info>
 <CAN3CYHxm2xvNJ=SnE-9G2uX5GxQfe-Bdp3tvk_iab1SgkRj-Ng@mail.gmail.com>
 <87zj8h8kiv.fsf@thinkpad.rath.org>
Message-ID: <CAOTD34b=wUx7ie7bYJE2u+WdecOyMDEjnJd1ZC+yec8VyFo6gA@mail.gmail.com>

On Fri, Feb 13, 2015 at 6:22 PM, Nikolaus Rath <Nikolaus at rath.org> wrote:
> Alexander Heger <python-ePO413wvQzY at public.gmane.org> writes:
>>> As far as I was, and still am, concerned, & is the obvious and most
>>> natural operator for concatenation. [1, 2]+[3, 4] should return [4, 6],
>>> and sum(bunch of lists) should be a meaningless operation, like
>>> sum(bunch of HTTP servers). Or sum(bunch of dicts).
>>
>> This only works if the items *can* be added.  Dicts should not make
>> such an assumption. & is not a more natural operator, because why
>> would you then not just expect that [1, 2] & [3, 4] returns [1 & 2, 3
>> & 4] == [0, 0]?  The same would be true for any operator you pick.
>
> Which brings us back to the idea to introduce elementwise variants of
> any operator:
>
> [1,2] .+ [3,4] == [1+3, 2+4]
> [1,2] + [3,4] == [1,2,3,4]
> [1,2] .& [3,4] == [1 & 3, 2 & 4]
> [1,2] & [3,4] == not (yet?) defined
>
> As a regular numpy user, I'd be very happy about that too :-).

... except for the part where in Numpy ".+" and "+" and so on would
have to be identical, which would be no end of confusing, especially
when adding, say, a Numpy array and a list.

But I agree in principle it would be nice to have element-wise
operators in the language.  I just fear it may be too late.

Erik

From joshua at landau.ws  Sat Feb 14 00:32:14 2015
From: joshua at landau.ws (Joshua Landau)
Date: Fri, 13 Feb 2015 23:32:14 +0000
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <20150213022405.GI2498@ando.pearwood.info>
References: <CADQLQrUwRMdrLTJg7nEgtdDJ9wZzi4UjpM_9Uu6WxgeF-4K-8w@mail.gmail.com>
 <1586802961.2472584.1423786004303.JavaMail.yahoo@mail.yahoo.com>
 <20150213022405.GI2498@ando.pearwood.info>
Message-ID: <CAN1F8qV9KerrS0pFM7Oy4AWzO4au59nbAikydO0162jiuei0dg@mail.gmail.com>

On 13 February 2015 at 02:24, Steven D'Aprano <steve at pearwood.info> wrote:
> On Fri, Feb 13, 2015 at 12:06:44AM +0000, Andrew Barnert wrote:
>> On Thursday, February 12, 2015 1:33 PM, Nathan Schneider <neatnate at gmail.com> wrote:
>>
>> > A reminder about PEP 448: Additional Unpacking Generalizations
>> > (https://www.python.org/dev/peps/pep-0448/), which claims that "it
>> > vastly simplifies types of 'addition' such as combining
>> > dictionaries, and does so in an unambiguous and well-defined way".
>> > The spelling for combining two dicts would be: {**d1, **d2}
>
> Very nice! That should be extensible to multiple arguments without the
> pathologically slow performance of repeated addition:
>
> {**d1, **d2, **d3, **d4}
>
> and you can select which dict you want to win in the event of clashes
> by changing the order.

Better than that, you can even do

    {a: b, **d1, c: d, **d2, e: f, <...etc>}

Best of all, it's implemented (albeit not yet accepted or reviewed):
http://bugs.python.org/issue2292.
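To make the ordering behavior concrete (PEP 448 literal unpacking, so this assumes Python 3.5+; the dicts are made up for illustration):

```python
# Example dicts (made up); later unpackings win on key clashes.
d1 = {'a': 1, 'b': 2}
d2 = {'b': 20, 'c': 30}
d3 = {'c': 300}

merged = {**d1, **d2, **d3}
print(merged)              # {'a': 1, 'b': 20, 'c': 300}

# Reversing the order changes which dict wins:
print({**d3, **d2, **d1})  # {'c': 30, 'b': 2, 'a': 1}

# Literal pairs and unpackings can also be interleaved:
print({'x': 0, **d1, 'b': 99})  # {'x': 0, 'a': 1, 'b': 99}
```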

From joshua at landau.ws  Sat Feb 14 00:34:18 2015
From: joshua at landau.ws (Joshua Landau)
Date: Fri, 13 Feb 2015 23:34:18 +0000
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CAN1F8qV9KerrS0pFM7Oy4AWzO4au59nbAikydO0162jiuei0dg@mail.gmail.com>
References: <CADQLQrUwRMdrLTJg7nEgtdDJ9wZzi4UjpM_9Uu6WxgeF-4K-8w@mail.gmail.com>
 <1586802961.2472584.1423786004303.JavaMail.yahoo@mail.yahoo.com>
 <20150213022405.GI2498@ando.pearwood.info>
 <CAN1F8qV9KerrS0pFM7Oy4AWzO4au59nbAikydO0162jiuei0dg@mail.gmail.com>
Message-ID: <CAN1F8qW8SCXUvyW++QSvc-hckVUx1=t_wdxDFMAnEWA=kPQQnQ@mail.gmail.com>

On 13 February 2015 at 05:46, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
> Eric Snow wrote:
>>
>> However, the upside to
>> the PEP 448 syntax is that merging could be done without requiring an
>> additional intermediate dict.
>
> It can also detect duplicate keywords and give you
> a clear error message. The operator would at best
> give a rather more generic error message and at
> worst silently discard one of the duplicates.

Since {1: 'a', 1: 'a'} is currently valid, it didn't make much sense
to detect duplicates. Unpacking into keywords as f(**d1, **d2) does
detect duplicates, however.
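Concretely (the double unpacking in a call assumes Python 3.5+):

```python
def f(**kwargs):
    return kwargs

d1 = {'x': 1}
d2 = {'x': 2}

# Duplicate keys in a dict literal are silently collapsed (last wins):
print({'x': 1, 'x': 2})  # {'x': 2}

# ...but duplicate keywords in a call raise a TypeError:
try:
    f(**d1, **d2)
except TypeError as e:
    print(e)  # multiple values for keyword argument 'x'
```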

From random832 at fastmail.us  Sat Feb 14 00:51:25 2015
From: random832 at fastmail.us (random832 at fastmail.us)
Date: Fri, 13 Feb 2015 18:51:25 -0500
Subject: [Python-ideas] Namespace creation with syntax short form
In-Reply-To: <0F99BA73-4206-4134-A2C2-E10A9A6B6777@yahoo.com>
References: <CAOtYm-Hx+9K-w3PzxxOtoQcE9OQOWvg9mYVfc3diBnAWxAkYGA@mail.gmail.com>
 <20150213104159.GQ2498@ando.pearwood.info> <mbkpa6$63l$1@ger.gmane.org>
 <D920348D-B937-4738-9FC8-39C211B1A143@yahoo.com>
 <mbktnl$mrl$1@ger.gmane.org>
 <0F99BA73-4206-4134-A2C2-E10A9A6B6777@yahoo.com>
Message-ID: <1423871485.1984445.227344865.1A12242C@webmail.messagingengine.com>

On Fri, Feb 13, 2015, at 17:16, Andrew Barnert wrote:
> OK, that makes this seem even _less_ useful for the stat case.
> 
> The reasons to use namedtuple/structseq in the first place for stat are
> (a) so existing code written to the original tuple interface can continue
> to treat the value as a tuple while newer code can access the new
> attributes that don't fit;

What I'm getting is: the reason to want a simple namespace syntax is to
get people to stop returning tuples for stuff like this in the first
place, so that there is no original tuple interface. The horse has
already bolted for the stat case.

From abarnert at yahoo.com  Sat Feb 14 01:12:11 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Fri, 13 Feb 2015 16:12:11 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <3781943608761054828@unknownmsgid>
References: <CADQLQrUwRMdrLTJg7nEgtdDJ9wZzi4UjpM_9Uu6WxgeF-4K-8w@mail.gmail.com>
 <1586802961.2472584.1423786004303.JavaMail.yahoo@mail.yahoo.com>
 <20150213022405.GI2498@ando.pearwood.info> <7262978700952906911@unknownmsgid>
 <20150213061904.GL2498@ando.pearwood.info> <3781943608761054828@unknownmsgid>
Message-ID: <81E99F71-29C3-41CE-B53F-8D2102E689E4@yahoo.com>

On Feb 13, 2015, at 8:02, Chris Barker - NOAA Federal <chris.barker at noaa.gov> wrote:

>> On Feb 12, 2015, at 10:24 PM, Steven D'Aprano <steve at pearwood.info> wrote:
>>> += duplicates the extend method on lists.
>> 
>> Yes it does, and that sometimes causes confusion when people wonder why
>> alist += blist is not *quite* the same as alist = alist + blist.
> 
> Actually, that's the primary motivator for += and friends -- to
> support in-place operations on mutables. Notably numpy arrays.
> 
>> It also
>> leads to a quite ugly and unfortunate language wart with tuples:
>> 
>> py> t = ([], None)
>> py> t[0] += [1]
>> Traceback (most recent call last):
>> File "<stdin>", line 1, in <module>
>> TypeError: 'tuple' object does not support item assignment
>> py> t
>> ([1], None)
>> 
>> Try explaining to novices why this is not a bug.
> 
> I'm going to have to think about that a fair bit myself -- so yes,
> really confusing.
> 
> But the "problem" here is that augmented assignment shouldn't work on
> immutables at all.

The problem is that the way augmented assignment works is to first treat the target as an expression, then call __iadd__ on the result, then assign the result of that back to the target. So your "t[0] += [1]" turns into, in effect, "setitem(t, 0, getitem(t, 0).__iadd__([1]))". Once you see it that way, it's obvious why it works the way it does. And the official FAQ explains this, but it still seems to surprise plenty of people who aren't even close to novices. (But that's probably more relevant to the other thread on namespace unpacking than to this thread.)
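That expansion can be sketched explicitly (observable behavior only, not the interpreter's actual code path):

```python
# Mimic the observable steps of: t[0] += [1]
import operator

t = ([], None)
try:
    item = operator.getitem(t, 0)   # step 1: fetch t[0] (the list)
    result = item.__iadd__([1])     # step 2: extend the list in place
    operator.setitem(t, 0, result)  # step 3: store back -- tuples refuse this
except TypeError as e:
    print(e)  # 'tuple' object does not support item assignment

print(t)  # ([1], None) -- step 2 already mutated the list before step 3 failed
```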

From abarnert at yahoo.com  Sat Feb 14 01:18:47 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Fri, 13 Feb 2015 16:18:47 -0800
Subject: [Python-ideas] Namespace creation with syntax short form
In-Reply-To: <1423871485.1984445.227344865.1A12242C@webmail.messagingengine.com>
References: <CAOtYm-Hx+9K-w3PzxxOtoQcE9OQOWvg9mYVfc3diBnAWxAkYGA@mail.gmail.com>
 <20150213104159.GQ2498@ando.pearwood.info> <mbkpa6$63l$1@ger.gmane.org>
 <D920348D-B937-4738-9FC8-39C211B1A143@yahoo.com> <mbktnl$mrl$1@ger.gmane.org>
 <0F99BA73-4206-4134-A2C2-E10A9A6B6777@yahoo.com>
 <1423871485.1984445.227344865.1A12242C@webmail.messagingengine.com>
Message-ID: <63780BD0-25A6-4A9A-A2C8-BE40C0FE7F0B@yahoo.com>

On Feb 13, 2015, at 15:51, random832 at fastmail.us wrote:

> On Fri, Feb 13, 2015, at 17:16, Andrew Barnert wrote:
>> OK, that makes this seem even _less_ useful for the stat case.
>> 
>> The reasons to use namedtuple/structseq in the first place for stat are
>> (a) so existing code written to the original tuple interface can continue
>> to treat the value as a tuple while newer code can access the new
>> attributes that don't fit;
> 
> What I'm getting is: the reason to want a simple namespace syntax is to
> get people to stop returning tuples for stuff like this in the first
> place, so that there is no original tuple interface. The horse has
> already bolted for the stat case.

But (as Greg Ewing said better than me), I don't think there really is a common use case for this at all.

Have you ever, say, designed a 2D graphics package and decided to pass around 2-tuples instead of point objects just because it was too hard to create a Point class?

And yes, I realize that particular example sucks. But my point is that every example I can think of sucks, and neither the OP nor anyone else has thought of one that doesn't.

(And stat, in particular, is not a good example. It was added to a much simpler and weaker language than today's Python, and it was coded in C, not Python, so the reasons for its design aren't likely to be relevant here in the first place.)

From chris.barker at noaa.gov  Sat Feb 14 01:32:46 2015
From: chris.barker at noaa.gov (Chris Barker - NOAA Federal)
Date: Fri, 13 Feb 2015 16:32:46 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <81E99F71-29C3-41CE-B53F-8D2102E689E4@yahoo.com>
References: <CADQLQrUwRMdrLTJg7nEgtdDJ9wZzi4UjpM_9Uu6WxgeF-4K-8w@mail.gmail.com>
 <1586802961.2472584.1423786004303.JavaMail.yahoo@mail.yahoo.com>
 <20150213022405.GI2498@ando.pearwood.info> <7262978700952906911@unknownmsgid>
 <20150213061904.GL2498@ando.pearwood.info> <3781943608761054828@unknownmsgid>
 <81E99F71-29C3-41CE-B53F-8D2102E689E4@yahoo.com>
Message-ID: <-2223593909575252041@unknownmsgid>

>> But the "problem" here is that augmented assignment shouldn't work on
>> immutables at all.
>
> The problem is that the way augmented assignment works is to first treat the target as an expression, then call __iadd__ on the result, then assign the result of that back to the target.

Right - I figured that out, and found the FAQ. But why does it work
that way? Because it needs to work on immutable objects. If it didn't,
then you wouldn't need the "assign back to the original name" step.

Let's look at __iadd__:

If it's in-place for a mutable object, it needs to return self. But
the Python standard practice is that methods that mutate objects
shouldn't return self (like list.sort(), for instance).

But if you want the augmented assignment operators to support
immutables, there needs to be a way to get a new object back -- thus
the bind, and the confusing, limiting behavior.

But it's what we've got.

Is there a lesson here for the additional unpacking being proposed? I
have no idea.

-Chris

- CHB

From rob.cliffe at btinternet.com  Sat Feb 14 01:51:43 2015
From: rob.cliffe at btinternet.com (Rob Cliffe)
Date: Sat, 14 Feb 2015 00:51:43 +0000
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <20150213061904.GL2498@ando.pearwood.info>
References: <CADQLQrUwRMdrLTJg7nEgtdDJ9wZzi4UjpM_9Uu6WxgeF-4K-8w@mail.gmail.com>
 <1586802961.2472584.1423786004303.JavaMail.yahoo@mail.yahoo.com>
 <20150213022405.GI2498@ando.pearwood.info> <7262978700952906911@unknownmsgid>
 <20150213061904.GL2498@ando.pearwood.info>
Message-ID: <54DE9C1F.7000504@btinternet.com>


On 13/02/2015 06:19, Steven D'Aprano wrote:
> On Thu, Feb 12, 2015 at 07:43:36PM -0800, Chris Barker - NOAA Federal wrote:
>>> avoids any confusion over operators and having += duplicating the
>>> update method.
>> += duplicates the extend method on lists.
> Yes it does, and that sometimes causes confusion when people wonder why
> alist += blist is not *quite* the same as alist = alist + blist. It also
> leads to a quite ugly and unfortunate language wart with tuples:
>
> py> t = ([], None)
> py> t[0] += [1]
> Traceback (most recent call last):
>    File "<stdin>", line 1, in <module>
> TypeError: 'tuple' object does not support item assignment
> py> t
> ([1], None)
>
> Try explaining to novices why this is not a bug.
Er, I'm not a novice, but why isn't that a bug?  I would have expected t 
to be altered as shown without raising an exception. (And given that the 
code does raise an exception, I find it surprising that it otherwise has 
the presumably intended effect.) t[0] is a list, not a tuple, so I would 
expect it to behave as a list:

 >>> L = t[0]
 >>> L += [2]
 >>> t
([1,2], None)

I was also curious why you say that alist += blist is not quite the same 
as alist = alist + blist.
(So far I've worked out that the first uses __iadd__ and the second uses 
__add__.  And there is probably a performance difference.  And the 
semantics could be different for objects of a custom class, but as far 
as I can see they should be the same for list objects.  What have I missed?)
I would appreciate it if you or someone else can find the time to answer.
Rob Cliffe



From chris.barker at noaa.gov  Sat Feb 14 01:46:21 2015
From: chris.barker at noaa.gov (Chris Barker - NOAA Federal)
Date: Fri, 13 Feb 2015 16:46:21 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CAOTD34b=wUx7ie7bYJE2u+WdecOyMDEjnJd1ZC+yec8VyFo6gA@mail.gmail.com>
References: <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info> <54DCDB8A.9030602@oddbird.net>
 <20150212184332.GG2498@ando.pearwood.info>
 <CACac1F-yTQGec_kcyyObTtbrnTyfTMQkDLC2676wQJWR1CpM3w@mail.gmail.com>
 <868B43AF-2144-47DD-A2C2-45C6920A714A@stufft.io>
 <CACac1F_9v9tF-saVD_MHsrGfWRx6fyxxc+o2tff5HXRob5hxKQ@mail.gmail.com>
 <CAN3CYHxsSxuUX+WXkc3jtdSugEFEFm_H39mgF73ddF-U9Ktexw@mail.gmail.com>
 <20150213014805.GH2498@ando.pearwood.info>
 <CAN3CYHxm2xvNJ=SnE-9G2uX5GxQfe-Bdp3tvk_iab1SgkRj-Ng@mail.gmail.com>
 <87zj8h8kiv.fsf@thinkpad.rath.org>
 <CAOTD34b=wUx7ie7bYJE2u+WdecOyMDEjnJd1ZC+yec8VyFo6gA@mail.gmail.com>
Message-ID: <-2638738140795248968@unknownmsgid>

>> As a regular numpy user, I'd be very happy about that too :-).

I wouldn't -- I hated that in Matlab.

It turns out the only place it's really useful is the two kinds of
multiplication -- and we've now got the @ operator for that.

> .  I just fear it may be too late.

In some sense, it's too late _not_ to have them. It's been argued in
this thread that overloading the math operators for things that
aren't clearly math was a mistake. In which case they could be used
for element-wise operations. But Python wasn't designed to be an
array-oriented language -- so this is where we are.

-Chris

From greg.ewing at canterbury.ac.nz  Sat Feb 14 02:19:18 2015
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Sat, 14 Feb 2015 14:19:18 +1300
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <-2223593909575252041@unknownmsgid>
References: <CADQLQrUwRMdrLTJg7nEgtdDJ9wZzi4UjpM_9Uu6WxgeF-4K-8w@mail.gmail.com>
 <1586802961.2472584.1423786004303.JavaMail.yahoo@mail.yahoo.com>
 <20150213022405.GI2498@ando.pearwood.info> <7262978700952906911@unknownmsgid>
 <20150213061904.GL2498@ando.pearwood.info> <3781943608761054828@unknownmsgid>
 <81E99F71-29C3-41CE-B53F-8D2102E689E4@yahoo.com>
 <-2223593909575252041@unknownmsgid>
Message-ID: <54DEA296.2040906@canterbury.ac.nz>

Chris Barker - NOAA Federal wrote:
> But why does it work
> that way? Because it needs to work on immutable objects. If it didn't,
> then you wouldn't need the "assign back to the original name" step.

This doesn't make it wrong for in-place operators to
work on immutable objects. There are two distinct use
cases:

1) You want to update a mutable object in-place.

2) The LHS is a complex expression that you only want
    to write out and evaluate once.

Case (2) applies equally well to mutable and immutable
objects.
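A small sketch of case (2); the `expensive_index` function is a made-up stand-in for a complex target expression:

```python
calls = []

def expensive_index():
    # Made-up stand-in for a complex target expression.
    calls.append(1)
    return 0

data = [10]

data[expensive_index()] += 1  # target expression evaluated once
print(len(calls), data)       # 1 [11]

data[expensive_index()] = data[expensive_index()] + 1  # written out: twice
print(len(calls), data)       # 3 [12]
```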

There are ways that the tuple problem could be fixed, such
as skipping the assignment if __iadd__ returns the same
object. But that would be a backwards-incompatible change,
since there could be code that relies on the assignment
always happening.

> If it's in-place for a mutable object, it needs to return self. But
> the python standard practice is that methods that mutate objects
> shouldn't return self ( like list.sort() ) for instance.

The reason for that is to catch the mistake of using a
mutating method when you meant to use a non-mutating one.
That doesn't apply to __iadd__, because you don't
usually call it yourself.

-- 
Greg

From abarnert at yahoo.com  Sat Feb 14 03:02:42 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Fri, 13 Feb 2015 18:02:42 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <54DE9C1F.7000504@btinternet.com>
References: <CADQLQrUwRMdrLTJg7nEgtdDJ9wZzi4UjpM_9Uu6WxgeF-4K-8w@mail.gmail.com>
 <1586802961.2472584.1423786004303.JavaMail.yahoo@mail.yahoo.com>
 <20150213022405.GI2498@ando.pearwood.info> <7262978700952906911@unknownmsgid>
 <20150213061904.GL2498@ando.pearwood.info> <54DE9C1F.7000504@btinternet.com>
Message-ID: <C0678794-81EB-477B-87C2-8AE1655E60A9@yahoo.com>

On Feb 13, 2015, at 16:51, Rob Cliffe <rob.cliffe at btinternet.com> wrote:

> On 13/02/2015 06:19, Steven D'Aprano wrote:
>> On Thu, Feb 12, 2015 at 07:43:36PM -0800, Chris Barker - NOAA Federal wrote:
>>>> avoids any confusion over operators and having += duplicating the
>>>> update method.
>>> += duplicates the extend method on lists.
>> Yes it does, and that sometimes causes confusion when people wonder why
>> alist += blist is not *quite* the same as alist = alist + blist. It also
>> leads to a quite ugly and unfortunate language wart with tuples:
>> 
>> py> t = ([], None)
>> py> t[0] += [1]
>> Traceback (most recent call last):
>>   File "<stdin>", line 1, in <module>
>> TypeError: 'tuple' object does not support item assignment
>> py> t
>> ([1], None)
>> 
>> Try explaining to novices why this is not a bug.
> Er, I'm not a novice, but why isn't that a bug?  I would have expected t to be altered as shown without raising an exception. (And given that the code does raise an exception, I find it surprising that it otherwise has the presumably intended effect.) t[0] is a list, not a tuple, so I would expect it to behave as a list:

Right, t[0] behaves like a list. But assigning (augmented or otherwise) to t[0] isn't an operation on the value t[0], it's an operation on the value t (in particular, `t.__setitem__(0, newval)`).

Augmented assignment does three things: it fetches the existing value, calls the __iadd__ operator on it (or __add__ if there's no __iadd__, which is how it works for immutable values), then stores the resulting new value. For this case, the first two steps succeed, and the third fails.

There's really no good alternative here. Either you don't allow augmented assignment, you don't allow it to work with immutable values (so `t=[0]; t[0] += 1` fails), or you add reference types and overloadable assignment operators and the whole mess that comes with them (as seen in C++) instead of Python's nifty complex assignment targets.

> >>> L = t[0]
> >>> L += [2]
> >>> t
> ([1,2], None)
> 
> I was also curious why you say that alist += blist is not quite the same as alist = alist + blist.
> (So far I've worked out that the first uses __iadd__ and the second uses __add__.  And there is probably a performance difference.  And the semantics could be different for objects of a custom class, but as far as I can see they should be the same for list objects.  What have I missed?)

Consider this case:

    >>> clist = [0]
    >>> alist = clist
    >>> blist = [1]

Now, compare:

    >>> alist = alist + blist
    >>> clist
    [0]

... versus:

    >>> alist += blist
    >>> clist
    [0, 1]

And that's because the semantics _are_ different for __add__ vs. __iadd__ for a list: the former creates and returns a new list, the latter modifies and returns self, and that's clearly visible (and often important to your code!) if there are any other references to self out there. That's the real reason we need __iadd__, not efficiency.

> I would appreciate it if you or someone else can find the time to answer.
> Rob Cliffe
> 
> 

From abarnert at yahoo.com  Sat Feb 14 03:12:42 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Fri, 13 Feb 2015 18:12:42 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <54DEA296.2040906@canterbury.ac.nz>
References: <CADQLQrUwRMdrLTJg7nEgtdDJ9wZzi4UjpM_9Uu6WxgeF-4K-8w@mail.gmail.com>
 <1586802961.2472584.1423786004303.JavaMail.yahoo@mail.yahoo.com>
 <20150213022405.GI2498@ando.pearwood.info> <7262978700952906911@unknownmsgid>
 <20150213061904.GL2498@ando.pearwood.info> <3781943608761054828@unknownmsgid>
 <81E99F71-29C3-41CE-B53F-8D2102E689E4@yahoo.com>
 <-2223593909575252041@unknownmsgid> <54DEA296.2040906@canterbury.ac.nz>
Message-ID: <341D962C-DB8A-47A9-810C-358D78CAEB53@yahoo.com>

On Feb 13, 2015, at 17:19, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:

> Chris Barker - NOAA Federal wrote:
>> But why does it work
>> that way? Because it needs to work on immutable objects. If it didn't,
>> then you wouldn't need the "assign back to the original name" step.
> 
> This doesn't make it wrong for in-place operators to
> work on immutable objects. There are two distinct use
> cases:
> 
> 1) You want to update a mutable object in-place.
> 
> 2) The LHS is a complex expression that you only want
>   to write out and evaluate once.
> 
> Case (2) applies equally well to mutable and immutable
> objects.
> 
> There are ways that the tuple problem could be fixed, such
> as skipping the assignment if __iadd__ returns the same
> object. But that would be a backwards-incompatible change,
> since there could be code that relies on the assignment
> always happening.

I think it's reasonable for a target to be able to assume that it will get a setattr or setitem when one of its subobjects is assigned to. You might need to throw out cached computed properties, or move the element in sorted order, or call a trace-on-change callback, etc. This change would mean that augmented assignment isn't really assignment at all. (And if you don't _want_ assignment, you can always call t[0].extend([1]) instead of t[0] += [1].)

>> If it's in-place for a mutable object, it needs to return self. But
>> the python standard practice is that methods that mutate objects
>> shouldn't return self ( like list.sort() ) for instance.
> 
> The reason for that is to catch the mistake of using a
> mutating method when you meant to use a non-mutating one.
> That doesn't apply to __iadd__, because you don't
> usually call it yourself.

I think there's another big distinction: a method call is an expression, not a statement, so if it returned self, you could chain multiple mutations into a single statement. Because it returns None, you get only one mutation per statement. And it's the leftmost thing in the statement that's being mutated, except in uncommon or pathological code.

Unlike many other languages, assignment (including augmented assignment) is a statement in Python, so that isn't an issue. So you still only get one mutation per statement, and to the leftmost thing, when using augmented assignment. (And, as you say, you usually don't call __iadd__ itself; if you really wanted to call it explicitly and chain calls together, you could, but that falls under pathological code.)

From steve at pearwood.info  Sat Feb 14 03:19:19 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Sat, 14 Feb 2015 13:19:19 +1100
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <3781943608761054828@unknownmsgid>
References: <CADQLQrUwRMdrLTJg7nEgtdDJ9wZzi4UjpM_9Uu6WxgeF-4K-8w@mail.gmail.com>
 <1586802961.2472584.1423786004303.JavaMail.yahoo@mail.yahoo.com>
 <20150213022405.GI2498@ando.pearwood.info> <7262978700952906911@unknownmsgid>
 <20150213061904.GL2498@ando.pearwood.info> <3781943608761054828@unknownmsgid>
Message-ID: <20150214021918.GS2498@ando.pearwood.info>

On Fri, Feb 13, 2015 at 08:02:56AM -0800, Chris Barker - NOAA Federal wrote:
> > On Feb 12, 2015, at 10:24 PM, Steven D'Aprano <steve at pearwood.info> wrote:
> >> += duplicates the extend method on lists.
> >
> > Yes it does, and that sometimes causes confusion when people wonder why
> > alist += blist is not *quite* the same as alist = alist + blist.
> 
> Actually, that's the primary motivator for += and friends -- to
> support in-place operations on mutables. Notably numpy arrays.

I'm not sure that implementing __iadd__ for numpy arrays is much harder 
than implementing an iadd method, and there isn't that much difference 
in readability between these two possible alternatives:

    data += some_numpy_array
    data.iadd(some_numpy_array)

If the motivation for augmented assignment was "numpy users don't want 
to type a method name", then I think that pandering to them was a poor 
decision.

I don't remember the discussion leading up to augmented assignment being 
added to the language, but I remember that *before* it was added there 
was a steady stream of people asking why they couldn't write:

    n += 1

like in C. (And a smaller number wanting to write ++n and n++ too, but 
fortunately nowhere near as many.)


[...]
> But the "problem" here is that augmented assignment shouldn't work on
> immutables at all.

That isn't the problem. Having augmented assignment work with immutables 
works fine. Even tuples can work perfectly:

py> t = ([1, 2, 3], "spam")
py> t += (None,)
py> t
([1, 2, 3], 'spam', None)

I might have been tempted to say that the *real* problem is mutables, 
not immutables, but even that is not the case:

py> t[0][2] += 1000
py> t
([1, 2, 1003], 'spam', None)

Perhaps the actual problem is that objects have no way of signalling to 
Python that they have, or have not, performed an in-place mutation, so 
that Python knows whether or not to try re-binding to the left hand 
reference. Or perhaps the real problem is that Python is so dynamic that 
it cannot tell whether a binding operation will succeed. Or that after a 
binding fails, that it cannot tell whether or not to catch and suppress 
the exception. There's no shortage of candidates for "the real problem".


> But then we wouldn't have the too appealing to resist syntactic sugar
> for integer incrementing.
> 
> But are you saying that augmented assignment was simply a mistake
> altogether, and therefore no new use of it should be added at all (
> and you'd deprecate it if you could)?

What's done is done. I certainly wouldn't deprecate it now. If += was a 
mistake, then fixing that mistake is more painful than living with it. 
Is it a mistake? I don't know. I guess that depends on whether you get 
more value from being able to write += than you lose from having to 
deal with the corner cases that fail.

The broader lesson from augmented assignment is that syntax and 
semantics are not entirely independent. Syntax that is unproblematic in 
C, with C's semantics, has painful corner cases with Python's semantics.

The narrower lesson is that I am cautious about using operators when a 
method or function can do. Saving a few keystrokes is not sufficient. 
Bringing it back to dicts:

    settings.printer_config.update(global_settings, user_settings)

is obvious and easy and will not fail even if settings is a named tuple 
or printer_config is a read-only property (for example), while:

    settings.printer_config += global_settings + user_settings

may succeed and yet still raise an exception.


-- 
Steve

From abarnert at yahoo.com  Sat Feb 14 03:18:32 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Fri, 13 Feb 2015 18:18:32 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <C0678794-81EB-477B-87C2-8AE1655E60A9@yahoo.com>
References: <CADQLQrUwRMdrLTJg7nEgtdDJ9wZzi4UjpM_9Uu6WxgeF-4K-8w@mail.gmail.com>
 <1586802961.2472584.1423786004303.JavaMail.yahoo@mail.yahoo.com>
 <20150213022405.GI2498@ando.pearwood.info> <7262978700952906911@unknownmsgid>
 <20150213061904.GL2498@ando.pearwood.info> <54DE9C1F.7000504@btinternet.com>
 <C0678794-81EB-477B-87C2-8AE1655E60A9@yahoo.com>
Message-ID: <B2E162CC-CCDF-4F66-8845-B2D708F00DDB@yahoo.com>

On Feb 13, 2015, at 18:02, Andrew Barnert <abarnert at yahoo.com.dmarc.invalid> wrote:

> On Feb 13, 2015, at 16:51, Rob Cliffe <rob.cliffe at btinternet.com> wrote:
> 
>> On 13/02/2015 06:19, Steven D'Aprano wrote:
>>> On Thu, Feb 12, 2015 at 07:43:36PM -0800, Chris Barker - NOAA Federal wrote:
>>>>> avoids any confusion over operators and having += duplicating the
>>>>> update method.
>>>> += duplicates the extend method on lists.
>>> Yes it does, and that sometimes causes confusion when people wonder why
>>> alist += blist is not *quite* the same as alist = alist + blist. It also
>>> leads to a quite ugly and unfortunate language wart with tuples:
>>> 
>>> py> t = ([], None)
>>> py> t[0] += [1]
>>> Traceback (most recent call last):
>>>  File "<stdin>", line 1, in <module>
>>> TypeError: 'tuple' object does not support item assignment
>>> py> t
>>> ([1], None)
>>> 
>>> Try explaining to novices why this is not a bug.
>> Er, I'm not a novice, but why isn't that a bug?  I would have expected t to be altered as shown without raising an exception. (And given that the code does raise an exception, I find it surprising that it otherwise has the presumably intended effect.) t[0] is a list, not a tuple, so I would expect it to behave as a list:
> 
> Right, t[0] behaves like a list. But assigning (augmented or otherwise) to t[0] isn't an operation on the value t[0], it's an operation on the value t (in particular, `t.__setitem__(0, newval)`).
> 
> Augmented assignment does three things: it fetches the existing value, calls the __iadd__ operator on it (or __add__ if there's no __iadd__, which is how it works for immutable values), then stores the resulting new value. For this case, the first two steps succeed, and the third fails.
> 
> There's really no good alternative here. Either you don't allow augmented assignment, you don't allow it to work with immutable values (so `t=[0]; t[0] += 1` fails), or you add reference types and overloadable assignment operators and the whole mess that comes with them (as seen in C++) instead of Python's nifty complex assignment targets.
> 
>>>>> L = t[0]
>>>>> L += [2]
>>>>> t
>> ([1,2], None)
>> 
>> I was also curious why you say that alist += blist is not quite the same as alist = alist + blist.
>> (So far I've worked out that the first uses __iadd__ and the second uses __add__.  And there is probably a performance difference.  And the semantics could be different for objects of a custom class, but as far as I can see they should be the same for list objects.  What have I missed?)
> 
> Consider this case:
> 
>>>> clist = [0]
>>>> alist = clist
>>>> blist = [1]
> 
> Now, compare:
> 
>>>> alist = alist + blist
>>>> clist
>    [0]
> 
> ... versus:
> 
>>>> alist += blist
>>>> clist
>    [0, 1]
> 
> And that's because the semantics _are_ different for __add__ vs. __iadd__ for a list: the former creates and returns a new list, the latter modifies and returns self, and that's clearly visible (and often important to your code!) if there are any other references to self out there. That's the real reason we need __iadd__, not efficiency.
> 
>> I would appreciate it if you or someone else can find the time to answer.

I found an old partial explanation I'd started a year or two ago that you might find useful (or, more likely, you'll find it confusing, but then maybe you can help me improve the explanation for future readers...) and posted it online (http://stupidpythonideas.blogspot.com/2015/02/augmented-assignments-b.html).

In case you're wondering, it was originally intended as a StackOverflow answer to someone who wanted a more detailed explanation than the FAQ gives (and he brought up C++ for comparison), but then the question was closed before I could post it.

From greg.ewing at canterbury.ac.nz  Sat Feb 14 03:46:20 2015
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Sat, 14 Feb 2015 15:46:20 +1300
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <341D962C-DB8A-47A9-810C-358D78CAEB53@yahoo.com>
References: <CADQLQrUwRMdrLTJg7nEgtdDJ9wZzi4UjpM_9Uu6WxgeF-4K-8w@mail.gmail.com>
 <1586802961.2472584.1423786004303.JavaMail.yahoo@mail.yahoo.com>
 <20150213022405.GI2498@ando.pearwood.info> <7262978700952906911@unknownmsgid>
 <20150213061904.GL2498@ando.pearwood.info> <3781943608761054828@unknownmsgid>
 <81E99F71-29C3-41CE-B53F-8D2102E689E4@yahoo.com>
 <-2223593909575252041@unknownmsgid> <54DEA296.2040906@canterbury.ac.nz>
 <341D962C-DB8A-47A9-810C-358D78CAEB53@yahoo.com>
Message-ID: <54DEB6FC.3050405@canterbury.ac.nz>

Andrew Barnert wrote:

> I think it's reasonable for a target to be able to assume that it will get a
> setattr or setitem when one of its subobjects is assigned to. You might need
> to throw out cached computed properties, ...

That's what I was thinking. But I'm not sure it would be
a good design, since it would *only* catch mutations made
through in-place operators. You would need some other
mechanism for detecting mutations of sub-objects made in
other ways, and whatever mechanism you used for that would
probably catch in-place operations as well.

-- 
Greg

From abarnert at yahoo.com  Sat Feb 14 03:53:17 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Fri, 13 Feb 2015 18:53:17 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <20150214021918.GS2498@ando.pearwood.info>
References: <CADQLQrUwRMdrLTJg7nEgtdDJ9wZzi4UjpM_9Uu6WxgeF-4K-8w@mail.gmail.com>
 <1586802961.2472584.1423786004303.JavaMail.yahoo@mail.yahoo.com>
 <20150213022405.GI2498@ando.pearwood.info> <7262978700952906911@unknownmsgid>
 <20150213061904.GL2498@ando.pearwood.info> <3781943608761054828@unknownmsgid>
 <20150214021918.GS2498@ando.pearwood.info>
Message-ID: <FADC7107-FA06-440E-BC8B-2F937F835C0F@yahoo.com>

On Feb 13, 2015, at 18:19, Steven D'Aprano <steve at pearwood.info> wrote:

> The narrower lesson is that I am cautious about using operators when a 
> method or function can do. Saving a few keystrokes is not sufficient. 
> Bringing it back to dicts:
> 
>    settings.printer_config.update(global_settings, user_settings)
> 
> is obvious and easy and will not fail even if settings is a named tuple 
> or printer_config is a read-only property (for example), while:
> 
>    settings.printer_config += global_settings + user_settings
> 
> may succeed and yet still raise an exception.

I think another way to look at this is that += is an assignment, while update isn't. In this case, you don't really want to assign anything to settings.printer_config, you want to modify what's there. So a method call like update makes more sense, even if it's a bit more verbose. If you always think things through like that, you'll never run into the "tuple wart".

Of course your point about C is well-taken: assignment in C (and even more so in descendants like C++) is a very different thing from assignment in Python, and given how many Python developers come from a C-derived language, it's not surprising that people have inappropriate intuitions about when += is and isn't appropriate, and occasionally run into this problem.

On the third hand, the problem doesn't come up very often in practice--that's why so many people don't run into it until they're already pretty experienced (and are therefore even more surprised), which implies that maybe it's not that important to worry about in the first place...

From abarnert at yahoo.com  Sat Feb 14 03:57:43 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Fri, 13 Feb 2015 18:57:43 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <54DEB6FC.3050405@canterbury.ac.nz>
References: <CADQLQrUwRMdrLTJg7nEgtdDJ9wZzi4UjpM_9Uu6WxgeF-4K-8w@mail.gmail.com>
 <1586802961.2472584.1423786004303.JavaMail.yahoo@mail.yahoo.com>
 <20150213022405.GI2498@ando.pearwood.info> <7262978700952906911@unknownmsgid>
 <20150213061904.GL2498@ando.pearwood.info> <3781943608761054828@unknownmsgid>
 <81E99F71-29C3-41CE-B53F-8D2102E689E4@yahoo.com>
 <-2223593909575252041@unknownmsgid> <54DEA296.2040906@canterbury.ac.nz>
 <341D962C-DB8A-47A9-810C-358D78CAEB53@yahoo.com>
 <54DEB6FC.3050405@canterbury.ac.nz>
Message-ID: <4BE246CC-0208-4675-B783-F217B772CCF1@yahoo.com>

On Feb 13, 2015, at 18:46, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:

> Andrew Barnert wrote:
> 
>> I think it's reasonable for a target to be able to assume that it will get a
>> setattr or setitem when one of its subobjects is assigned to. You might need
>> to throw out cached computed properties, ...
> 
> That's what I was thinking. But I'm not sure it would be
> a good design,

Now I'm confused.

The current design of Python guarantees that an object always gets a setattr or setitem when one of its elements is assigned to. That's an important property, for the reasons I suggested above. So any change would have to preserve that property. And skipping assignment when __iadd__ returns self would not preserve that property. So it's not just backward-incompatible, it's bad.

So, I don't know what the "it" you're suggesting as an alternative is. Not calling setitem/setattr, but instead calling some other method? If that method is just a post-change item_was_modified call I don't think it's sufficient, while if it's a pre-change item_will_be_modified(name, newval) that lets you change or reject the mutation it's equivalent to the existing setitem.
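
For example, here's a toy container (the names are made up) that
depends on that guarantee to invalidate a cache. Skipping the setitem
when __iadd__ returns self would silently break it:

```python
# A list of lists that caches the total element count, and relies on
# __setitem__ always being called when an element is (re)bound --
# including by augmented assignment, even when __iadd__ returned self.

class CachingList(list):
    def __init__(self, *args):
        super().__init__(*args)
        self._count = None              # cached total

    def __setitem__(self, i, value):
        self._count = None              # any store invalidates the cache
        super().__setitem__(i, value)

    def count_items(self):
        if self._count is None:
            self._count = sum(len(sub) for sub in self)
        return self._count

cl = CachingList([[1, 2], [3]])
assert cl.count_items() == 3
cl[0] += [4, 5]        # list.__iadd__ returns self, but __setitem__ still runs
assert cl.count_items() == 5   # cache was correctly invalidated
```

If augmented assignment skipped the store-back, count_items() would
keep returning the stale 3.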

> since it would *only* catch mutations made
> through in-place operators. You would need some other
> mechanism for detecting mutations of sub-objects made in
> other ways, and whatever mechanism you used for that would
> probably catch in-place operations as well.
> 
> -- 
> Greg
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/

From steve at pearwood.info  Sat Feb 14 04:05:37 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Sat, 14 Feb 2015 14:05:37 +1100
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <54DE76BA.6060607@canterbury.ac.nz>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <54DD1A0B.9000202@canterbury.ac.nz> <mbkilf$mie$1@ger.gmane.org>
 <54DE76BA.6060607@canterbury.ac.nz>
Message-ID: <20150214030537.GT2498@ando.pearwood.info>

On Sat, Feb 14, 2015 at 11:12:10AM +1300, Greg Ewing wrote:
> Stefan Behnel wrote:
> >Arithmetic expressions are always
> >evaluated from left to right. It thus seems obvious to me what gets created
> >first, and what gets added afterwards (and overwrites what's there).
> 
> I'm okay with "added afterwards", but the "overwrites"
> part is *not* obvious to me.
[snip yet another shopping list example]

I think that focusing on the question of "which wins" in the case of 
duplicate keys, the left or right operand, is not very productive. We 
can just declare that Python semantics are that the last value seen 
wins, which is the current behaviour for dict.update.
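
For reference, that current behaviour in a couple of lines:

```python
# dict.update already implements "last value seen wins" for
# duplicate keys:
a = {'x': 1, 'y': 2}
a.update({'x': 10, 'z': 3})
assert a == {'x': 10, 'y': 2, 'z': 3}   # the right operand's 'x' won
```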

Subclasses can override that, if they wish, but the easiest way is to 
just swap the order of the operands. Instead of a.update(b), use 
b.update(a). (If you're worried about contaminating other references to 
b, make a copy of it first. Whatever.)

dict.update does what it does very well. It solves 90% of the "update in 
place" problems. Specialist subclasses, like a multiset or a Counter, 
can solve the other 90% *wink*. What these requests for "dict addition" 
are really asking about is an easy way to solve the "copy and update" 
problem. There are a few answers:


(1) Not everything needs to be a one-liner or an expression. Just copy 
and update yourself, it's not hard.

(2) Write your own helper function. It's a three line function. Not 
everything needs to be a built-in:

    def copy_and_update(a, b):
        new = a.copy()
        new.update(b)
        return new

Add bells and whistles to taste.
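
For instance, one hypothetical bells-and-whistles version (the name is
made up) taking any number of mappings plus keyword arguments, copying
only once:

```python
# Variadic copy-and-update: copy the first mapping once, then fold in
# any number of further mappings and keyword arguments, last one wins.
def copy_and_update(a, *others, **kwargs):
    new = a.copy()
    for b in others:
        new.update(b)
    new.update(kwargs)
    return new

d = copy_and_update({'x': 1}, {'x': 2, 'y': 3}, {'z': 4}, y=99)
assert d == {'x': 2, 'y': 99, 'z': 4}
```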

(3) Some people think it should be an operator. There are strong 
feelings about which operator: + | ^ << have all been suggested. 
Operators have some disadvantages: they can only take two arguments, so 
copy-and-updating a series of dicts means making repeated copies which 
are thrown away.

There's the question of different types -- if you merge a SpamMapping 
and an EggsMapping and a CheeseMapping, what result do you get? If you 
care about the specific type, do you have to make yet another copy at 
the end to ensure you get the type you want?

    SpamMapping(a + b + c)  # or a | b | c

We shouldn't care about small efficiencies, say, 90% of the time, but 
this is starting to look a little worrisome, since it may lead to near 
quadratic behaviour.

Using an operator means that you get augmented assignment for free, even 
if you don't define an __iadd__ or __ior__ method. But augmented 
assignment isn't entirely problem-free. If your mapping is embedded in 
an immutable data structure, then:

    structure.mapping |= other  # or += if you prefer

may *succeed and yet raise an exception*. That's a nasty language wart. 
To me, that's a good reason to look for an alternative.

(4) So perhaps a method on Mapping is a better idea. That makes the 
return type obvious: it should be type(self). It allows implementations 
to be efficient in the face of multiple arguments, by avoiding the 
creation of temporary objects which are then copied and thrown away. 
Being a method, they can take keyword arguments too.

But the obvious name, updated(), is uncomfortably close to update() and 
perhaps easily confused with it. The next most obvious name, 
copy_and_update(), is a little long for my tastes.


(5) How about a function? It need not be a built-in, although there is 
precedent with sorted() and reversed(). It could be collections.updated().

With a function, it's a little less obvious what the return type should 
be. Perhaps the rule should be, if the first argument is a Mapping, use 
the type of that first argument. Otherwise use dict.

(6) Or we can use the Mapping constructor. We're already part way there: 

py> dict({'a': 1, 'b': 2, 'c': 3}, a=9999)
{'c': 3, 'b': 2, 'a': 9999}


The constructor makes a copy of the argument, and updates it with any 
keyword args. If it would take multiple arguments, that's the 
copy-and-update semantics we're after.

The downside is that a few mappings -- defaultdict immediately comes to 
mind -- have a constructor with a radically different signature. So you 
can't say:

    defaultdict(a, b, c, d)

(Personally, I think that changing the constructor signature like that 
is a mistake. But I'm not sure what alternatives there are.)


My sense of this is that using the constructor is the right solution, 
and for mappings with unusual signatures, consenting adults applies. For 
them, you have to do it the old-fashioned way:

    merged = defaultdict(func)
    merged.update(a, b, c, d)


I don't care about solving this for every obscure mapping type in the 
universe. Even obscure mapping types in the standard library :-)


-- 
Steve

From abarnert at yahoo.com  Sat Feb 14 04:37:30 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Fri, 13 Feb 2015 19:37:30 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <20150214030537.GT2498@ando.pearwood.info>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <54DD1A0B.9000202@canterbury.ac.nz> <mbkilf$mie$1@ger.gmane.org>
 <54DE76BA.6060607@canterbury.ac.nz>
 <20150214030537.GT2498@ando.pearwood.info>
Message-ID: <B13C9F55-FD11-4740-8C3C-4DF0B878C851@yahoo.com>


On Feb 13, 2015, at 19:05, Steven D'Aprano <steve at pearwood.info> wrote:

> On Sat, Feb 14, 2015 at 11:12:10AM +1300, Greg Ewing wrote:
>> Stefan Behnel wrote:
>>> Arithmetic expressions are always
>>> evaluated from left to right. It thus seems obvious to me what gets created
>>> first, and what gets added afterwards (and overwrites what's there).
>> 
>> I'm okay with "added afterwards", but the "overwrites"
>> part is *not* obvious to me.
> [snip yet another shopping list example]
> 
> I think that focusing on the question of "which wins" in the case of 
> duplicate keys, the left or right operand, is not very productive. We 
> can just declare that Python semantics are that the last value seen 
> wins, which is the current behaviour for dict.update.
> 
> Subclasses can override that, if they wish, but the easiest way is to 
> just swap the order of the operands. Instead of a.update(b), use 
> b.update(a). (If you're worried about contaminating other references to 
> b, make a copy of it first. Whatever.)
> 
> dict.update does what it does very well. It solves 90% of the "update in 
> place" problems. Specialist subclasses, like a multiset or a Counter, 
> can solve the other 90% *wink*. What these requests for "dict addition" 
> are really asking about is an easy way to solve the "copy and update" 
> problem. There are a few answers:
> 
> 
> (1) Not everything needs to be a one-liner or an expression. Just copy 
> and update yourself, it's not hard.
> 
> (2) Write your own helper function. It's a three line function. Not 
> everything needs to be a built-in:
> 
>    def copy_and_update(a, b):
>        new = a.copy()
>        new.update(b)
>        return new
> 
> Add bells and whistles to taste.
> 
> (3) Some people think it should be an operator. There are strong 
> feelings about which operator: + | ^ << have all been suggested. 
> Operators have some disadvantages: they can only take two arguments, so 
> copy-and-updating a series of dicts means making repeated copies which 
> are thrown away.
> 
> There's the question of different types -- if you merge a SpamMapping 
> and an EggsMapping and a CheeseMapping, what result do you get? If you 
> care about the specific type, do you have to make yet another copy at 
> the end to ensure you get the type you want?
> 
>    SpamMapping(a + b + c)  # or a | b | c
> 
> We shouldn't care about small efficiencies, say, 90% of the time, but 
> this is starting to look a little worrisome, since it may lead to near 
> quadratic behaviour.
> 
> Using an operator means that you get augmented assignment for free, even 
> if you don't define an __iadd__ or __ior__ method. But augmented 
> assignment isn't entirely problem-free. If your mapping is embedded in 
> an immutable data structure, then:
> 
>    structure.mapping |= other  # or += if you prefer
> 
> may *succeed and yet raise an exception*. That's a nasty language wart. 
> To me, that's a good reason to look for an alternative.
> 
> (4) So perhaps a method on Mapping is a better idea. That makes the 
> return type obvious: it should be type(self). It allows implementations 
> to be efficient in the face of multiple arguments, by avoiding the 
> creation of temporary objects which are then copied and thrown away. 
> Being a method, they can take keyword arguments too.
> 
> But the obvious name, updated(), is uncomfortably close to update() and 
> perhaps easily confused with it. The next most obvious name, 
> copy_and_update(), is a little long for my tastes.
> 
> 
> (5) How about a function? It need not be a built-in, although there is 
> precedent with sorted() and reversed(). It could be collections.updated().
> 
> With a function, it's a little less obvious what the return type should 
> be. Perhaps the rule should be, if the first argument is a Mapping, use 
> the type of that first argument. Otherwise use dict.
> 
> (6) Or we can use the Mapping constructor. We're already part way there: 
> 
> py> dict({'a': 1, 'b': 2, 'c': 3}, a=9999)
> {'c': 3, 'b': 2, 'a': 9999}
> 
> 
> The constructor makes a copy of the argument, and updates it with any 
> keyword args. If it would take multiple arguments, that's the 
> copy-and-update semantics we're after.
> 
> The downside is that a few mappings -- defaultdict immediately comes to 
> mind -- have a constructor with a radically different signature. So you 
> can't say:
> 
>    defaultdict(a, b, c, d)

Actually, that isn't a problem. defaultdict doesn't have a radically different signature at all, it just has one special parameter, followed by whatever other stuff dict wants. The docs make it clear that whatever's there will be treated exactly the same as dict, and the C implementation and the old Python-equivalent recipe both do this by effectively doing self.default_factory = args[0]; super().__init__(*args[1:], **kwargs).
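
A rough Python-level sketch of that delegation (simplified; the real
implementation also handles copying, pickling, repr, and so on):

```python
# Simplified sketch of defaultdict's constructor: peel off the
# default_factory, hand everything else straight to dict.  Because of
# that delegation, any new dict constructor signature would apply to
# this class automatically.
class MyDefaultDict(dict):
    def __init__(self, default_factory=None, *args, **kwargs):
        self.default_factory = default_factory
        super().__init__(*args, **kwargs)

    def __missing__(self, key):
        # dict.__getitem__ calls this on subclasses for absent keys
        if self.default_factory is None:
            raise KeyError(key)
        self[key] = self.default_factory()
        return self[key]

d = MyDefaultDict(list, {'a': 1}, b=2)
assert d == {'a': 1, 'b': 2}
d['c'].append(3)          # missing key -> factory-made empty list
assert d['c'] == [3]
```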

So there is no real problem at all with this suggestion. Which is why it's now my favorite of the bunch. Even if we get the generalized unpacking (which has other merits), I'd still like this one.

Of course it only fixes dict and a very small handful of classes (including defaultdict, but not much else) that inherit or encapsulate a dict and delegate blindly. Every other class--OrderedDict and UserDict, all the third-party mappings, etc.--has to be changed as well or it doesn't get the new behavior. (And the Mapping mixin can't help here, as the collections.abc classes don't provide any construction behavior at all.)

But that's fine. Implementing (Mutable)Mapping doesn't guarantee that you are 100% like a dict, only that you're a (mutable) mapping. (For example, you already don't get a copy method.) If some mappings can be constructed from multiple mapping-or-iterator arguments and some can't, so what?

That being said, I think UserDict _does_ need to be fixed as a special case, because its two purposes are (a) to exactly simulate dict as far as possible, and (b) to serve as sample code for people writing their own mappings.

And since OrderedDict is pretty trivial to change, I'd probably do that too. But I wouldn't hunt through the stdlib looking for any special-purpose mappings, or add code that tries to warn on non-compliant third-party mappings, or even add anything to the documentation about the expected constructor signature (we don't currently explain how to take a single mapping or iterable-of-pairs plus keywords, or similarly for any other collections constructors, except the special case of Set._from_iterable, and only because that's needed to make set operators work).

However, if you want update too (as you seem to, given your defaultdict suggestion), I _would_ change the MutableMapping.update default implementation (which automatically fixes the other collections mappings that aren't fixed by dict). That seems like a small gain for a minuscule cost, so why not?
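
A sketch of what such a variadic default implementation might look
like (hypothetical signature; the method currently takes at most one
positional argument):

```python
# Hypothetical variadic MutableMapping.update: accept any number of
# mappings or iterables of pairs, plus keyword arguments, writing
# everything through self[k] = v so subclass __setitem__ hooks fire.
from collections.abc import MutableMapping

class MyMapping(MutableMapping):
    def __init__(self):
        self._data = {}
    def __getitem__(self, k): return self._data[k]
    def __setitem__(self, k, v): self._data[k] = v
    def __delitem__(self, k): del self._data[k]
    def __iter__(self): return iter(self._data)
    def __len__(self): return len(self._data)

    def update(self, *others, **kwargs):
        for other in others:
            if hasattr(other, 'keys'):
                for k in other.keys():
                    self[k] = other[k]
            else:
                for k, v in other:
                    self[k] = v
        for k, v in kwargs.items():
            self[k] = v

m = MyMapping()
m.update({'a': 1}, [('b', 2)], c=3)
assert dict(m) == {'a': 1, 'b': 2, 'c': 3}
```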

> (Personally, I think that changing the constructor signature like that 
> is a mistake. But I'm not sure what alternatives there are.)
> 
> 
> My sense of this is that using the constructor is the right solution, 
> and for mappings with unusual signatures, consenting adults applies. For 
> them, you have to do it the old-fashioned way:
> 
>    merged = defaultdict(func)
>    merged.update(a, b, c, d)

But d = defaultdict(func, a, b, c, d) would then just work, since defaultdict passes everything after the factory straight through to dict.

And if you want to preserve the first (or last, or whatever) one's default factory, that's trivial to do, and explicit without being horribly verbose: d = defaultdict(a.default_factory, a, b, c, d).

> I don't care about solving this for every obscure mapping type in the 
> universe. Even obscure mapping types in the standard library :-)
> 
> 
> -- 
> Steve

From Nikolaus at rath.org  Sat Feb 14 05:04:23 2015
From: Nikolaus at rath.org (Nikolaus Rath)
Date: Fri, 13 Feb 2015 20:04:23 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CAOTD34b=wUx7ie7bYJE2u+WdecOyMDEjnJd1ZC+yec8VyFo6gA@mail.gmail.com>
 (Erik Bray's message of "Fri, 13 Feb 2015 18:29:53 -0500")
References: <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <54DCDB8A.9030602@oddbird.net>
 <20150212184332.GG2498@ando.pearwood.info>
 <CACac1F-yTQGec_kcyyObTtbrnTyfTMQkDLC2676wQJWR1CpM3w@mail.gmail.com>
 <868B43AF-2144-47DD-A2C2-45C6920A714A@stufft.io>
 <CACac1F_9v9tF-saVD_MHsrGfWRx6fyxxc+o2tff5HXRob5hxKQ@mail.gmail.com>
 <CAN3CYHxsSxuUX+WXkc3jtdSugEFEFm_H39mgF73ddF-U9Ktexw@mail.gmail.com>
 <20150213014805.GH2498@ando.pearwood.info>
 <CAN3CYHxm2xvNJ=SnE-9G2uX5GxQfe-Bdp3tvk_iab1SgkRj-Ng@mail.gmail.com>
 <87zj8h8kiv.fsf@thinkpad.rath.org>
 <CAOTD34b=wUx7ie7bYJE2u+WdecOyMDEjnJd1ZC+yec8VyFo6gA@mail.gmail.com>
Message-ID: <8761b5f8ag.fsf@vostro.rath.org>

On Feb 13 2015, Erik Bray <erik.m.bray-Re5JQEeQqe8AvxtiuMwx3w at public.gmane.org> wrote:
> On Fri, Feb 13, 2015 at 6:22 PM, Nikolaus Rath <Nikolaus-BTH8mxji4b0 at public.gmane.org> wrote:
>> Alexander Heger <python-ePO413wvQzY-XMD5yJDbdMReXY1tMh2IBg at public.gmane.org> writes:
>>>> As far as I was, and still am, concerned, & is the obvious and most
>>>> natural operator for concatenation. [1, 2]+[3, 4] should return [4, 6],
>>>> and sum(bunch of lists) should be a meaningless operation, like
>>>> sum(bunch of HTTP servers). Or sum(bunch of dicts).
>>>
>>> This only works if the items *can* be added.  Dics should not make
>>> such an assumption. & is not a more natural operator, because, why
>>> would you then not just expect that [1, 2] & [3, 4] returns [1 & 2, 3
>>> & 4] == [0 , 0] ?  the same would be true for any operator you pick.
>>
>> Which brings us back to the idea to introduce elementwise variants of
>> any operator:
>>
>> [1,2] .+ [3,4] == [1+3, 2+4]
>> [1,2] + [3,4] == [1,2,3,4]
>> [1,2] .& [3,4] == [1 & 3, 2 & 4]
>> [1,2] & [3,4] == not (yet?) defined
>>
>> As a regular numpy user, I'd be very happy about that too :-).
>
> ... except for the part where in Numpy ".+" and "+" and so on would

Parse error.

> have to be identical, which would be no end of confusing especially
> when adding, say, a Numpy array and a list.


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

             "Time flies like an arrow, fruit flies like a Banana."

From Nikolaus at rath.org  Sat Feb 14 05:13:19 2015
From: Nikolaus at rath.org (Nikolaus Rath)
Date: Fri, 13 Feb 2015 20:13:19 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <-2638738140795248968@unknownmsgid> (Chris Barker's message of
 "Fri, 13 Feb 2015 16:46:21 -0800")
References: <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <54DCDB8A.9030602@oddbird.net>
 <20150212184332.GG2498@ando.pearwood.info>
 <CACac1F-yTQGec_kcyyObTtbrnTyfTMQkDLC2676wQJWR1CpM3w@mail.gmail.com>
 <868B43AF-2144-47DD-A2C2-45C6920A714A@stufft.io>
 <CACac1F_9v9tF-saVD_MHsrGfWRx6fyxxc+o2tff5HXRob5hxKQ@mail.gmail.com>
 <CAN3CYHxsSxuUX+WXkc3jtdSugEFEFm_H39mgF73ddF-U9Ktexw@mail.gmail.com>
 <20150213014805.GH2498@ando.pearwood.info>
 <CAN3CYHxm2xvNJ=SnE-9G2uX5GxQfe-Bdp3tvk_iab1SgkRj-Ng@mail.gmail.com>
 <87zj8h8kiv.fsf@thinkpad.rath.org>
 <CAOTD34b=wUx7ie7bYJE2u+WdecOyMDEjnJd1ZC+yec8VyFo6gA@mail.gmail.com>
 <-2638738140795248968@unknownmsgid>
Message-ID: <873869f7vk.fsf@vostro.rath.org>

On Feb 13 2015, Chris Barker - NOAA Federal <chris.barker-32lpuo7BZBA at public.gmane.org> wrote:
>>> As a regular numpy user, I'd be very happy about that too :-).
>
> I wouldn't -- I hated that in Matlab.

Well, I hate using Matlab, but I like the different operators :-).

> It turns out the only place it's really useful is the two kinds of
> multiplication -- and we've now got the @ operator for that.

I don't think so. It'd e.g. free up + for array concatenation, / for
"vector" division (i.e., v / M == inverse(M) .* v, but without computing
the inverse), and ** for matrix exponentials. 
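
To illustrate the "without computing the inverse" point: in numpy
terms that's solve(M, v) rather than inv(M) @ v. A tiny pure-Python
2x2 version of the same idea (just for illustration):

```python
# Solving M @ x = v directly (Cramer's rule for the 2x2 case) gives
# the same answer as inverse(M) @ v without ever forming the inverse --
# which is what a hypothetical `v / M` operator could mean.
def solve2x2(M, v):
    (a, b), (c, d) = M
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular matrix")
    x0 = (v[0] * d - b * v[1]) / det
    x1 = (a * v[1] - v[0] * c) / det
    return [x0, x1]

M = [[2.0, 1.0], [1.0, 3.0]]
x = solve2x2(M, [5.0, 10.0])
# Verify by substituting back: M @ x should reproduce v.
assert abs(2 * x[0] + 1 * x[1] - 5) < 1e-9
assert abs(1 * x[0] + 3 * x[1] - 10) < 1e-9
```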

Yeah, backwards compatibility, I know. Just fantasizing about what could
have been....

Best,
Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

             "Time flies like an arrow, fruit flies like a Banana."

From greg.ewing at canterbury.ac.nz  Sat Feb 14 05:32:50 2015
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Sat, 14 Feb 2015 17:32:50 +1300
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <20150214030537.GT2498@ando.pearwood.info>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <54DD1A0B.9000202@canterbury.ac.nz> <mbkilf$mie$1@ger.gmane.org>
 <54DE76BA.6060607@canterbury.ac.nz> <20150214030537.GT2498@ando.pearwood.info>
Message-ID: <54DECFF2.6050804@canterbury.ac.nz>

Steven D'Aprano wrote:
> I think that focusing on the question of "which wins" in the case of 
> duplicate keys, the left or right operand, is not very productive. We 
> can just declare that Python semantics are that the last value seen 
> wins,

Sure we can, but that's something to be learned. It's
not going to be obvious to everyone.

-- 
Greg

From stephen at xemacs.org  Sat Feb 14 05:39:25 2015
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Sat, 14 Feb 2015 13:39:25 +0900
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <54DE7A9F.6040704@btinternet.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <54DCDB8A.9030602@oddbird.net>
 <20150212184332.GG2498@ando.pearwood.info>
 <CACac1F-yTQGec_kcyyObTtbrnTyfTMQkDLC2676wQJWR1CpM3w@mail.gmail.com>
 <868B43AF-2144-47DD-A2C2-45C6920A714A@stufft.io>
 <87iof6mnjj.fsf@uwakimon.sk.tsukuba.ac.jp>
 <54DE7A9F.6040704@btinternet.com>
Message-ID: <878ug1m7ia.fsf@uwakimon.sk.tsukuba.ac.jp>

Rob Cliffe writes:

 > There are many features of the languages where decisions have had
 > to be made one way or another.  Once the decision is made, it's
 > unfair to say the semantics are muddy because they could have been
 > different.

The decision hasn't been made yet.  At this point, it *is* fair to
argue that users will be confused about the semantics of the syntax
when they encounter it, and that looking it up is a burden.  Many
times we see professional Python programmers on these lists say "I
always have to look that one up".

Whether that argument should win or not, is the question.  It has been
decided both ways in the past.  Here the burden is not so great, but
IMO the benefit is even less (to me net negative, as I would use, and
often define, appropriate methods, not the syntax).

YMMV, I just wanted to lay out the case for multiple interpretations
being confusing, as I understand it.  And missed two (Greg Ewing
suggested non-numerical conflicts, and Steven reported a suggestion of
multiple values inserted in container).


From chris.barker at noaa.gov  Sat Feb 14 05:38:20 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Fri, 13 Feb 2015 20:38:20 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <54DEA296.2040906@canterbury.ac.nz>
References: <CADQLQrUwRMdrLTJg7nEgtdDJ9wZzi4UjpM_9Uu6WxgeF-4K-8w@mail.gmail.com>
 <1586802961.2472584.1423786004303.JavaMail.yahoo@mail.yahoo.com>
 <20150213022405.GI2498@ando.pearwood.info> <7262978700952906911@unknownmsgid>
 <20150213061904.GL2498@ando.pearwood.info> <3781943608761054828@unknownmsgid>
 <81E99F71-29C3-41CE-B53F-8D2102E689E4@yahoo.com>
 <-2223593909575252041@unknownmsgid>
 <54DEA296.2040906@canterbury.ac.nz>
Message-ID: <CALGmxEJOMGu-Pj09JvPECML4_6N4jxtEpDTpVWCLFsxSVKQg3w@mail.gmail.com>

On Fri, Feb 13, 2015 at 5:19 PM, Greg Ewing <greg.ewing at canterbury.ac.nz>
wrote:

> Chris Barker - NOAA Federal wrote:
>
>> But why does it work
>> that way? Because it needs to work on immutable objects. If it didn't,
>> then you wouldn't need the "assign back to the original name" step.
>>
>
> This doesn't make it wrong for in-place operators to
> work on immutable objects. There are two distinct use
> cases:
>
> 1) You want to update a mutable object in-place.
>

which is what I think is the "real" use case. ;-)

> 2) The LHS is a complex expression that you only want
>    to write out and evaluate once.
>

Can you give an example of this? How can you put an expression on the
left-hand side? I seem to only get errors:

In [23]: l1
Out[23]: [1]

In [24]: l2
Out[24]: [2]

In [25]: (l1 + l2) += [3]
  File "<ipython-input-25-b8781c271c74>", line 1
    (l1 + l2) += [3]
SyntaxError: can't assign to operator

which makes sense -- the LHS is an expression that results in a list, but
+= is trying to assign to that object. How can there be anything other than
a single object on the LHS?

In fact, if this worked like I expect and want it to, that would work.
But the fact that we've mixed in-place operation with "shorthand for
operation plus assignment" makes this a mess.

> There are ways that the tuple problem could be fixed, such
> as skipping the assignment if __iadd__ returns the same
> object. But that would be a backwards-incompatible change,
> since there could be code that relies on the assignment
> always happening.


I wonder if there are many real use cases of that -- do people really
write:

In [28]: try:
   ....:     t[0] += [4]
   ....: except TypeError:
   ....:     pass

It seems making that change would let code pass silently that would
otherwise raise an error.

Or, I suppose, the equivalent of that try/except could be built into
augmented assignment.
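
The case in question can be sketched concretely (a minimal illustration of
the well-known tuple quirk; the values are made up):

```python
t = ([1, 2, 3],)          # a mutable list inside an immutable tuple

try:
    t[0] += [4]           # list.__iadd__ mutates the list in place, then the
    pass                  # tuple item assignment raises TypeError
except TypeError:
    pass

print(t)                  # the list was extended anyway: ([1, 2, 3, 4],)
```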


> If it's in-place for a mutable object, it needs to return self. But
>> the python standard practice is that methods that mutate objects
>> shouldn't return self ( like list.sort() ) for instance.
>>
>
> The reason for that is to catch the mistake of using a
> mutating method when you meant to use a non-mutating one.
> That doesn't apply to __iadd__, because you don't
> usually call it yourself.


Sure -- but my point is that, at least by convention, we keep "mutating an
object" and "creating a new object" clearly distinct -- but not in this
case.

I'm pretty convinced, based on my memory of the conversation when this was
added, that this was a case of some people wanting shorthand like:

i += 1

and others wanted a way to express in-place operations conveniently:

array += 1

And this was seen as a way to kill two birds with one stone -- and that has
led to this confusing behavior.

And BOTH are simply syntactic sugar:

i += 1 ==> i = i+1

and

array += 1 ==> np.add(array, 1, out=array)

I would argue that the second is a major win, and the first only a minor
win.

(And yes, I did write a bunch of ugly code that looks like the second line
before augmented assignment existed.)
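
The mutate-vs-rebind distinction shows up with plain lists, too (a sketch;
the numpy line above is the same idea applied to arrays):

```python
l = [1, 2]
alias = l
l += [3]                 # in-place: list.__iadd__ mutates; the alias sees it
assert alias is l and alias == [1, 2, 3]

m = [1, 2]
alias_m = m
m = m + [3]              # rebinding: builds a new list; alias_m is untouched
assert alias_m == [1, 2] and m is not alias_m
```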

But this is all moot -- the cat is well out of the bag on this.

Though if we could clean it up a bit, that would be nice.

-Chris







>
> --
> Greg
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>



-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150213/6abecdcc/attachment-0001.html>

From chris.barker at noaa.gov  Sat Feb 14 06:16:41 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Fri, 13 Feb 2015 21:16:41 -0800
Subject: [Python-ideas] Fwd:  Adding "+" and "+=" operators to dict
In-Reply-To: <CALGmxELKihPUS-LjqxZwfKubjUD78ZdYs0CJjqErpPW4E+M6eQ@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <54DD1A0B.9000202@canterbury.ac.nz> <mbkilf$mie$1@ger.gmane.org>
 <54DE76BA.6060607@canterbury.ac.nz> <20150214030537.GT2498@ando.pearwood.info>
 <54DECFF2.6050804@canterbury.ac.nz>
 <CALGmxELKihPUS-LjqxZwfKubjUD78ZdYs0CJjqErpPW4E+M6eQ@mail.gmail.com>
Message-ID: <CALGmxE+S-h2jTvu4ttUc1giV7CLgLs4xuGTWiGbWMzKMz+5tiA@mail.gmail.com>

Oops,

Accidentally went off-list -- here it is:

On Fri, Feb 13, 2015 at 8:32 PM, Greg Ewing <greg.ewing at canterbury.ac.nz>
wrote:

> Steven D'Aprano wrote:
>
>> I think that focusing on the question of "which wins" in the case of
>> duplicate keys, the left or right operand, is not very productive. We can
>> just declare that Python semantics are that the last value seen wins,
>>
>
> Sure we can, but that's something to be learned. It's
> not going to be obvious to everyone.


no -- but some things are more obvious to more people than others.

dict1 + dict2

being the same as

dict1.update(dict2)

is pretty easy to explain, and probably the most likely to be intuitively
obvious to most people.
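
That equivalence can be written out as a tiny helper (hypothetical name
`dict_add`; copy-then-update, so the right-hand side wins on duplicate keys):

```python
def dict_add(d1, d2):
    """Hypothetical dict "+": copy d1, then update with d2 (rhs wins)."""
    result = d1.copy()
    result.update(d2)
    return result

dict1 = {'a': 1, 'b': 2}
dict2 = {'c': 3, 'b': 4}
assert dict_add(dict1, dict2) == {'a': 1, 'b': 4, 'c': 3}
```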

It might be interesting to put a poll on a more general list:

What do you expect the result of:


dict1 = {'a':1, 'b':2}
dict2 = {'c':3, 'b':4}

dict3 = dict1 + dict2

to be?

And see what a mixture of newbies and more experienced Pythonistas think.

This is a pretty classic usability issue -- "intuitive" means "works like
one expects" , but everyone expects different things, so different things
are intuitive to different people. So often there is NO ONE thing that is
most intuitive to everyone.

But the only way to know is some usability testing.

-Chris

-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov



-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150213/39a69daf/attachment.html>

From chris.barker at noaa.gov  Sat Feb 14 06:26:00 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Fri, 13 Feb 2015 21:26:00 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <54DED8EA.8030604@canterbury.ac.nz>
References: <CADQLQrUwRMdrLTJg7nEgtdDJ9wZzi4UjpM_9Uu6WxgeF-4K-8w@mail.gmail.com>
 <1586802961.2472584.1423786004303.JavaMail.yahoo@mail.yahoo.com>
 <20150213022405.GI2498@ando.pearwood.info> <7262978700952906911@unknownmsgid>
 <20150213061904.GL2498@ando.pearwood.info> <3781943608761054828@unknownmsgid>
 <81E99F71-29C3-41CE-B53F-8D2102E689E4@yahoo.com>
 <-2223593909575252041@unknownmsgid>
 <54DEA296.2040906@canterbury.ac.nz>
 <CALGmxEJOMGu-Pj09JvPECML4_6N4jxtEpDTpVWCLFsxSVKQg3w@mail.gmail.com>
 <54DED8EA.8030604@canterbury.ac.nz>
Message-ID: <CALGmxE+xhK+C8e=a+24Jwg2aFKP9kqjCeQvdC_G-1qKCO5zd7Q@mail.gmail.com>

On Fri, Feb 13, 2015 at 9:11 PM, Greg Ewing <greg.ewing at canterbury.ac.nz>
wrote:


>     2) The LHS is a complex expression that you only want
>>        to write out and evaluate once.
>>
>> Can you give an example of this? How can you put an expression on the
>> left-hand side?
>>
>
> myobj.dingle.dell[17] += 42


Hmmm -- I guess I need to look up "expression" -- I would have thought of
that as a single object -- but yes, name lookups can be expensive, so there
may be a win there. But:

In this case, this will only work if myobj.dingle.dell is a mutable object,
even if myobj.dingle.dell[17] is immutable. But if dell is immutable, it
won't work even if dell[17] is mutable. So we don't have the full win here.
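
A sketch of both cases (the `NS` class and all values here are made up for
illustration):

```python
class NS:
    """Hypothetical empty namespace object, standing in for myobj.dingle."""
    pass

myobj = NS()
myobj.dingle = NS()
myobj.dingle.dell = {17: 42}      # dell is mutable; dell[17] (an int) is not
myobj.dingle.dell[17] += 1        # fine: the dict absorbs the item assignment
assert myobj.dingle.dell[17] == 43

dell = tuple([i] for i in range(18))  # dell immutable; dell[17] (a list) mutable
try:
    dell[17] += [99]              # the list mutates, then the tuple item
except TypeError:                 # assignment raises TypeError
    pass
assert dell[17] == [17, 99]
```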

if += was only supported on mutables, then it would be the other way
around. Not sure if there is a clear win there.

But maybe it is possible to have a kind of hot fix that allows both.

-Chris



-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150213/66797fbe/attachment.html>

From rosuav at gmail.com  Sat Feb 14 06:28:36 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Sat, 14 Feb 2015 16:28:36 +1100
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CALGmxEJOMGu-Pj09JvPECML4_6N4jxtEpDTpVWCLFsxSVKQg3w@mail.gmail.com>
References: <CADQLQrUwRMdrLTJg7nEgtdDJ9wZzi4UjpM_9Uu6WxgeF-4K-8w@mail.gmail.com>
 <1586802961.2472584.1423786004303.JavaMail.yahoo@mail.yahoo.com>
 <20150213022405.GI2498@ando.pearwood.info>
 <7262978700952906911@unknownmsgid>
 <20150213061904.GL2498@ando.pearwood.info>
 <3781943608761054828@unknownmsgid>
 <81E99F71-29C3-41CE-B53F-8D2102E689E4@yahoo.com>
 <-2223593909575252041@unknownmsgid>
 <54DEA296.2040906@canterbury.ac.nz>
 <CALGmxEJOMGu-Pj09JvPECML4_6N4jxtEpDTpVWCLFsxSVKQg3w@mail.gmail.com>
Message-ID: <CAPTjJmpr7fhVoNLtKYnTCj3oQQz4aKjqktzg0r0pDuuewARaXQ@mail.gmail.com>

On Sat, Feb 14, 2015 at 3:38 PM, Chris Barker <chris.barker at noaa.gov> wrote:
> In [25]: (l1 + l2) += [3]
>   File "<ipython-input-25-b8781c271c74>", line 1
>     (l1 + l2) += [3]
> SyntaxError: can't assign to operator
>
> which makes sense -- the LHS is an expression that results in a list, but +=
> is trying to assign to that object. HOw can there be anything other than a
> single object on the LHS?

You'd have to subscript it or equivalent.

>>> from collections import defaultdict
>>> options = {}
>>> def get_option_mapping():
    mode = input("Pick an operational mode: ")
    if not mode: mode="default"
    if mode not in options: options[mode]=defaultdict(int)
    return options[mode]
>>> get_option_mapping()["width"] = 100
Pick an operational mode: foo
>>> options
{'foo': defaultdict(<class 'int'>, {'width': 100})}

Sure, you could assign the dict to a local name, but there's no need -
you can subscript the return value of a function, Python is not PHP.
(Though, to be fair, PHP did fix that a few years ago.) And if it
works with regular assignment, it ought to work with augmented... and
it does:

>>> get_option_mapping()["width"] += 10
Pick an operational mode: foo
>>> get_option_mapping()["width"] += 10
Pick an operational mode: bar
>>> options
{'foo': defaultdict(<class 'int'>, {'width': 110}), 'bar':
defaultdict(<class 'int'>, {'width': 10})}

The original function gets called exactly once, and then the augmented
assignment is done using the resulting dict.

ChrisA

From stephen at xemacs.org  Sat Feb 14 06:34:49 2015
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Sat, 14 Feb 2015 14:34:49 +0900
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CALGmxELbw0BttRAPBJysVASUgHVVHLd1ydMVsiihfF2cm2kfZg@mail.gmail.com>
References: <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <87twyrm2zx.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CAOvn4qgRm1H+CCaY=H_FsGSgMAWq1E7U_KXbAWvzi2YfF45Y1A@mail.gmail.com>
 <CALFfu7B_JnZTBhW5Ym8ZZP4hbp7JBJspN-ZOj9EMU6MmhToE8A@mail.gmail.com>
 <CALFfu7AEGkQD4J6JfKYmfkMjPBXpy68mDqwsqkOEyyWjnPqvCQ@mail.gmail.com>
 <20150213024521.GJ2498@ando.pearwood.info>
 <CALGmxELbw0BttRAPBJysVASUgHVVHLd1ydMVsiihfF2cm2kfZg@mail.gmail.com>
Message-ID: <877fvlm4xy.fsf@uwakimon.sk.tsukuba.ac.jp>

Chris Barker writes:

 > I'm curious what the aversion is to having an operator

Speaking only for myself, I see mathematical notation as appropriate
for abstractions, and prefer to strengthen TOOWDTI from "preferably"
to "there's *only* one obvious way to do it" in evaluating proposals
to introduce such notation.  This is because mathematical syntax is
deliberately empty of semantics.  Context must determine the
interpretation, or mathematical notation is obfuscation, not
clarification.


From chris.barker at noaa.gov  Sat Feb 14 06:45:18 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Fri, 13 Feb 2015 21:45:18 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CAP7+vJK0=DgruPpx6eJGKnB4fQOhmkJUN3Mt-egRR+Z=WCA8aA@mail.gmail.com>
References: <CALGmxELKJ7Lyahk__97+ANf_A+jf_cw3d1X0vt1j9NbyEgNs0Q@mail.gmail.com>
 <CAPJVwBkiU8zbYfTwmoZL80mAZ5guJGcgziZdTOQL8JuA_3s4WA@mail.gmail.com>
 <CADiSq7eokB6tFTAt8J710NPKuvzdgu7Cg4ewg0fgZ4czDxNtpg@mail.gmail.com>
 <20150125130326.GG20517@ando.pearwood.info>
 <CALGmxEJ+GxY8vjJ-ciS00WwhcFNd1xd1A+6XTiZm9aSozjSUow@mail.gmail.com>
 <20150126063932.GI20517@ando.pearwood.info>
 <CACac1F9LRN+hRv066DxTsjhwwFYCx0tvngsO46hBRGkc-=_zug@mail.gmail.com>
 <20150127030800.GQ20517@ando.pearwood.info>
 <CAJS_LCX_KpDsfF1zaYiLaDnP9uYNZrUDysr=6Vh6PRajwR=9hg@mail.gmail.com>
 <CALGmxEJggLB4steNVPF060Ou-BV6PxhSuA_dbBHOHWivhfe=8Q@mail.gmail.com>
 <CAP7+vJKHtYkzGaWkNBEmRMgGQR=KLvTWOn14oosGVaJ5bB5+fw@mail.gmail.com>
 <CALGmxE+eeSoEqfGrfjwPdXtdciXXOTtibs6VnoXQdxAPmnBGDw@mail.gmail.com>
 <CAP7+vJK0=DgruPpx6eJGKnB4fQOhmkJUN3Mt-egRR+Z=WCA8aA@mail.gmail.com>
Message-ID: <CALGmxE+wS7r29h37SHsaoy24Wmvp9ZuGPnKhcFbfvkjGEOKmqw@mail.gmail.com>

On Fri, Feb 13, 2015 at 1:55 PM, Guido van Rossum <guido at python.org> wrote:

> I see no roadblocks but have run out of time to review the PEP one more
> time. I'm going on vacation for a week or so, maybe I'll find time, if not
> I'll start reviewing this around Feb 23.
>

Sounds good. In the meantime, I welcome typo corrections, clarifying text,
etc.

-Chris




> On Fri, Feb 13, 2015 at 10:10 AM, Chris Barker <chris.barker at noaa.gov>
> wrote:
>
>> On Fri, Feb 13, 2015 at 9:35 AM, Guido van Rossum <guido at python.org>
>> wrote:
>>
>>> Is the PEP ready for pronouncement now?
>>>
>>
>> I think so -- I've addressed (one way or another) everything brought up
>> here.
>>
>> The proposed implementation needs a bit more work, but that's, well,
>> implementation. ;-)
>>
>> -Chris
>>
>>
>>
>> --
>>
>> Christopher Barker, Ph.D.
>> Oceanographer
>>
>> Emergency Response Division
>> NOAA/NOS/OR&R            (206) 526-6959   voice
>> 7600 Sand Point Way NE   (206) 526-6329   fax
>> Seattle, WA  98115       (206) 526-6317   main reception
>>
>> Chris.Barker at noaa.gov
>>
>
>
>
> --
> --Guido van Rossum (python.org/~guido)
>



-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150213/9ae8d433/attachment-0001.html>

From stephen at xemacs.org  Sat Feb 14 07:09:40 2015
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Sat, 14 Feb 2015 15:09:40 +0900
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CALGmxELUoATTLamXggBn4dk-45U5mrr83qRjKf46d5oNKv=r7Q@mail.gmail.com>
References: <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <87twyrm2zx.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CAOvn4qgRm1H+CCaY=H_FsGSgMAWq1E7U_KXbAWvzi2YfF45Y1A@mail.gmail.com>
 <CALFfu7B_JnZTBhW5Ym8ZZP4hbp7JBJspN-ZOj9EMU6MmhToE8A@mail.gmail.com>
 <CALFfu7AEGkQD4J6JfKYmfkMjPBXpy68mDqwsqkOEyyWjnPqvCQ@mail.gmail.com>
 <20150213024521.GJ2498@ando.pearwood.info>
 <CALGmxELbw0BttRAPBJysVASUgHVVHLd1ydMVsiihfF2cm2kfZg@mail.gmail.com>
 <20150213065635.GO2498@ando.pearwood.info>
 <CALGmxELUoATTLamXggBn4dk-45U5mrr83qRjKf46d5oNKv=r7Q@mail.gmail.com>
Message-ID: <8761b5m3bv.fsf@uwakimon.sk.tsukuba.ac.jp>

Chris Barker writes:

 > merging two dicts certainly is _something_ like addition.

But you can't say what that something is with precision, even if you
abstract, and nobody contests that different applications of dicts
have naively *different*, and in implementation incompatible,
"somethings".

So AFAICS you have to fall back on "my editor forces me to type all
the characters, so let's choose the interpretation I use most often
so I can save some typing without suffering too much cognitive
dissonance."

Following up the off-topic comment:

 > [In numpy,] we really want readable code:
 > 
 > y = a*x**2 + b*x + c
 > 
 > really reads well, but it does create a lot of temporaries that kill
 > performance for large arrays. You can optimize that by hand by doing
 > something like:
 > 
 > y = x**2
 > y *= a
 > y += b*x
 > y += c

Compilers can optimize such things very well, too.  I would think that
a generic optimization to the compiled equivalent of

    try:
        y = x**2
        y *= a
        y += b*x
        y += c
    except UnimplementedError:
        y = a*x**2 + b*x + c

would be easy to do, possibly controlled by a pragma (I know Guido
doesn't like those, but perhaps an extension PEP 484 "Type Hints"
could help here).


From stephen at xemacs.org  Sat Feb 14 07:16:32 2015
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Sat, 14 Feb 2015 15:16:32 +0900
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <7262978700952906911@unknownmsgid>
References: <CADQLQrUwRMdrLTJg7nEgtdDJ9wZzi4UjpM_9Uu6WxgeF-4K-8w@mail.gmail.com>
 <1586802961.2472584.1423786004303.JavaMail.yahoo@mail.yahoo.com>
 <20150213022405.GI2498@ando.pearwood.info>
 <7262978700952906911@unknownmsgid>
Message-ID: <874mqpm30f.fsf@uwakimon.sk.tsukuba.ac.jp>

Chris Barker - NOAA Federal writes:

 > > avoids any confusion over operators and having += duplicating the
 > > update method.
 > 
 > += duplicates the extend method on lists.
 > 
 > And it's really redundant for numbers, too:
 > 
 > x += y
 > 
 > x = x + y
 > 
 > So plenty of precedent.

Except for the historical detail that Guido dislikes them!  (Or did.)
For a long time he resisted the augmented assignment operators,
insisting that

    x += 1

is best spelled

    x = x + 1

I forget whether he ever said "if you're worried about inefficiency of
temp creation, figure out how to optimize the latter".


From donald at stufft.io  Sat Feb 14 07:17:15 2015
From: donald at stufft.io (Donald Stufft)
Date: Sat, 14 Feb 2015 01:17:15 -0500
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <8761b5m3bv.fsf@uwakimon.sk.tsukuba.ac.jp>
References: <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <87twyrm2zx.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CAOvn4qgRm1H+CCaY=H_FsGSgMAWq1E7U_KXbAWvzi2YfF45Y1A@mail.gmail.com>
 <CALFfu7B_JnZTBhW5Ym8ZZP4hbp7JBJspN-ZOj9EMU6MmhToE8A@mail.gmail.com>
 <CALFfu7AEGkQD4J6JfKYmfkMjPBXpy68mDqwsqkOEyyWjnPqvCQ@mail.gmail.com>
 <20150213024521.GJ2498@ando.pearwood.info>
 <CALGmxELbw0BttRAPBJysVASUgHVVHLd1ydMVsiihfF2cm2kfZg@mail.gmail.com>
 <20150213065635.GO2498@ando.pearwood.info>
 <CALGmxELUoATTLamXggBn4dk-45U5mrr83qRjKf46d5oNKv=r7Q@mail.gmail.com>
 <8761b5m3bv.fsf@uwakimon.sk.tsukuba.ac.jp>
Message-ID: <1BDF882C-0167-40B0-AAD4-DE2BBF07D2C5@stufft.io>


> On Feb 14, 2015, at 1:09 AM, Stephen J. Turnbull <stephen at xemacs.org> wrote:
> 
> Chris Barker writes:
> 
>> merging two dicts certainly is _something_ like addition.
> 
> But you can't say what that something is with precision, even if you
> abstract, and nobody contests that different applications of dicts
> have naively *different*, and in implementation incompatible,
> "somethings".
> 
> So AFAICS you have to fall back on "my editor forces me to type all
> the characters, so let's choose the interpretation I use most often
> so I can save some typing without suffering too much cognitive
> dissonance."
> 

Or we can choose the interpretation that has already been chosen by
multiple locations within Python. That keys are replaced and rhs
wins. This is consistent with basically every location in the stdlib
and Python core where two dicts get combined in some fashion other than
specialized subclasses.


    >>> a = {1: True, 2: True}
    >>> b = {2: False, 3: False}
    >>> dict(list(a.items()) + list(b.items()))
    {1: True, 2: False, 3: False}

    And

    >>> a = {1: True, 2: True}
    >>> b = {2: False, 3: False}
    >>> c = a.copy()
    >>> c.update(b)
    >>> c
    {1: True, 2: False, 3: False}

    And

    >>> {1: True, 2: True, 2: False, 3: False}
    {1: True, 2: False, 3: False}

    And

    >>> a = {"a": True, "b": True}
    >>> b = {"b": False, "c": False}
    >>> dict(a, **b)
    {'b': False, 'c': False, 'a': True}


---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 801 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150214/c244e80e/attachment.sig>

From neatnate at gmail.com  Sat Feb 14 07:23:46 2015
From: neatnate at gmail.com (Nathan Schneider)
Date: Sat, 14 Feb 2015 01:23:46 -0500
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
Message-ID: <CADQLQrWkgOWNM8v9pX_SmyuEQBdAZbCbHXQLE==A1WHu3emamw@mail.gmail.com>

On Sat, Feb 14, 2015 at 1:09 AM, Stephen J. Turnbull <stephen at xemacs.org>
wrote:

> Chris Barker writes:
>
>  > merging two dicts certainly is _something_ like addition.
>
>
So is set union (merging two sets). But in Python, binary + is used mainly
for

  - numerical addition (and element-wise numerical addition, in Counter),
and
  - sequence concatenation.

Being unordered, dicts are more like sets than sequences, so the idea of
supporting dict1 + dict2 but not set1 + set2 makes me queasy.
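
For comparison, sets already spell merging with `|` and deliberately reject
`+` (a quick check):

```python
s1 = {1, 2}
s2 = {2, 3}
assert s1 | s2 == {1, 2, 3}   # set union is spelled |

try:
    s1 + s2                   # + is intentionally unsupported for sets
except TypeError:
    pass
else:
    raise AssertionError("sets should not support +")
```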

Cheers,
Nathan



> But you can't say what that something is with precision, even if you
> abstract, and nobody contests that different applications of dicts
> have naively *different*, and in implementation incompatible,
> "somethings".
>
> So AFAICS you have to fall back on "my editor forces me to type all
> the characters, so let's choose the interpretation I use most often
> so I can save some typing without suffering too much cognitive
> dissonance."
>
> Following up the off-topic comment:
>
>  > [In numpy,] we really want readable code:
>  >
>  > y = a*x**2 + b*x + c
>  >
>  > really reads well, but it does create a lot of temporaries that kill
>  > performance for large arrays. You can optimize that by hand by doing
>  > something like:
>  >
>  > y = x**2
>  > y *= a
>  > y += b*x
>  > y += c
>
> Compilers can optimize such things very well, too.  I would think that
> a generic optimization to the compiled equivalent of
>
>     try:
>         y = x**2
>         y *= a
>         y += b*x
>         y += c
>     except UnimplementedError:
>         y = a*x**2 + b*x + c
>
> would be easy to do, possibly controlled by a pragma (I know Guido
> doesn't like those, but perhaps an extension PEP 484 "Type Hints"
> could help here).
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150214/5d0d7d79/attachment.html>

From __peter__ at web.de  Sat Feb 14 08:44:24 2015
From: __peter__ at web.de (Peter Otten)
Date: Sat, 14 Feb 2015 08:44:24 +0100
Subject: [Python-ideas] Namespace creation with syntax short form
References: <CAOtYm-Hx+9K-w3PzxxOtoQcE9OQOWvg9mYVfc3diBnAWxAkYGA@mail.gmail.com>
 <20150213104159.GQ2498@ando.pearwood.info> <mbkpa6$63l$1@ger.gmane.org>
 <D920348D-B937-4738-9FC8-39C211B1A143@yahoo.com> <mbktnl$mrl$1@ger.gmane.org>
 <0F99BA73-4206-4134-A2C2-E10A9A6B6777@yahoo.com>
Message-ID: <mbmucp$k0q$1@ger.gmane.org>

Andrew Barnert wrote:

> How does this help when you started with a 3-tuple and realize you need a
> 4th value, ala stat? Since this is your motivating example, if the
> proposal doesn't help that example, what's the point?

It doesn't help with existing return values; it offers an alternative, 
similar to keyword-only function arguments.


From lkb.teichmann at gmail.com  Sat Feb 14 10:34:48 2015
From: lkb.teichmann at gmail.com (Martin Teichmann)
Date: Sat, 14 Feb 2015 10:34:48 +0100
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class algebra)
Message-ID: <CAK9R32Sq0z+E1W5e3KK=N2UbPC2e-eVqF_kxWX7yT=YfzHYWmg@mail.gmail.com>

Hi everyone,

there seems to be general agreement that metaclasses,
in their current state, are very problematic. As to what the
actual problem is there seems to be no agreement, but
that there IS a problem seems to be accepted.

Where do we go from here? One great idea is PEP 422.
It just replaces the idea of metaclasses with something simpler.
One point of critique is that it will take ages until people can
actually use it. In order to avoid this, I wrote a pure python
implementation of PEP 422 that works all the way down to
python 2.7. If everyone simply started using this, we could have
a rather smooth transition.

I think that implementing PEP 422 as part of the language only
makes sense if we will eventually be able to drop metaclasses
altogether. I thought about adding a __new_class__ to PEP 422
that would simulate the __new__ in metaclasses, thinking that
this is the only way metaclasses are used.

I was wrong. Looking at PyQt (to be precise: sip), I realized that
it uses more: it overwrites get/setattr. That's actually a good
usecase that cannot be replaced by PEP 422.
(Sadly, in the case of sip, this is technically not a good usecase:
it is apparently used to give the possibility to write mixin classes
to the right of the mixed-in class in the inheritance chain. Could
someone please convince Phil of PyQt that mixins go to the
left of other base classes?)

The upshot is: my solution sketched above simply does not work
for C extensions that use metaclasses.

This brings me to a completely different option: why don't we
implement PEP 422 in pure python, and put it in the standard
library? Then everyone writing some simple class initializer
would just use that standard metaclass, and so you can use
multiple inheritance and initializers, as all bases will use the same
standard metaclass. Someone writing more complicated
metaclass would inherit from said standard metaclass, if it
makes sense to also have class initializers. For C extensions:
is it possible as a C metaclass to inherit from a python base
class?

I added my pure python implementation of PEP 422. It is
written to be backwards compatible, so a standard library
version could be simplified. A class wanting to have initializers
should simply inherit from WithInit in my code.

Greetings

Martin

class Meta(type):
    @classmethod
    def __prepare__(cls, name, bases, namespace=None, **kwds):
        if namespace is not None:
            cls.__namespace__ = namespace
        if hasattr(cls, '__namespace__'):
            return cls.__namespace__()
        else:
            return super().__prepare__(name, bases, **kwds)

    def __new__(cls, name, bases, dict, **kwds):
        if '__init_class__' in dict:
            dict['__init_class__'] = classmethod(dict['__init_class__'])
        return super(Meta, cls).__new__(cls, name, bases, dict)

    def __init__(self, name, bases, dict, **kwds):
        self.__init_class__(**kwds)


def __init_class__(cls, **kwds):
    pass

WithInit = Meta("WithInit", (object,), dict(__init_class__=__init_class__))

# black magic: assure everyone is using the same WithInit
import sys
if hasattr(sys, "WithInit"):
    WithInit = sys.WithInit
else:
    sys.WithInit = WithInit
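
A condensed, Python-3-only sketch of the same idea (simplified from the code
above; `Plugin` and `registry` are made-up illustrations, and the keyword and
__prepare__ handling are omitted):

```python
class Meta(type):
    """Call __init_class__ once, at class-creation time (PEP 422 style)."""
    def __init__(cls, name, bases, ns, **kwds):
        super().__init__(name, bases, ns)
        cls.__init_class__(**kwds)

class WithInit(metaclass=Meta):
    @classmethod
    def __init_class__(cls, **kwds):
        pass  # default: do nothing

registry = []

class Plugin(WithInit):
    @classmethod
    def __init_class__(cls, **kwds):
        super().__init_class__(**kwds)
        registry.append(cls.__name__)  # runs when the class is created

print(registry)  # ['Plugin']
```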

From stephen at xemacs.org  Sat Feb 14 10:39:39 2015
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Sat, 14 Feb 2015 18:39:39 +0900
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <1BDF882C-0167-40B0-AAD4-DE2BBF07D2C5@stufft.io>
References: <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <87twyrm2zx.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CAOvn4qgRm1H+CCaY=H_FsGSgMAWq1E7U_KXbAWvzi2YfF45Y1A@mail.gmail.com>
 <CALFfu7B_JnZTBhW5Ym8ZZP4hbp7JBJspN-ZOj9EMU6MmhToE8A@mail.gmail.com>
 <CALFfu7AEGkQD4J6JfKYmfkMjPBXpy68mDqwsqkOEyyWjnPqvCQ@mail.gmail.com>
 <20150213024521.GJ2498@ando.pearwood.info>
 <CALGmxELbw0BttRAPBJysVASUgHVVHLd1ydMVsiihfF2cm2kfZg@mail.gmail.com>
 <20150213065635.GO2498@ando.pearwood.info>
 <CALGmxELUoATTLamXggBn4dk-45U5mrr83qRjKf46d5oNKv=r7Q@mail.gmail.com>
 <8761b5m3bv.fsf@uwakimon.sk.tsukuba.ac.jp>
 <1BDF882C-0167-40B0-AAD4-DE2BBF07D2C5@stufft.io>
Message-ID: <871tlsn86c.fsf@uwakimon.sk.tsukuba.ac.jp>

Donald Stufft writes:

 > Or we can choose the interpretation that has already been chosen by
 > multiple locations within Python. That keys are replaced and rhs
 > wins. This is consistent with basically every location in the stdlib
 > and Python core where two dicts get combined in some fashion other than
 > specialized subclasses.

Sure, nobody contests that we *can*.  To what benefit?  My contention
is that even the strongest advocates haven't come up with anything
stronger than "saves typing characters".

True, "dict = dict1 + dict2 + dict3" has a certain amount of visual
appeal vs. the multiline version, but you're not going to go beyond
that and write "dict = (dict1 + dict2)/2", or "dict = dict1 * dict2",
certainly not with "rightmost item wins" semantics.  So the real
competition for typing ease and readability is

    dict = merged(dict1, dict2, dict3)

and I see no advantage over the function version unless you think that
everything should be written using pseudo-algebraic operators when
possible.  And the operator form is inefficient for large dictionaries
(I admit I don't know of any applications where I'd care).
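For concreteness, a hypothetical merged() with those semantics (a sketch, not an existing stdlib function) could be written as:

```python
# Hypothetical merged() helper: copy-and-update with dict.update
# semantics, i.e. the rightmost occurrence of a key wins.
def merged(*dicts):
    result = {}
    for d in dicts:
        result.update(d)  # later dicts overwrite earlier keys
    return result

merged({"a": 1}, {"a": 2, "b": 3}, {"b": 4})  # {"a": 2, "b": 4}
```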

From encukou at gmail.com  Sat Feb 14 11:23:44 2015
From: encukou at gmail.com (Petr Viktorin)
Date: Sat, 14 Feb 2015 11:23:44 +0100
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
In-Reply-To: <CAK9R32Sq0z+E1W5e3KK=N2UbPC2e-eVqF_kxWX7yT=YfzHYWmg@mail.gmail.com>
References: <CAK9R32Sq0z+E1W5e3KK=N2UbPC2e-eVqF_kxWX7yT=YfzHYWmg@mail.gmail.com>
Message-ID: <CA+=+wqDMrXzPaGuAc0orjUO5x86SqnFh13a7b7999amQFh3q5A@mail.gmail.com>

On Sat, Feb 14, 2015 at 10:34 AM, Martin Teichmann
<lkb.teichmann at gmail.com> wrote:
> Hi everyone,
>
> there seems to be a general agreement that metaclasses,
> in the current state, are very problematic. As to what the
> actual problem is there seems to be no agreement, but
> that there IS a problem seems to be accepted.
>
> Where do we go from here? One great idea is PEP 422.
> It just replaces the idea of metaclasses with something simpler.
> One point of critique is that it will take ages until people can
> actually use it. In order to avoid this, I wrote a pure python
> implementation of PEP 422 that works all the way down to
> python 2.7. If everyone simply started using this, we could have
> a rather smooth transition.

"Everyone" here means authors of every library that uses metaclasses, right?
I think you'll have a better chance convincing Python developers.
They're a subset :)

> I think that implementing PEP 422 as part of the language only
> makes sense if we would one day be able to drop metaclasses
> altogether. I thought about adding a __new_class__ to PEP 422
> that would simulate the __new__ in metaclasses, thinking that
> this is the only way metaclasses are used.

Well, if you want Python to drop metaclasses, the way starts with PEP
422. You have to introduce the alternative first, and then wait a
*really* long time until you drop a feature.
I think __new_class__ can be added after PEP 422 is in, if it turns
out to be necessary.

> I was wrong. Looking at PyQt (to be precise: sip), I realized that
> it uses more: it overwrites get/setattr. That's actually a good
> usecase that cannot be replaced by PEP 422.
> (Sadly, in the case of sip, this is technically not a good usecase:
> it is apparently used to give the possibility to write mixin classes
> to the right of the mixed-in class in the inheritance chain. Could
> someone please convince Phil of PyQt that mixins go to the
> left of other base classes?)
>
> The upshot is: my solution sketched above simply does not work
> for C extensions that use metaclasses.
>
> This brings me to a completely different option: why don't we
> implement PEP 422 in pure python, and put it in the standard
> library? Then everyone writing some simple class initializer
> would just use that standard metaclass, and so you can use
> multiple inheritance and initializers, as all bases will use the same
> standard metaclass. Someone writing a more complicated
> metaclass would inherit from said standard metaclass, if it
> makes sense to also have class initializers.

Just putting it in the standard library doesn't make sense, since it
would still only be available there from Python 3.5 on (or whenever it
gets in).
It really makes more sense to put this into the *real* standard
metaclass (i.e. `type`).
The Python implementation can be on PyPI for projects needing the
backwards compatibility.

> For C extensions:
> is it possible as a C metaclass to inherit from a python base
> class?

Not really, but you can rewrite the metaclass as a C extension (maybe
after the Python variant is ironed out).

> I added my pure python implementation of PEP 422. It is
> written to be backwards compatible, so a standard library
> version could be simplified. A class wanting to have initializers
> should simply inherit from WithInit in my code.
>
> Greetings
>
> Martin
>
> class Meta(type):
>     @classmethod
>     def __prepare__(cls, name, bases, namespace=None, **kwds):
>         if namespace is not None:
>             cls.__namespace__ = namespace
>         if hasattr(cls, '__namespace__'):
>             return cls.__namespace__()
>         else:
>             return super().__prepare__(name, bases, **kwds)
>
>     def __new__(cls, name, bases, dict, **kwds):
>         if '__init_class__' in dict:
>             dict['__init_class__'] = classmethod(dict['__init_class__'])
>         return super(Meta, cls).__new__(cls, name, bases, dict)
>
>     def __init__(self, name, bases, dict, **kwds):
>         self.__init_class__(**kwds)
>
>
> def __init_class__(cls, **kwds):
>     pass
>
> WithInit = Meta("WithInit", (object,), dict(__init_class__=__init_class__))
>
> # black magic: assure everyone is using the same WithInit
> import sys
> if hasattr(sys, "WithInit"):
>     WithInit = sys.WithInit
> else:
>     sys.WithInit = WithInit

I haven't studied the PEP in detail, but I'm not sure this adheres to
it; see https://www.python.org/dev/peps/pep-0422/#calling-init-class-from-type-init
I suggest adapting the tests from http://bugs.python.org/issue17044 for this.

From lkb.teichmann at gmail.com  Sat Feb 14 12:18:03 2015
From: lkb.teichmann at gmail.com (Martin Teichmann)
Date: Sat, 14 Feb 2015 12:18:03 +0100
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
In-Reply-To: <CA+=+wqDMrXzPaGuAc0orjUO5x86SqnFh13a7b7999amQFh3q5A@mail.gmail.com>
References: <CAK9R32Sq0z+E1W5e3KK=N2UbPC2e-eVqF_kxWX7yT=YfzHYWmg@mail.gmail.com>
 <CA+=+wqDMrXzPaGuAc0orjUO5x86SqnFh13a7b7999amQFh3q5A@mail.gmail.com>
Message-ID: <CAK9R32TKU-ESP3Q-sMerPbd4LyEjvq-amMiCBLRnSV53OJ2juQ@mail.gmail.com>

Hi Petr, Hi all,

> Just putting it in the standard library doesn't make sense, since it
> would still only be available there from Python 3.5 on (or whenever it
> gets in).
> It really makes more sense to put this into the *real* standard
> metaclass (i.e. `type`).
> The Python implementation can be on PyPI for projects needing the
> backwards compatibility.

That was what I was thinking about. Put the python implementation
on PyPI, so everyone can use it, and finally put the thing in the
standard library.

I actually prefer the standard library over adding it to type, it's
much less of a hassle. Given that there are use cases for metaclasses
not covered by PEP 422, I guess they won't ever be dropped,
so having two complicated things in C is not such a great idea in
my opinion if we can easily write one of them in Python.

> Not really, but you can rewrite the metaclass as a C extension (maybe
> after the Python variant is ironed out).

Well said, and yes, it needs ironing out. I just realized it doesn't work,
since I'm calling __init_class__ before its __class__ is set. But this
is a solvable problem: let's rewrite PEP 422 so that the initialization
is done on subclasses, not the class itself. This is actually a very
important use case anyway, and if you need your method to be
called on yourself, just call it after the class definition! One line of
code should not be such a big deal.

I also simplified the code, now requiring that one decorate __init_subclass__
with @classmethod.

Greetings

Martin

class Meta(type):
    @classmethod
    def __prepare__(cls, name, bases, namespace=None, **kwargs):
        if namespace is not None:
            cls.__namespace__ = namespace
        if hasattr(cls, '__namespace__'):
            return cls.__namespace__()
        else:
            return super().__prepare__(name, bases, **kwargs)

    def __new__(cls, name, bases, dict, **kwargs):
        return super(Meta, cls).__new__(cls, name, bases, dict)

    def __init__(self, name, bases, dict, namespace=None, **kwargs):
        super(self, self).__init_subclass__(**kwargs)


class Base(object):
    @classmethod
    def __init_subclass__(cls, **kwargs):
        pass

SubclassInit = Meta("SubclassInit", (Base,), {})
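The subclass-only initialization sketched here is essentially what later shipped in Python 3.6 as __init_subclass__ (PEP 487). A registry sketch in that built-in form, with illustrative class names:

```python
# Subclass-only initialization as standardized by PEP 487 (Python 3.6+):
# __init_subclass__ runs for every subclass of the defining class,
# but not for the defining class itself.
registry = []

class PluginBase:
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        registry.append(cls.__name__)

class A(PluginBase):
    pass

class B(A):
    pass

# registry == ["A", "B"]; PluginBase itself was not registered,
# which is exactly the behavior argued for in this thread.
```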

From encukou at gmail.com  Sat Feb 14 12:47:17 2015
From: encukou at gmail.com (Petr Viktorin)
Date: Sat, 14 Feb 2015 12:47:17 +0100
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
In-Reply-To: <CAK9R32TKU-ESP3Q-sMerPbd4LyEjvq-amMiCBLRnSV53OJ2juQ@mail.gmail.com>
References: <CAK9R32Sq0z+E1W5e3KK=N2UbPC2e-eVqF_kxWX7yT=YfzHYWmg@mail.gmail.com>
 <CA+=+wqDMrXzPaGuAc0orjUO5x86SqnFh13a7b7999amQFh3q5A@mail.gmail.com>
 <CAK9R32TKU-ESP3Q-sMerPbd4LyEjvq-amMiCBLRnSV53OJ2juQ@mail.gmail.com>
Message-ID: <CA+=+wqAUMQEXRRt7TKjUGUSeJ9bv=jq7cCQbu-dsAR7mqUhk-A@mail.gmail.com>

On Sat, Feb 14, 2015 at 12:18 PM, Martin Teichmann
<lkb.teichmann at gmail.com> wrote:
> Hi Petr, Hi all,
>
>> Just putting it in the standard library doesn't make sense, since it
>> would still only be available there from Python 3.5 on (or whenever it
>> gets in).
>> It really makes more sense to put this into the *real* standard
>> metaclass (i.e. `type`).
>> The Python implementation can be on PyPI for projects needing the
>> backwards compatibility.
>
> That was what I was thinking about. Put the python implementation
> on PyPI, so everyone can use it, and finally put the thing in the
> standard library.
>
> I actually prefer the standard library over adding it to type, it's
> much less of a hassle. Given that there are use cases for metaclasses
> not covered by PEP 422, I guess they won't ever be dropped,
> so having two complicated things in C is not such a great idea in
> my opinion if we can easily write one of them in Python.

No. For anyone to take this seriously, it needs to be what `type` itself does.
The PyPI library then can just do "WithInit = object" on new Python versions.
But that's beside the point right now; if you can write this then it
can be integrated either way.

Also, I'm now not sure how easily you can write this in Python, since
you're already starting to take shortcuts below.

>> Not really, but you can rewrite the metaclass as a C extension (maybe
>> after the Python variant is ironed out).
>
> Well said, and yes, it needs ironing out. I just realized it doesn't work
> since I'm calling __init_class__ before its __class__ is set. But this
> is a solvable problem: Let's rewrite PEP 422 so that the initialization
> is done on subclasses, not the class itself. This is actually a very
> important use case anyway, and if you need your method to be
> called on yourself, just call it after the class definition! One line of
> code should not be such a big deal.

-1, that would be quite surprising behavior.
If doing the right thing is really not possible in Python, then I
guess it does need to go to the C level.

> I also simplified the code, now requiring that one decorate __init_subclass__
> with @classmethod.

You're writing library code. A bit more complexity here is worth it.
See again the Rejected Design Options in the PEP.

> class Meta(type):
>     @classmethod
>     def __prepare__(cls, name, bases, namespace=None, **kwargs):
>         if namespace is not None:
>             cls.__namespace__ = namespace
>         if hasattr(cls, '__namespace__'):
>             return cls.__namespace__()
>         else:
>             return super().__prepare__(name, bases, **kwargs)
>
>     def __new__(cls, name, bases, dict, **kwargs):
>         return super(Meta, cls).__new__(cls, name, bases, dict)
>
>     def __init__(self, name, bases, dict, namespace=None, **kwargs):
>         super(self, self).__init_subclass__(**kwargs)
>
>
> class Base(object):
>     @classmethod
>     def __init_subclass__(cls, **kwargs):
>         pass
>
> SubclassInit = Meta("SubclassInit", (Base,), {})

I think it's time you put this in a repository; e-mailing successive
versions is inefficient.
Also, run the tests on it -- that might uncover more problems.

From steve at pearwood.info  Sat Feb 14 13:41:03 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Sat, 14 Feb 2015 23:41:03 +1100
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <54DECFF2.6050804@canterbury.ac.nz>
References: <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <54DD1A0B.9000202@canterbury.ac.nz> <mbkilf$mie$1@ger.gmane.org>
 <54DE76BA.6060607@canterbury.ac.nz>
 <20150214030537.GT2498@ando.pearwood.info>
 <54DECFF2.6050804@canterbury.ac.nz>
Message-ID: <20150214124103.GW2498@ando.pearwood.info>

On Sat, Feb 14, 2015 at 05:32:50PM +1300, Greg Ewing wrote:
> Steven D'Aprano wrote:
> >I think that focusing on the question of "which wins" in the case of 
> >duplicate keys, the left or right operand, is not very productive. We 
> >can just declare that Python semantics are that the last value seen 
> >wins,
> 
> Sure we can, but that's something to be learned. It's
> not going to be obvious to everyone.

*shrug*

You mean people have to learn what programming languages do before they 
can program? Say it isn't so! *wink*

dict.update() goes back to Python 1.5 and perhaps older. That horse has 
long since bolted. Let's just proclaim the argument about *semantics* 
settled, so we can concentrate on the really important part: arguing 
about the colour of the bike-shed. I mean, syntax/interface.

There are various semantics for merging/updating a mapping with 
another mapping. We've covered them repeatedly, I'm not going to repeat 
them here. But the most commonly used one, the most general one, is what 
dict.update does, and even though it is something that needs to be 
learned, it's pretty easy to learn because it matches the left-to-right 
behaviour of other parts of Python.
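Concretely, the copy-and-update behaviour under discussion is just:

```python
# dict.update applies keys left to right, so the last value seen wins;
# copy-and-update is the same operation on a fresh copy.
a = {"x": 1, "y": 2}
b = {"y": 20, "z": 30}

c = a.copy()
c.update(b)
# c == {"x": 1, "y": 20, "z": 30}, and a is unchanged
```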

So let's just agree that we're looking for a not-in-place equivalent to 
dict.update. Anyone who wants one of the less general versions can do 
what Counter does, and subclass.

Do we have consensus that copy-and-update should use the same semantics 
as dict.update?


-- 
Steven

From steve at pearwood.info  Sat Feb 14 13:44:27 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Sat, 14 Feb 2015 23:44:27 +1100
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <1BDF882C-0167-40B0-AAD4-DE2BBF07D2C5@stufft.io>
References: <87twyrm2zx.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CAOvn4qgRm1H+CCaY=H_FsGSgMAWq1E7U_KXbAWvzi2YfF45Y1A@mail.gmail.com>
 <CALFfu7B_JnZTBhW5Ym8ZZP4hbp7JBJspN-ZOj9EMU6MmhToE8A@mail.gmail.com>
 <CALFfu7AEGkQD4J6JfKYmfkMjPBXpy68mDqwsqkOEyyWjnPqvCQ@mail.gmail.com>
 <20150213024521.GJ2498@ando.pearwood.info>
 <CALGmxELbw0BttRAPBJysVASUgHVVHLd1ydMVsiihfF2cm2kfZg@mail.gmail.com>
 <20150213065635.GO2498@ando.pearwood.info>
 <CALGmxELUoATTLamXggBn4dk-45U5mrr83qRjKf46d5oNKv=r7Q@mail.gmail.com>
 <8761b5m3bv.fsf@uwakimon.sk.tsukuba.ac.jp>
 <1BDF882C-0167-40B0-AAD4-DE2BBF07D2C5@stufft.io>
Message-ID: <20150214124427.GX2498@ando.pearwood.info>

On Sat, Feb 14, 2015 at 01:17:15AM -0500, Donald Stufft wrote:

> Or we can choose the interpretation that has already been chosen by
> multiple locations within Python. That keys are replaced and rhs
> wins. This is consistent with basically every location in the stdlib
> and Python core where two dicts get combined in some fashion other than
> specialized subclasses.

I'll vote +1 on that if you promise not to use the + operator :-)


-- 
Steve

From rosuav at gmail.com  Sat Feb 14 13:49:04 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Sat, 14 Feb 2015 23:49:04 +1100
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <20150214124103.GW2498@ando.pearwood.info>
References: <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <54DD1A0B.9000202@canterbury.ac.nz> <mbkilf$mie$1@ger.gmane.org>
 <54DE76BA.6060607@canterbury.ac.nz>
 <20150214030537.GT2498@ando.pearwood.info>
 <54DECFF2.6050804@canterbury.ac.nz>
 <20150214124103.GW2498@ando.pearwood.info>
Message-ID: <CAPTjJmphByLih_BuZyerchCf2a=8DmzqkVzohkwKjOuwUSXXBA@mail.gmail.com>

On Sat, Feb 14, 2015 at 11:41 PM, Steven D'Aprano <steve at pearwood.info> wrote:
> Do we have consensus that copy-and-update should use the same semantics
> as dict.update?

Absolutely, from me at least.

ChrisA

From victor.stinner at gmail.com  Sat Feb 14 14:26:01 2015
From: victor.stinner at gmail.com (Victor Stinner)
Date: Sat, 14 Feb 2015 14:26:01 +0100
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CALGmxE+E0O5y4J-NqEQomt-THEVWoPcnJSXrRXJrn-sN6XQKRA@mail.gmail.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <20150206000438.1adeb478@fsol>
 <CALGmxEJXJQTGdpjttg2LNmjJVTrbuMW0bNiAC9-MFX65LJAORQ@mail.gmail.com>
 <20150206132852.2974b66c@fsol>
 <CACac1F8xBxEpSXU4EtqohJSXYKMbH_48gZo0=1XycwN8jD7ttw@mail.gmail.com>
 <CADiSq7du-M+tCTvaU0PdRCRN01x=2JMFS7Bw3Q13a4x0aSaY-w@mail.gmail.com>
 <CAEbHw4bJOz1w82=Nt3ZU=3o4h-fx9FKzE+dfS4ng=N2-rNLoKQ@mail.gmail.com>
 <CALGmxE+E0O5y4J-NqEQomt-THEVWoPcnJSXrRXJrn-sN6XQKRA@mail.gmail.com>
Message-ID: <CAMpsgwbCeHW8eHA+EidNDVB9tE7EKFJESBEocy2jK2okwaR6pA@mail.gmail.com>

Hi,

Do you propose a builtin function? I would prefer to put it in the math
module.

You may add a new method to unittest.TestCase? It would show all parameters
on error. .assertTrue(isclose(...)) doesn't help on error.

Victor

From drekin at gmail.com  Sat Feb 14 15:58:30 2015
From: drekin at gmail.com (drekin at gmail.com)
Date: Sat, 14 Feb 2015 06:58:30 -0800 (PST)
Subject: [Python-ideas] A (meta)class algebra
In-Reply-To: <CAK9R32RL2vP1uF+BR7XUTaaVBekMyvnED=i63XAmwjthTKf4tw@mail.gmail.com>
Message-ID: <54df6296.0969b40a.258f.08aa@mx.google.com>

Hello,

note that if A is a class with metaclass MA and B a class with metaclass MB, and you want Python to compute the combined metaclass as MA + MB when class C(A, B) is being created, the __add__ method shouldn't be defined on MA itself but on its class, i.e. on a meta-metaclass.
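A sketch of this point; the MetaMeta name and the combination rule are illustrative. Python resolves "+" on type(MA), so __add__ has to live one level up:

```python
# "MA + MB" is an operation on the classes MA and MB themselves, so
# Python looks __add__ up on type(MA): a meta-metaclass.
class MetaMeta(type):
    def __add__(cls, other):
        # combine two metaclasses by deriving from both (rule illustrative)
        return MetaMeta(cls.__name__ + other.__name__, (cls, other), {})

class MA(type, metaclass=MetaMeta):
    pass

class MB(type, metaclass=MetaMeta):
    pass

MC = MA + MB
# MC derives from both MA and MB, so a class created with metaclass=MC
# satisfies both metaclass constraints at once.
```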

Regards, Drekin


From lkb.teichmann at gmail.com  Sat Feb 14 16:16:26 2015
From: lkb.teichmann at gmail.com (Martin Teichmann)
Date: Sat, 14 Feb 2015 16:16:26 +0100
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
In-Reply-To: <CA+=+wqAUMQEXRRt7TKjUGUSeJ9bv=jq7cCQbu-dsAR7mqUhk-A@mail.gmail.com>
References: <CAK9R32Sq0z+E1W5e3KK=N2UbPC2e-eVqF_kxWX7yT=YfzHYWmg@mail.gmail.com>
 <CA+=+wqDMrXzPaGuAc0orjUO5x86SqnFh13a7b7999amQFh3q5A@mail.gmail.com>
 <CAK9R32TKU-ESP3Q-sMerPbd4LyEjvq-amMiCBLRnSV53OJ2juQ@mail.gmail.com>
 <CA+=+wqAUMQEXRRt7TKjUGUSeJ9bv=jq7cCQbu-dsAR7mqUhk-A@mail.gmail.com>
Message-ID: <CAK9R32Ti49EDOBCmBk5CRQvUV0+LnpBv4wpmxi8rweyqC1VHLg@mail.gmail.com>

Hi Petr, Hi all,

>> Well said, and yes, it needs ironing out. I just realized it doesn't work
>> since I'm calling __init_class__ before its __class__ is set. But this
>> is a solvable problem: Let's rewrite PEP 422 so that the initialization
>> is done on subclasses, not the class itself. This is actually a very
>> important use case anyway, and if you need your method to be
>> called on yourself, just call it after the class definition! One line of
>> code should not be such a big deal.
>
> -1, that would be quite surprising behavior.

Honestly, the opposite would be (and actually was, to me) surprising
behavior. One standard usage for PEP 422 is registry classes
that register their subclasses. From my experience I can tell you
it's just annoying having to make sure the registry class itself
does not get registered...

It also makes a lot of sense: what should __init_class__ do?
Initializing the class? But that's what the class body is already
doing! The point is that we want to initialize the subclasses of a
class, why should we initialize the class?

>> I also simplified the code, now requiring that one decorate __init_subclass__
>> with @classmethod.
>
> You're writing library code. A bit more complexity here is worth it.
> See again the Rejected Design Options in the PEP.

I'm fine with or without explicit classmethod, I can put it back in
if people prefer it. I had just realized that at least the test code
for the currently proposed C implementation still uses classmethod.

> I think it's time you put this in a repository; e-mailing successive
> versions is inefficient.

Done. It's at https://github.com/tecki/metaclasses

> Also, run the tests on it -- that might uncover more problems.

Done. Of course I had to modify the tests so that my
approach - that subclasses get initialized - is tested, which makes
the tests ugly, but they run.

I also took out the part where other metaclasses may block
__init_class__; that doesn't really make sense in an approach
like mine, where it's much easier to simply circumvent my metaclass.

If it is really desired it should be no problem to put it in as well.

Greetings

Martin

From chris.barker at noaa.gov  Sat Feb 14 16:59:58 2015
From: chris.barker at noaa.gov (Chris Barker - NOAA Federal)
Date: Sat, 14 Feb 2015 07:59:58 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <20150214124103.GW2498@ando.pearwood.info>
References: <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <54DD1A0B.9000202@canterbury.ac.nz> <mbkilf$mie$1@ger.gmane.org>
 <54DE76BA.6060607@canterbury.ac.nz> <20150214030537.GT2498@ando.pearwood.info>
 <54DECFF2.6050804@canterbury.ac.nz> <20150214124103.GW2498@ando.pearwood.info>
Message-ID: <-6938347123338197588@unknownmsgid>

> dict.update() goes back to Python 1.5 and perhaps older.

Indeed - and not only that, but update() has proved useful, and there
are no methods for the other options proposed. Nor have they been
proposed to be added, AFAIK.

There is Counter, but as Steven points out -- it's a subclass, partly
because that behavior isn't usable in the general case of dicts with
arbitrary keys.

However, this thread started with a desire for the + and += operators
-- essentially syntactic sugar ( which I happen to like ). So is there
a real desire / use case for easy to spell merge-into-new-dict
behavior at all?

-Chris


> That horse has
> long since bolted. Let's just proclaim the argument about *semantics*
> settled, so we can concentrate on the really important part: arguing
> about the colour of the bike-shed. I mean, syntax/interface.
>
> There are various semantics for merging/updating a mapping with
> another mapping. We've covered them repeatedly, I'm not going to repeat
> them here. But the most commonly used one, the most general one, is what
> dict.update does, and even though it is something that needs to be
> learned, it's pretty easy to learn because it matches the left-to-right
> behaviour of other parts of Python.
>
> So let's just agree that we're looking for a not-in-place equivalent to
> dict.update. Anyone who wants one of the less general versions can do
> what Counter does, and subclass.
>
> Do we have consensus that copy-and-update should use the same semantics
> as dict.update?
>
>
> --
> Steven

From chris.barker at noaa.gov  Sat Feb 14 17:10:35 2015
From: chris.barker at noaa.gov (Chris Barker - NOAA Federal)
Date: Sat, 14 Feb 2015 08:10:35 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CAMpsgwbCeHW8eHA+EidNDVB9tE7EKFJESBEocy2jK2okwaR6pA@mail.gmail.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <20150206000438.1adeb478@fsol>
 <CALGmxEJXJQTGdpjttg2LNmjJVTrbuMW0bNiAC9-MFX65LJAORQ@mail.gmail.com>
 <20150206132852.2974b66c@fsol>
 <CACac1F8xBxEpSXU4EtqohJSXYKMbH_48gZo0=1XycwN8jD7ttw@mail.gmail.com>
 <CADiSq7du-M+tCTvaU0PdRCRN01x=2JMFS7Bw3Q13a4x0aSaY-w@mail.gmail.com>
 <CAEbHw4bJOz1w82=Nt3ZU=3o4h-fx9FKzE+dfS4ng=N2-rNLoKQ@mail.gmail.com>
 <CALGmxE+E0O5y4J-NqEQomt-THEVWoPcnJSXrRXJrn-sN6XQKRA@mail.gmail.com>
 <CAMpsgwbCeHW8eHA+EidNDVB9tE7EKFJESBEocy2jK2okwaR6pA@mail.gmail.com>
Message-ID: <-8276282515560390828@unknownmsgid>

On Feb 14, 2015, at 5:26 AM, Victor Stinner <victor.stinner at gmail.com>
wrote:

Do you propose a builtin function? I would prefer to put it in the math
module.

Yes, the math module is the proposed place for it.

You may add a new method to unittest.TestCase? It would show all parameters on
error. .assertTrue(isclose(...)) doesn't help on error.

Good point -- I'm not much of a unittest user -- if someone wants to add a
unittest method, I think that would be fine.

I did originally have that in the PEP, but it got distracting, so Guido
suggested it be removed.

-Chris
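For reference, the comparison PEP 485 proposes for math.isclose can be sketched in pure Python, with the PEP's defaults (rel_tol=1e-09, abs_tol=0.0):

```python
# PEP 485's symmetric test: the difference must be within the relative
# tolerance of the larger magnitude, or within the absolute tolerance.
def isclose(a, b, rel_tol=1e-09, abs_tol=0.0):
    return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)

isclose(1.0, 1.0 + 1e-10)            # True: well within rel_tol
isclose(0.0, 1e-12)                  # False: rel_tol is useless near zero
isclose(0.0, 1e-12, abs_tol=1e-9)    # True: abs_tol covers comparisons to zero
```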

From pmawhorter at gmail.com  Sat Feb 14 18:18:18 2015
From: pmawhorter at gmail.com (Peter Mawhorter)
Date: Sat, 14 Feb 2015 09:18:18 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <-6938347123338197588@unknownmsgid>
References: <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <54DD1A0B.9000202@canterbury.ac.nz> <mbkilf$mie$1@ger.gmane.org>
 <54DE76BA.6060607@canterbury.ac.nz> <20150214030537.GT2498@ando.pearwood.info>
 <54DECFF2.6050804@canterbury.ac.nz> <20150214124103.GW2498@ando.pearwood.info>
 <-6938347123338197588@unknownmsgid>
Message-ID: <CABaZ=CGkMKmqdWwpMZnBpd9DgwTfV9je5Lgg+M+HXg3gj4378Q@mail.gmail.com>

On Sat, Feb 14, 2015 at 7:59 AM, Chris Barker - NOAA Federal <
chris.barker at noaa.gov> wrote:

>
> However, this thread started with a desire for the + and += operators
> -- essentially syntactic sugar ( which I happen to like ). So is there
> a real desire / use case for easy to spell merge-into-new-dict
> behavior at all?
>
> -Chris
>

I'm pretty sure it's already been mentioned but as many people seem to be
asking for a use-case again: the primary use-case that I can see for either
an operator or a merge function is doing the operation (copy-and-merge) as
part of a larger expression, like so:


result = foo(bar(merged(d1, d2)))

or

result = foo(bar(d1 + d2))


There are plenty of times when succinct code is a virtue, and the lack of a
function/operator for merging dictionaries requires an extra line of code,
despite the concept being relatively straightforward. A more complicated
example provides even better motivation IMO:

result = [ merged(d1, d2) for d1 in list1 for d2 in list2 ]

In fact, the entire concept of generator expressions is motivated by the
same logic: with sufficiently clear syntax, expressing complex operations
that conform to common usage patterns and are thus easy to understand as a
single line of code (or single expression) is beneficial relative to having
to type out a for-loop every time. Generator expressions are a really great
part of Python, because they make your code more readable (it's not just
having to read fewer lines of code, but more to do with having to
mentally track fewer variables, I think). A "merged" function for dictionaries
would achieve a similar result, by in the examples above eliminating a
variable assignment and a line of code.

As for the color of the bike-shed...

Arguments here have convinced me that an operator is perhaps unnecessary,
so a function (either class method, builtin, or normal method) seems like
the best solution. Given that, I prefer "merged" for a name, as it's more
intuitive for me than "updated", although it's sad to miss out on the
parallelism with sort/sorted. I do think something past-tense is good,
because that's a mnemonic that greatly helped me learn when/where to use
sort vs. sorted.

-Peter Mawhorter

P.S.

A concrete example use case taken from code that I've actually written
before:

def foo1(**kwargs):
  defaults = {
    "a": 1,
    "b": 2
  }
  defaults.update(kwargs)
  process(defaults)

def foo2(**kwargs):
  defaults = {
    "a": 1,
    "b": 2
  }
  process(merged(defaults, kwargs))

foo1 uses the existing syntax, while foo2 uses my favorite version of the
new syntax. I've always felt uneasy about foo1, because if you read the
code too quickly, it *looks* as though two very odd things are happening:
first, you naively see the defaults being changed and wonder if there could
be bug where the defaults are retaining updates (if you later change
defaults to a global variable, watch out!), and second, you see
process(defaults) and wonder if there's a bug where the external input is
being ignored. The alternative that clears up these code readability issues
involves an extra line of code and a whole extra variable though, so it
isn't really a good option. The new syntax here does force me to pay a
small performance penalty (I'm creating the extra variable internally) but
that's almost never going to be an issue given the likely size of the
dictionaries involved. At the same time, the new syntax is miles clearer
and it's also more concise, plus it protects you in the case that you do
move "defaults" out of the function.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150214/f3edbd21/attachment-0001.html>

From steve at pearwood.info  Sat Feb 14 18:46:28 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Sun, 15 Feb 2015 04:46:28 +1100
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <-6938347123338197588@unknownmsgid>
References: <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <54DD1A0B.9000202@canterbury.ac.nz> <mbkilf$mie$1@ger.gmane.org>
 <54DE76BA.6060607@canterbury.ac.nz>
 <20150214030537.GT2498@ando.pearwood.info>
 <54DECFF2.6050804@canterbury.ac.nz>
 <20150214124103.GW2498@ando.pearwood.info>
 <-6938347123338197588@unknownmsgid>
Message-ID: <20150214174628.GZ2498@ando.pearwood.info>

On Sat, Feb 14, 2015 at 07:59:58AM -0800, Chris Barker - NOAA Federal wrote:
> > dict.update() goes back to Python 1.5 and perhaps older.
> 
> Indeed - and not only that, but update() has proved useful, and there
> are no methods  for the other options proposed. Nor have they been
> proposed to be added, AFAIK.
> 
> There is Counter, but as Steven points out -- it's a subclass, partly
> because that behavior isn't usable in the general case of dicts with
> arbitrary keys.
> 
> However, this thread started with a desire for the + and += operators
> -- essentially syntactic sugar ( which I happen to like ). So is there
> a real desire / use case for easy to spell merge-into-new-dict
> behavior at all?

I think there is, for the same reason that there is a use-case for 
sorted() and reversed(). (Only weaker, otherwise we'd already have it.) 
Sometimes you want to update in place, and sometimes you want to make a 
copy and update. So there's definitely a use-case.

(Aside: In Ruby, there's a much stronger tradition of having two methods 
for operations, an in-place one and one which returns a new result.)

But is it a strong use-case? I personally don't think so. I suspect that 
it's more useful in theory than in practice. But others disagree.

I think that "not every three line function needs to be built-in" 
applies to this, but we have sorted() and that's turned out to be 
useful. So perhaps, so long as there are *some* uses for this, there's 
no *harm* in providing it, and somebody provides the patch, the response 
is "sure, it doesn't *need* to be a built-in, but there's no real 
downside to making it such".

(Damning it with faint praise, I know. But maybe somebody will 
demonstrate that there actually are strong uses for this.)

I've already explained that of all the options, extending the dict 
constructor (and update method) to allow multiple arguments seems like 
the best interface to me. As a second-best, a method or function would 
be okay too.
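
For comparison, a sketch of what one writes today to get the effect of the
proposed multi-argument constructor (the dict(a, b) form itself does not
exist yet; the forms below do):

```python
a = {"x": 1}
b = {"x": 2, "y": 3}

# General two-step form: copy, then update in place.
merged_copy = dict(a)
merged_copy.update(b)

# One-liner, but only valid when all of b's keys are strings.
merged_kw = dict(a, **b)
```

Both produce a new dict in which b's values win, leaving a untouched; the
proposal would collapse the two-step form into one expression.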

I don't think this is a good match for an operator, and I especially 
dislike the suggestion to use plus. If it had to be an operator, I 
dislike | less strongly than +.

Why |? Because dict keys are sets. (Before Python had sets, we used 
dicts with all the values set to None as a set-like data structure.) 
Updating a dict is conceptually closer to set union than 
concatenation:

py> a = {1:None, 2:None}
py> b = {3:None, 2:None}
py> a.update(b)
py> a
{1: None, 2: None, 3: None}
py> {1, 2} | {3, 2}
{1, 2, 3}

But:

py> [1, 2] + [3, 2]
[1, 2, 3, 2]


I'm still not *really* happy about using an operator for this -- it 
doesn't feel like "operator territory" to me -- but if it had to be an 
operator, | is the right one.


-- 
Steve

From python at 2sn.net  Sat Feb 14 18:59:41 2015
From: python at 2sn.net (Alexander Heger)
Date: Sun, 15 Feb 2015 04:59:41 +1100
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CADQLQrWkgOWNM8v9pX_SmyuEQBdAZbCbHXQLE==A1WHu3emamw@mail.gmail.com>
References: <CADQLQrWkgOWNM8v9pX_SmyuEQBdAZbCbHXQLE==A1WHu3emamw@mail.gmail.com>
Message-ID: <CAN3CYHxLE9jKi+qvuhL=sUqueYe-3eGMNh0TBo7Ecxy10n5f7g@mail.gmail.com>

On 14 February 2015 at 17:23, Nathan Schneider <neatnate at gmail.com> wrote:
>>  > merging two dicts certainly is _something_ like addition.
>>
>
> So is set union (merging two sets). But in Python, binary + is used mainly
> for
>
>   - numerical addition (and element-wise numerical addition, in Counter),
> and
>   - sequence concatenation.
>
> Being unordered, dicts are more like sets than sequences, so the idea of
> supporting dict1 + dict2 but not set1 + set2 makes me queasy.

But using the same operator as for sets has the same issue: updating
dicts is not the same as combining sets either.  It behaves unlike any
other operation for which we use operators, so we might just as well
use +.


-Alexander

From python at 2sn.net  Sat Feb 14 19:25:37 2015
From: python at 2sn.net (Alexander Heger)
Date: Sun, 15 Feb 2015 05:25:37 +1100
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <20150214174628.GZ2498@ando.pearwood.info>
References: <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <54DD1A0B.9000202@canterbury.ac.nz> <mbkilf$mie$1@ger.gmane.org>
 <54DE76BA.6060607@canterbury.ac.nz>
 <20150214030537.GT2498@ando.pearwood.info>
 <54DECFF2.6050804@canterbury.ac.nz>
 <20150214124103.GW2498@ando.pearwood.info>
 <-6938347123338197588@unknownmsgid>
 <20150214174628.GZ2498@ando.pearwood.info>
Message-ID: <CAN3CYHw0rgSL7zW3QTLsYghyJ_4Hskbkq81xmVN=zDYfZH9w_Q@mail.gmail.com>

>> However, this thread started with a desire for the + and += operators
>> -- essentially syntactic sugar ( which I happen to like ). So is there
>> a real desire / use case for easy to spell merge-into-new-dict
>> behavior at all?
>
> I think there is, for the same reason that there is a use-case for
> sorted() and reversed(). (Only weaker, otherwise we'd already have it.)
> Sometimes you want to update in place, and sometimes you want to make a
> copy and update. So there's definitely a use-case.

I think the issue is not that there is a weaker use case - it's as
good, if not stronger - but that the discussion has been more emotional
because it started off with "adding" dictionaries using some
operator(s), += and +.  While I still like the operator syntax as a
shorthand, having a "merged" function - and maybe an extension of the
"update" method to multiple dicts, or an "updated" function - would
definitely be a good start; whether to also add an operator as
shorthand can be fought over later and separately.  For me, having a
"merged" function would definitely make a lot of code a lot more
legible, as Peter suggested.  I merge/update dictionaries far more
often than I sort things - the ratio is probably somewhere between
10:1 and 100:1.

> I'm still not *really* happy about using an operator for this -- it
> doesn't feel like "operator territory" to me -- but if it had to be an
> operator, | is the right one.

There is no "right" one.  See previous post.  You say it is "more
like" - but it's not the same, either way, for "+" or for "|" on sets.
Maybe we do want people to understand it is not the same as set
combination?

-Alexander

From ethan at stoneleaf.us  Sat Feb 14 20:12:26 2015
From: ethan at stoneleaf.us (Ethan Furman)
Date: Sat, 14 Feb 2015 11:12:26 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <4BE246CC-0208-4675-B783-F217B772CCF1@yahoo.com>
References: <CADQLQrUwRMdrLTJg7nEgtdDJ9wZzi4UjpM_9Uu6WxgeF-4K-8w@mail.gmail.com>
 <1586802961.2472584.1423786004303.JavaMail.yahoo@mail.yahoo.com>
 <20150213022405.GI2498@ando.pearwood.info> <7262978700952906911@unknownmsgid>
 <20150213061904.GL2498@ando.pearwood.info> <3781943608761054828@unknownmsgid>
 <81E99F71-29C3-41CE-B53F-8D2102E689E4@yahoo.com>
 <-2223593909575252041@unknownmsgid> <54DEA296.2040906@canterbury.ac.nz>
 <341D962C-DB8A-47A9-810C-358D78CAEB53@yahoo.com>
 <54DEB6FC.3050405@canterbury.ac.nz>
 <4BE246CC-0208-4675-B783-F217B772CCF1@yahoo.com>
Message-ID: <54DF9E1A.1030007@stoneleaf.us>

On 02/13/2015 06:57 PM, Andrew Barnert wrote:
> On Feb 13, 2015, at 18:46, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
> 
>> Andrew Barnert wrote:
>>
>>> I think it's reasonable for a target to be able to assume that it will get a
>>> setattr or setitem when one of its subobjects is assigned to. You might need
>>> to throw out cached computed properties, ...
>>
>> That's what I was thinking. But I'm not sure it would be
>> a good design,
> 
> Now I'm confused.
> 
> The current design of Python guarantees that an object always gets a setattr or setitem when one of its elements is assigned to. That's an important property, for the reasons I suggested above. So any change would have to preserve that property. And skipping assignment when __iadd__ returns self would not preserve that property. So it's not just backward-incompatible, it's bad.

--> some_var = ([1], 'abc')
--> tmp = some_var[0]
--> tmp += [2, 3]
--> some_var
([1, 2, 3], 'abc')

In that example, 'some_var' is modified without its __setitem__ ever being called.

--
~Ethan~


From ethan at stoneleaf.us  Sat Feb 14 20:20:14 2015
From: ethan at stoneleaf.us (Ethan Furman)
Date: Sat, 14 Feb 2015 11:20:14 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <877fvlm4xy.fsf@uwakimon.sk.tsukuba.ac.jp>
References: <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <20150212162125.GD2498@ando.pearwood.info>
 <CAPTjJmrENK=b_3-fNQSH5VPtYqgfCwUa65d-kb=ZpjrD8W3hiA@mail.gmail.com>
 <87twyrm2zx.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CAOvn4qgRm1H+CCaY=H_FsGSgMAWq1E7U_KXbAWvzi2YfF45Y1A@mail.gmail.com>
 <CALFfu7B_JnZTBhW5Ym8ZZP4hbp7JBJspN-ZOj9EMU6MmhToE8A@mail.gmail.com>
 <CALFfu7AEGkQD4J6JfKYmfkMjPBXpy68mDqwsqkOEyyWjnPqvCQ@mail.gmail.com>
 <20150213024521.GJ2498@ando.pearwood.info>
 <CALGmxELbw0BttRAPBJysVASUgHVVHLd1ydMVsiihfF2cm2kfZg@mail.gmail.com>
 <877fvlm4xy.fsf@uwakimon.sk.tsukuba.ac.jp>
Message-ID: <54DF9FEE.3050106@stoneleaf.us>

On 02/13/2015 09:34 PM, Stephen J. Turnbull wrote:
> Chris Barker writes:
> 
>  > I'm curious what the aversion is to having an operator
> 
> Speaking only for myself, I see mathematical notation as appropriate
> for abstractions, and prefer to strengthen TOOWDTI from "preferably"
> to "there's *only* one obvious way to do it" [...]

The problem with that is that what is obvious to one may not be obvious to another.

Don't get me wrong, I don't want a hundred ways to do something, obvious or not, or even ten; but two or three should be
within the grasp of most folks, and if each of the two or three groups that found their way "obvious" counted for
80%-90% of the group as a whole, I think we have a win.

This-just-seems-obvious-to-me'ly yours,
--
~Ethan~


From chris.barker at noaa.gov  Sat Feb 14 20:41:52 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Sat, 14 Feb 2015 11:41:52 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <54DF9E1A.1030007@stoneleaf.us>
References: <CADQLQrUwRMdrLTJg7nEgtdDJ9wZzi4UjpM_9Uu6WxgeF-4K-8w@mail.gmail.com>
 <1586802961.2472584.1423786004303.JavaMail.yahoo@mail.yahoo.com>
 <20150213022405.GI2498@ando.pearwood.info> <7262978700952906911@unknownmsgid>
 <20150213061904.GL2498@ando.pearwood.info> <3781943608761054828@unknownmsgid>
 <81E99F71-29C3-41CE-B53F-8D2102E689E4@yahoo.com>
 <-2223593909575252041@unknownmsgid>
 <54DEA296.2040906@canterbury.ac.nz>
 <341D962C-DB8A-47A9-810C-358D78CAEB53@yahoo.com>
 <54DEB6FC.3050405@canterbury.ac.nz>
 <4BE246CC-0208-4675-B783-F217B772CCF1@yahoo.com>
 <54DF9E1A.1030007@stoneleaf.us>
Message-ID: <CALGmxELSg6EN2+g=Neq7uCOfSzmMfFCcuKEnMo=N9qvK0RKZBQ@mail.gmail.com>

On Sat, Feb 14, 2015 at 11:12 AM, Ethan Furman <ethan at stoneleaf.us> wrote:

> > The current design of Python guarantees that an object always gets a
> setattr or setitem when one of its elements is assigned to. That's an
> important property, for the reasons I suggested above. So any change would
> have to preserve that property. And skipping assignment when __iadd__
> returns self would not preserve that property. So it's not just
> backward-incompatible, it's bad.
>
> --> some_var = ([1], 'abc')
> --> tmp = some_var[0]
> --> tmp += [2, 3]
> --> some_var
> ([1, 2, 3], 'abc')
>
> In that example, 'some_var' is modified without its __setitem__ ever being
> called.
>

not really -- an object inside some_var is modified -- and there could be
any number of other references to that object -- so this is very much how
Python works.

The fact that you can't directly use augmented assignment on an object
contained in an immutable is not a bug, but it certainly is a wart --
particularly since it will raise an exception AFTER it has, in fact,
performed the operation requested.

I have argued that this never would have come up if augmented assignment
were only used for in-place operations, but then we couldn't use it on
integers, which was apparently desperately wanted ;-) (and the name
"augmented assignment" wouldn't be a good name, either...).

I don't know enough about how this all works under the hood to know if it
could be made to work, but it seems the intention is clear here:

object[index] += something.


is a shorthand for:

tmp = object[index]
tmp += something

or in a specific case:

In [66]: t = ([], None)

In [67]: t[0].extend([3,4])

In [68]: t
Out[68]: ([3, 4], None)


Do others agree that this, in fact, has an unambiguous intent? And that it
would be nice if it worked? Or am I missing something?

-Chris




-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

From ethan at stoneleaf.us  Sat Feb 14 21:13:02 2015
From: ethan at stoneleaf.us (Ethan Furman)
Date: Sat, 14 Feb 2015 12:13:02 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CALGmxELSg6EN2+g=Neq7uCOfSzmMfFCcuKEnMo=N9qvK0RKZBQ@mail.gmail.com>
References: <CADQLQrUwRMdrLTJg7nEgtdDJ9wZzi4UjpM_9Uu6WxgeF-4K-8w@mail.gmail.com>
 <1586802961.2472584.1423786004303.JavaMail.yahoo@mail.yahoo.com>
 <20150213022405.GI2498@ando.pearwood.info> <7262978700952906911@unknownmsgid>
 <20150213061904.GL2498@ando.pearwood.info> <3781943608761054828@unknownmsgid>
 <81E99F71-29C3-41CE-B53F-8D2102E689E4@yahoo.com>
 <-2223593909575252041@unknownmsgid> <54DEA296.2040906@canterbury.ac.nz>
 <341D962C-DB8A-47A9-810C-358D78CAEB53@yahoo.com>
 <54DEB6FC.3050405@canterbury.ac.nz>
 <4BE246CC-0208-4675-B783-F217B772CCF1@yahoo.com>
 <54DF9E1A.1030007@stoneleaf.us>
 <CALGmxELSg6EN2+g=Neq7uCOfSzmMfFCcuKEnMo=N9qvK0RKZBQ@mail.gmail.com>
Message-ID: <54DFAC4E.5030106@stoneleaf.us>

On 02/14/2015 11:41 AM, Chris Barker wrote:
> On Sat, Feb 14, 2015 at 11:12 AM, Ethan Furman wrote:
>> On 02/13/2015 06:57 PM, Andrew Barnert wrote:

>> The current design of Python guarantees that an object always gets a setattr
>> or setitem when one of its elements is assigned to. That's an important property,
>> for the reasons I suggested above. So any change would have to preserve that
>> property. And skipping assignment when __iadd__ returns self would not preserve
>> that property. So it's not just backward-incompatible, it's bad.
> 
> --> some_var = ([1], 'abc')
> --> tmp = some_var[0]
> --> tmp += [2, 3]
> --> some_var
> ([1, 2, 3], 'abc')
> 
>     In that example, 'some_var' is modified without its __setitem__ ever being called.
> 
> not really -- an object in the some_var is modified -- there could be any number of other references to that object --
> so this is very much how python works.

Oops, I misread what Andrew was saying, sorry!

--
~Ethan~


From abarnert at yahoo.com  Sun Feb 15 01:09:01 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Sat, 14 Feb 2015 16:09:01 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <54DF9E1A.1030007@stoneleaf.us>
References: <CADQLQrUwRMdrLTJg7nEgtdDJ9wZzi4UjpM_9Uu6WxgeF-4K-8w@mail.gmail.com>
 <1586802961.2472584.1423786004303.JavaMail.yahoo@mail.yahoo.com>
 <20150213022405.GI2498@ando.pearwood.info> <7262978700952906911@unknownmsgid>
 <20150213061904.GL2498@ando.pearwood.info> <3781943608761054828@unknownmsgid>
 <81E99F71-29C3-41CE-B53F-8D2102E689E4@yahoo.com>
 <-2223593909575252041@unknownmsgid> <54DEA296.2040906@canterbury.ac.nz>
 <341D962C-DB8A-47A9-810C-358D78CAEB53@yahoo.com>
 <54DEB6FC.3050405@canterbury.ac.nz>
 <4BE246CC-0208-4675-B783-F217B772CCF1@yahoo.com>
 <54DF9E1A.1030007@stoneleaf.us>
Message-ID: <63E54D77-04DE-44C5-BA50-E8658CFDBA21@yahoo.com>

On Feb 14, 2015, at 11:12, Ethan Furman <ethan at stoneleaf.us> wrote:

> On 02/13/2015 06:57 PM, Andrew Barnert wrote:
>> On Feb 13, 2015, at 18:46, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
>> 
>>> Andrew Barnert wrote:
>>> 
>>>> I think it's reasonable for a target to be able to assume that it will get a
>>>> setattr or setitem when one of its subobjects is assigned to. You might need
>>>> to throw out cached computed properties, ...
>>> 
>>> That's what I was thinking. But I'm not sure it would be
>>> a good design,
>> 
>> Now I'm confused.
>> 
>> The current design of Python guarantees that an object always gets a setattr or setitem when one of its elements is assigned to. That's an important property, for the reasons I suggested above. So any change would have to preserve that property. And skipping assignment when __iadd__ returns self would not preserve that property. So it's not just backward-incompatible, it's bad.
> 
> --> some_var = ([1], 'abc')
> --> tmp = some_var[0]
> --> tmp += [2, 3]
> --> some_var
> ([1, 2, 3], 'abc')
> 
> In that example, 'some_var' is modified without its __setitem__ ever being called.

Of course. Because one of some_var's elements is not being assigned to. There is no operation on some_var here at all; there's no difference between this code and code that uses .extend() on the element.

Assigning to an item or attribute of some_var is an operation on some_var.  

It's true that languages with _different_ assignment semantics could probably give us a way to translate everything that relies on our semantics directly, and even let us write new code that we couldn't in Python (like some_var being notified in some way). C++-style references that can overload assignment, Tcl variable tracing, Cocoa KV notifications, whatever. 

But the idea that assignment to an element is an operation on the container/namespace is the semantics that Python has used for decades; anything you can reason through from that simple idea is always true; changing that would be bad.
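
To make the "target can assume it gets a setattr/setitem" point concrete,
here is a small sketch (an illustration of mine, not stdlib code) of a dict
subclass that records assignments -- note that its __setitem__ still fires
even when __iadd__ mutated the value in place:

```python
class Watched(dict):
    """A dict that records every item assignment (e.g. to drop a cache)."""
    def __init__(self, *args, **kwargs):
        self.assignments = []
        super().__init__(*args, **kwargs)

    def __setitem__(self, key, value):
        self.assignments.append(key)
        super().__setitem__(key, value)

d = Watched(n=[1])
d["n"] += [2]   # list.__iadd__ mutates in place, then __setitem__ runs anyway
# d["n"] is now [1, 2], and "n" was recorded in d.assignments
```

Skipping the store when __iadd__ returns self would silently break any
container written in this style.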

From abarnert at yahoo.com  Sun Feb 15 01:10:44 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Sat, 14 Feb 2015 16:10:44 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <63E54D77-04DE-44C5-BA50-E8658CFDBA21@yahoo.com>
References: <CADQLQrUwRMdrLTJg7nEgtdDJ9wZzi4UjpM_9Uu6WxgeF-4K-8w@mail.gmail.com>
 <1586802961.2472584.1423786004303.JavaMail.yahoo@mail.yahoo.com>
 <20150213022405.GI2498@ando.pearwood.info> <7262978700952906911@unknownmsgid>
 <20150213061904.GL2498@ando.pearwood.info> <3781943608761054828@unknownmsgid>
 <81E99F71-29C3-41CE-B53F-8D2102E689E4@yahoo.com>
 <-2223593909575252041@unknownmsgid> <54DEA296.2040906@canterbury.ac.nz>
 <341D962C-DB8A-47A9-810C-358D78CAEB53@yahoo.com>
 <54DEB6FC.3050405@canterbury.ac.nz>
 <4BE246CC-0208-4675-B783-F217B772CCF1@yahoo.com>
 <54DF9E1A.1030007@stoneleaf.us>
 <63E54D77-04DE-44C5-BA50-E8658CFDBA21@yahoo.com>
Message-ID: <D27C06D3-56C3-4644-8BF1-7BA4816F2773@yahoo.com>

Sorry, I didn't see the following two emails until after I sent this one, so this reply is now largely irrelevant.

Sent from a random iPhone

On Feb 14, 2015, at 16:09, Andrew Barnert <abarnert at yahoo.com.dmarc.invalid> wrote:

> On Feb 14, 2015, at 11:12, Ethan Furman <ethan at stoneleaf.us> wrote:
> 
>> On 02/13/2015 06:57 PM, Andrew Barnert wrote:
>>> On Feb 13, 2015, at 18:46, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
>>> 
>>>> Andrew Barnert wrote:
>>>> 
>>>>> I think it's reasonable for a target to be able to assume that it will get a
>>>>> setattr or setitem when one of its subobjects is assigned to. You might need
>>>>> to throw out cached computed properties, ...
>>>> 
>>>> That's what I was thinking. But I'm not sure it would be
>>>> a good design,
>>> 
>>> Now I'm confused.
>>> 
>>> The current design of Python guarantees that an object always gets a setattr or setitem when one of its elements is assigned to. That's an important property, for the reasons I suggested above. So any change would have to preserve that property. And skipping assignment when __iadd__ returns self would not preserve that property. So it's not just backward-incompatible, it's bad.
>> 
>> --> some_var = ([1], 'abc')
>> --> tmp = some_var[0]
>> --> tmp += [2, 3]
>> --> some_var
>> ([1, 2, 3], 'abc')
>> 
>> In that example, 'some_var' is modified without its __setitem__ ever being called.
> 
> Of course. Because one of some_var's elements is not being assigned to. There is no operation on some_var here at all; there's no difference between this code and code that uses .extend() on the element.
> 
> Assigning to an item or attribute of some_var is an operation on some_var.  
> 
> It's true that languages with _different_ assignment semantics could probably give us a way to translate everything that relies on our semantics directly, and even let us write new code that we couldn't in Python (like some_var being notified in some way). C++-style references that can overload assignment, Tcl variable tracing, Cocoa KV notifications, whatever. 
> 
> But the idea that assignment to an element is an operation on the container/namespace is the semantics that Python had used for decades;  anything you can reason through from that simple idea is always true; changing that would be bad.

From abarnert at yahoo.com  Sun Feb 15 01:42:50 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Sat, 14 Feb 2015 16:42:50 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <20150214174628.GZ2498@ando.pearwood.info>
References: <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <CANc-5Uyhq5ZdNAUXFG_2W1A44Mu0XMmM2J2cYq-aYOgVa65kHw@mail.gmail.com>
 <CAPTjJmp2wub8ip5zwON+DTjomyZC4R=afYueqc_t0dRFBoC-0A@mail.gmail.com>
 <54DD1A0B.9000202@canterbury.ac.nz> <mbkilf$mie$1@ger.gmane.org>
 <54DE76BA.6060607@canterbury.ac.nz>
 <20150214030537.GT2498@ando.pearwood.info>
 <54DECFF2.6050804@canterbury.ac.nz>
 <20150214124103.GW2498@ando.pearwood.info>
 <-6938347123338197588@unknownmsgid>
 <20150214174628.GZ2498@ando.pearwood.info>
Message-ID: <5008C2B4-1E65-4A41-B8D1-AEC2FD8810F9@yahoo.com>

I'm +1 on constructor, +0.5 on a function (whether it's called updated or merged, whether it's in builtins or collections), +0.5 on both constructor and function, -0.5 on a method, and -1 on an operator.

Unless someone is seriously championing PEP 448 for 3.5, in which case I'm -0.5 on anything, because it looks like PEP 448 would already give us one obvious way to do it, and none of the alternatives are sufficiently nicer than that way to be worth having another.

I agree with everything in Steven's last post about syntax, so I won't repeat it. Additional considerations he didn't mention:

A method or operator raises the same return-type issue that set methods have, as soon as you want to extend this beyond dict to other mappings. You're going to need to copy the _from_iterable classmethod documented only in the constructor idea, which is ugly--and it won't work as well in Mapping as in Set, because there are a ton of non-compliant mappings already in the wild, and it's not even clear how some of them _should_ work.

A constructor not only makes the type even more obvious and explicit, it also answers the implementation problem: the only code that has to know about the constructor's signature is the constructor itself, which isn't a problem. Even for weird cases like defaultdict.

A function also avoids the problem, if not quite as nicely. sorted and reversed return whatever type they want, not the type you pass in, so why should updated be any different? If you really need the result to be a MyMapping, you can do MyMapping(updated(a, b)), with the same verbosity and performance cost as when you have to do MySequence(sorted(t)).

A function has the added advantage that it can be backported trivially for 3.4 and 2.7 code.

In fact, because the constructor is more general and powerful, but the function is more backportable, I don't see much problem adding both.

On Feb 14, 2015, at 9:46, Steven D'Aprano <steve at pearwood.info> wrote:

> I've already explained that of all the options, extending the dict 
> constructor (and update method) to allow multiple arguments seems like 
> the best interface to me. As a second-best, a method or function would 
> be okay too.


From steve at pearwood.info  Sun Feb 15 02:30:49 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Sun, 15 Feb 2015 12:30:49 +1100
Subject: [Python-ideas] Augmented assignment [was Re: Adding "+" and "+="
	operators to dict]
In-Reply-To: <CALGmxELSg6EN2+g=Neq7uCOfSzmMfFCcuKEnMo=N9qvK0RKZBQ@mail.gmail.com>
References: <20150213061904.GL2498@ando.pearwood.info>
 <3781943608761054828@unknownmsgid>
 <81E99F71-29C3-41CE-B53F-8D2102E689E4@yahoo.com>
 <-2223593909575252041@unknownmsgid> <54DEA296.2040906@canterbury.ac.nz>
 <341D962C-DB8A-47A9-810C-358D78CAEB53@yahoo.com>
 <54DEB6FC.3050405@canterbury.ac.nz>
 <4BE246CC-0208-4675-B783-F217B772CCF1@yahoo.com>
 <54DF9E1A.1030007@stoneleaf.us>
 <CALGmxELSg6EN2+g=Neq7uCOfSzmMfFCcuKEnMo=N9qvK0RKZBQ@mail.gmail.com>
Message-ID: <20150215013049.GB2498@ando.pearwood.info>

This sub-thread has long since drifted away from dicts, so I've changed 
the subject to make that clear.

On Sat, Feb 14, 2015 at 11:41:52AM -0800, Chris Barker wrote:

> The fact that you can't directly use augmented assignment on an object
> contained in an immutable is not a bug, but it certainly is a wart --
> particuarly since it will raise an Exception AFTER it has, in fact,
> performed the operation requested.

Yes, but you can use augmented assignment on an object contained in an 
immutable under some circumstances. Here are three examples 
demonstrating outright failure, weird super-position of 
failed-but-succeeded, and success.


py> t = (1, )
py> t[0] += 1
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'tuple' object does not support item assignment
py> t
(1,)

py> t = ([1], )
py> t[0] += [1]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'tuple' object does not support item assignment
py> t
([1, 1],)

py> t[0][0] += 1
py> t
([2, 1],)


> I have argued that this never would have come up if augmented assignment
> were only used for in-place operations, 


And it would never happen if augmented assignment *never* was used for 
in-place operations. If it always required an assignment, then if the 
assignment failed, the object in question would be unchanged.

Alas, there's no way to enforce the rule that __iadd__ doesn't modify 
objects in place, and it actually is a nice optimization when they can 
do so.
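
Roughly, `t[0] += [1]` expands to the sequence below (a sketch of the
semantics, not literal bytecode), which is exactly why the failure arrives
only after the list has been mutated:

```python
t = ([1],)

# What `t[0] += [1]` amounts to:
tmp = t[0]                  # fetch the item
tmp = tmp.__iadd__([1])     # the list mutates in place and returns itself
try:
    t[0] = tmp              # store back -- the tuple refuses, raising TypeError
except TypeError:
    pass

assert t == ([1, 1],)       # the mutation already happened before the raise
```

The TypeError comes from the final store step, by which point the in-place
__iadd__ has already done its work.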

[...]
> I don't know enough about how this all works under the hood to know if it
> could be made to work, but it seems the intention is clear here:
> 
> object[index] += something.
> 
> 
> is a shorthand for:
> 
> tmp = object[index]
> tmp += something

No, that doesn't work. If tmp is *immutable*, then object[index] never 
gets updated.

The intention as I understand it is that:

reference += expr

should be syntactic sugar (possibly optimized) for:

reference = reference + expr

where "reference" means (for example) bare names, item references, key 
references, and chaining the same:

n
n[0]
n[0]['spam']
n[0]['spam'].eggs

etc.

I wonder if we can make this work more clearly if augmented assignments 
checked whether the same object is returned and skipped the assignment 
in that case?


-- 
Steve

From abarnert at yahoo.com  Sun Feb 15 04:10:19 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Sat, 14 Feb 2015 19:10:19 -0800
Subject: [Python-ideas] Augmented assignment [was Re: Adding "+" and
	"+=" operators to dict]
In-Reply-To: <20150215013049.GB2498@ando.pearwood.info>
References: <20150213061904.GL2498@ando.pearwood.info>
 <3781943608761054828@unknownmsgid>
 <81E99F71-29C3-41CE-B53F-8D2102E689E4@yahoo.com>
 <-2223593909575252041@unknownmsgid> <54DEA296.2040906@canterbury.ac.nz>
 <341D962C-DB8A-47A9-810C-358D78CAEB53@yahoo.com>
 <54DEB6FC.3050405@canterbury.ac.nz>
 <4BE246CC-0208-4675-B783-F217B772CCF1@yahoo.com>
 <54DF9E1A.1030007@stoneleaf.us>
 <CALGmxELSg6EN2+g=Neq7uCOfSzmMfFCcuKEnMo=N9qvK0RKZBQ@mail.gmail.com>
 <20150215013049.GB2498@ando.pearwood.info>
Message-ID: <987C44CA-736D-41EE-BF64-1B60528E297E@yahoo.com>

On Feb 14, 2015, at 17:30, Steven D'Aprano <steve at pearwood.info> wrote:

> This sub-thread has long since drifted away from dicts, so I've changed 
> the subject to make that clear.
> 
> On Sat, Feb 14, 2015 at 11:41:52AM -0800, Chris Barker wrote:
> 
>> The fact that you can't directly use augmented assignment on an object
>> contained in an immutable is not a bug, but it certainly is a wart --
>> particuarly since it will raise an Exception AFTER it has, in fact,
>> performed the operation requested.
> 
> Yes, but you can use augmented assignment on an object contained in an 
> immutable under some circumstances. Here are three examples 
> demonstrating outright failure, weird super-position of 
> failed-but-succeeded, and success.
> 
> 
> py> t = (1, )
> py> t[0] += 1
> Traceback (most recent call last):
>  File "<stdin>", line 1, in <module>
> TypeError: 'tuple' object does not support item assignment
> py> t
> (1,)
> 
> py> t = ([1], )
> py> t[0] += [1]
> Traceback (most recent call last):
>  File "<stdin>", line 1, in <module>
> TypeError: 'tuple' object does not support item assignment
> py> t
> ([1, 1],)
> 
> py> t[0][0] += 1
> py> t
> ([2, 1],)
> 
> 
>> I have argued that this never would have come up if augmented assignment
>> were only used for in-place operations,
> 
> 
> And it would never happen if augmented assignment *never* was used for 
> in-place operations. If it always required an assignment, then if the 
> assignment failed, the object in question would be unchanged.
> 
> Alas, there's no way to enforce the rule that __iadd__ doesn't modify 
> objects in place, and it actually is a nice optimization when they can 
> do so.

No, it's not just a nice optimization, it's an important part of the semantics. The whole point of being able to share mutable objects is being able to mutate them in a way that all the sharers see.

If __iadd__ didn't modify objects in-place, you'd have this:

    py> a, b = [], []
    py> c = [a, b]
    py> c[1] += [1]
    py> b
    []

Of course this would make perfect sense if Python were a pure immutable language, or a lower-level language like C++ where c = [a, b] made copies of a and b rather than creating new names referring to the same lists, or if Python had made the decision long ago that mutation is always spelled with an explicit method call rather than operator-like syntax. And any of those three alternatives could be part of a good language. But that language would be substantially different from Python.
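Concretely, the half-success in Steven's tuple example falls out of how the statement expands; a rough sketch (not the actual CPython implementation):

```python
# Rough desugaring of `t[0] += [1]` (a sketch, not the actual CPython
# implementation).  The in-place step succeeds before the store fails.
t = ([1, 2], None)
try:
    tmp = t[0]                # fetch the list out of the tuple
    tmp = tmp.__iadd__([1])   # list mutates in place and returns itself
    t[0] = tmp                # tuples reject item assignment -> TypeError
except TypeError:
    pass

print(t)   # ([1, 2, 1], None) -- the mutation already happened
```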

>> I don't know enough about how this all works under the hood to know if it
>> could be made to work, but it seems the intention is clear here:
>> 
>> object[index] += something.
>> 
>> 
>> is a shorthand for:
>> 
>> tmp = object[index]
>> tmp += something
> 
> No, that doesn't work. If tmp is *immutable*, then object[index] never 
> gets updated.
> 
> The intention as I understand it is that:
> 
> reference += expr
> 
> should be syntactic sugar (possibly optimized) for:
> 
> reference = reference + expr
> 
> where "reference" means (for example) bare names, item references, key 
> references, and chaining the same:
> 
> n
> n[0]
> n[0]['spam']
> n[0]['spam'].eggs
> 
> etc.
> 
> I wonder if we can make this work more clearly if augmented assignments 
> checked whether the same object is returned and skipped the assignment 
> in that case?

I already answered that earlier. There are plenty of objects that necessarily rely on the assumption that item/attr assignment always means __setitem__/__setattr__.

Consider any object with a cache that has to be invalidated when one of its members or attributes is set. Or a sorted list that may have to move an element if it's replaced. Or really almost any case where a custom __setattr__, an @property or custom descriptor, or a non-trivial __setitem__ is useful. All of these would break.

What you're essentially proposing is that augmented assignment is no longer really assignment, so classes that want to manage assignment in some way can't manage augmented assignment.
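A sketch of the kind of class I mean (names hypothetical): a container whose __setitem__ maintains a cached aggregate, and which therefore relies on the store half of augmented assignment actually calling __setitem__:

```python
# Hypothetical container: __setitem__ keeps a cached sum consistent,
# so it depends on `cl[i] += n` ending in a __setitem__ call.
class CachingList:
    def __init__(self, items):
        self._items = list(items)
        self._total = sum(self._items)   # cached sum of the elements

    def __getitem__(self, i):
        return self._items[i]

    def __setitem__(self, i, value):
        # Update the cache on every assignment, including the store
        # half of augmented assignment.
        self._total += value - self._items[i]
        self._items[i] = value

    @property
    def total(self):
        return self._total

cl = CachingList([1, 2, 3])
cl[0] += 10        # __getitem__ returns 1, then __setitem__ stores 11
print(cl.total)    # 16 -- the cache stayed consistent
```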

From steve at pearwood.info  Sun Feb 15 06:40:15 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Sun, 15 Feb 2015 16:40:15 +1100
Subject: [Python-ideas] Augmented assignment [was Re: Adding "+" and
	"+=" operators to dict]
In-Reply-To: <987C44CA-736D-41EE-BF64-1B60528E297E@yahoo.com>
References: <81E99F71-29C3-41CE-B53F-8D2102E689E4@yahoo.com>
 <-2223593909575252041@unknownmsgid> <54DEA296.2040906@canterbury.ac.nz>
 <341D962C-DB8A-47A9-810C-358D78CAEB53@yahoo.com>
 <54DEB6FC.3050405@canterbury.ac.nz>
 <4BE246CC-0208-4675-B783-F217B772CCF1@yahoo.com>
 <54DF9E1A.1030007@stoneleaf.us>
 <CALGmxELSg6EN2+g=Neq7uCOfSzmMfFCcuKEnMo=N9qvK0RKZBQ@mail.gmail.com>
 <20150215013049.GB2498@ando.pearwood.info>
 <987C44CA-736D-41EE-BF64-1B60528E297E@yahoo.com>
Message-ID: <20150215054015.GF2498@ando.pearwood.info>

On Sat, Feb 14, 2015 at 07:10:19PM -0800, Andrew Barnert wrote:
> On Feb 14, 2015, at 17:30, Steven D'Aprano <steve at pearwood.info> wrote:
[snip example of tuple augmented assignment which both succeeds and 
fails at the same time]

> >> I have argued that this never would have come up if augmented assignment
> >> were only used for in-place operations,
> > 
> > And it would never happen if augmented assignment *never* was used for 
> > in-place operations. If it always required an assignment, then if the 
> > assignment failed, the object in question would be unchanged.
> > 
> > Alas, there's no way to enforce the rule that __iadd__ doesn't modify 
> > objects in place, and it actually is a nice optimization when they can 
> > do so.
> 
> No, it's not just a nice optimization, it's an important part of the 
> semantics. The whole point of being able to share mutable objects is 
> being able to mutate them in a way that all the sharers see.

Sure. I didn't say that it was "just" (i.e. only) a nice optimization. 
Augmented assignment methods like __iadd__ are not only permitted but 
encouraged to perform changes in place.

As you go on to explain, those semantics are language design choice. Had 
the Python developers made different choices, then Python would 
naturally be different. But given the way Python works, you cannot 
enforce any such "no side-effects" rule for mutable objects.


> If __iadd__ didn't modify objects in-place, you'd have this:
> 
>     py> a, b = [], []
>     py> c = [a, b]
>     py> c[1] += [1]
>     py> b
>     []

Correct. And that's what happens if you write c[1] = c[1] + [1]. If you 
wanted to modify the object in place, you could write c[1].extend([1]) 
or c[1].append(1).

The original PEP for this feature:

http://legacy.python.org/dev/peps/pep-0203/

lists two rationales:

"simplicity of expression, and support for in-place operations". It 
doesn't go into much detail for the reason *why* in-place operations are 
desirable, but the one example given (numpy arrays) talks about avoiding 
needing to create a new object, possibly running out of memory, and then 
"possibly delete the original object, depending on reference count". To 
me, saving memory sounds like an optimization :-)

But of course you are correct that Python would be very different indeed if 
augmented assignment didn't allow mutation in-place.


[...]
> > I wonder if we can make this work more clearly if augmented assignments 
> > checked whether the same object is returned and skipped the assignment 
> > in that case?
> 
> I already answered that earlier. There are plenty of objects that 
> necessarily rely on the assumption that item/attr assignment always 
> means __setitem__/__setattr__.
> 
> Consider any object with a cache that has to be invalidated when one 
> of its members or attributes is set. Or a sorted list that may have to 
> move an element if it's replaced. Or really almost any case where a 
> custom __setattr__, an @property or custom descriptor, or a 
>> non-trivial __setitem__ is useful. All of these would break.

Are there many such objects where replacing a member with itself is a 
meaningful change?

E.g. would the average developer reasonably expect that as a deliberate 
design feature, spam = spam should be *guaranteed* to not be a no-op? I 
know that for Python today, it may not be a no-op, if spam is an 
expression such as foo.bar or foo[bar], and I'm not suggesting that it 
would be reasonable to change the compiler semantics so that normal "=" 
binding should skip the assignment when both sides refer to the same 
object.

But I wonder whether *augmented assignment* should do so. I don't do 
this lightly, but only to fix a wart in the language. See below.


> What you're essentially proposing is that augmented assignment is no 
> longer really assignment, so classes that want to manage assignment in 
> some way can't manage augmented assignment.

I am suggesting that perhaps we should rethink the idea that augmented 
assignment is *unconditionally* a form of assignment. We're doing this 
because the current behaviour breaks under certain circumstances. If it 
simply raised an exception, that would be okay, but the fact that the 
operation succeeds and yet still raises an exception, that's pretty bad.

In the case of mutable objects inside immutable ones, we know the 
augmented operation actually doesn't require there to be an assignment, 
because the mutation succeeds even though the assignment fails. Here's 
an example again, for anyone who is skimming the thread or has gotten lost:

t = ([1, 2], None)
t[0] += [1]


The PEP says:

    The __iadd__ hook should behave similar to __add__,
    returning the result of the operation (which could be `self')
    which is to be assigned to the variable `x'.

I'm suggesting that if __iadd__ returns self, the assignment be skipped. 
That would solve the tuple case above. You're saying it might break code 
that relies on some setter such as __setitem__ or __setattr__ being 
called. E.g.

myobj.spam = []  # spam is a property
myobj.spam += [1]  # myobj expects this to call spam's setter

But that's already broken, because the caller can trivially bypass the 
setter for any other in-place mutation:

myobj.spam.append(1)
myobj.spam[3:7] = [1, 2, 3, 4, 5]
del myobj.spam[2]
myobj.spam.sort()

etc. In other words, in the "cache invalidation" case (etc.), no real 
class that directly exposes a mutable object to the outside world can 
rely on a setter being called. It would have to wrap it in a proxy to 
intercept mutator methods, or live with the fact that it won't be 
notified of mutations.
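A short sketch of that bypass (hypothetical class): the property setter flags a change, but in-place mutation of the returned list never triggers it:

```python
# Hypothetical class: the setter records a change, e.g. to invalidate a
# cache, but mutator methods on the exposed list bypass it entirely.
class MyObj:
    def __init__(self):
        self._spam = []
        self.dirty = False

    @property
    def spam(self):
        return self._spam

    @spam.setter
    def spam(self, value):
        self._spam = list(value)
        self.dirty = True    # e.g. cache invalidation

obj = MyObj()
obj.spam.append(1)   # mutates the underlying list; the setter never runs
print(obj.dirty)     # False -- the object was never notified
```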

I used to think that Python had no choice but to perform an 
unconditional assignment, because it couldn't tell whether the operation 
was a mutation or not. But I think I was wrong. If the result of 
__iadd__ is self, then either the operation was a mutation, or the 
assignment is "effectively" a no-op. (That is, the result of the op 
hasn't changed anything.)

I say "effectively" a no-op in scare quotes because setters will 
currently be called in this situation:

myobj.spam = "a"
myobj.spam += ""  # due to interning, "a" + "" may be the same object

Currently that will call spam's setter, and the argument will be the 
identical object as spam's current value. It may be that the setter is 
rather naive, and it doesn't bother to check whether the new value is 
actually different from the old value before performing its cache 
invalidation (or whatever). So you are right that this change will 
affect some code.

That doesn't mean we can't fix this. It just means we have to go through 
a transition period, like for any other change to Python's semantics. 
During the transition, you may need to import from __future__, or there 
may be a warning, or both. After the transition, writing:

myobj.spam = myobj.spam

will still call the spam setter, always. But augmented assignment may 
not, if __iadd__ returns the same object. (Not just an object with the 
same value, it has to be the actual same object.)

I think that's a reasonable change to make, to remove this nasty gotcha 
from the language.

For anyone relying on their cache being invalidated when it is touched, 
even if the touch otherwise makes no difference, they just have to deal 
with a slight change in the definition of "touched". Augmented 
assignment won't work. Instead of using cache.time_to_live += 0 to cause 
an invalidation, use cache.time_to_live = cache.time_to_live. Or better 
still, provide an explicit cache.invalidate() method.
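As a sketch, the rule I'm proposing would behave like this hypothetical helper (illustration only, not a patch):

```python
# Hypothetical helper modelling the proposed rule for `container[key] += x`:
# skip the store when __iadd__ returns the very same object.
def aug_add_item(container, key, operand):
    tmp = container[key]
    iadd = getattr(type(tmp), '__iadd__', None)
    result = iadd(tmp, operand) if iadd is not None else tmp + operand
    if result is not tmp:        # proposed: only store a *new* object
        container[key] = result

t = ([1, 2], None)
aug_add_item(t, 0, [3])   # list.__iadd__ returns self, so no tuple store
print(t)                  # ([1, 2, 3], None) -- no TypeError

c = [1]
aug_add_item(c, 0, 2)     # ints have no __iadd__, so the store happens
print(c)                  # [3]
```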


-- 
Steve

From greg.ewing at canterbury.ac.nz  Sun Feb 15 07:11:25 2015
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Sun, 15 Feb 2015 19:11:25 +1300
Subject: [Python-ideas] Augmented assignment [was Re: Adding "+"
 and	"+=" operators to dict]
In-Reply-To: <987C44CA-736D-41EE-BF64-1B60528E297E@yahoo.com>
References: <20150213061904.GL2498@ando.pearwood.info>
 <3781943608761054828@unknownmsgid>
 <81E99F71-29C3-41CE-B53F-8D2102E689E4@yahoo.com>
 <-2223593909575252041@unknownmsgid> <54DEA296.2040906@canterbury.ac.nz>
 <341D962C-DB8A-47A9-810C-358D78CAEB53@yahoo.com>
 <54DEB6FC.3050405@canterbury.ac.nz>
 <4BE246CC-0208-4675-B783-F217B772CCF1@yahoo.com>
 <54DF9E1A.1030007@stoneleaf.us>
 <CALGmxELSg6EN2+g=Neq7uCOfSzmMfFCcuKEnMo=N9qvK0RKZBQ@mail.gmail.com>
 <20150215013049.GB2498@ando.pearwood.info>
 <987C44CA-736D-41EE-BF64-1B60528E297E@yahoo.com>
Message-ID: <54E0388D.1000508@canterbury.ac.nz>

Andrew Barnert wrote:
> Consider any object with a cache that has to be invalidated when one of its
> members or attributes is set.

If obj were such an object, then obj.member.extend(foo)
would fail to notify it of a change.

Also, if obj1 and obj2 are sharing the same list, then
obj1.member += foo would only notify obj1 of the change
and not obj2.

For these reasons, I wonder whether relying on a side
effect of += to notify an object of changes is a good
design.
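A sketch of the second scenario (hypothetical class): two holders share one list, and `obj1.member += foo` runs obj1's setter only:

```python
# Hypothetical class: two objects share one list; augmented assignment
# through obj1 notifies obj1's setter but never obj2's.
class Holder:
    def __init__(self, member):
        self._member = member
        self.notified = 0

    @property
    def member(self):
        return self._member

    @member.setter
    def member(self, value):
        self._member = value
        self.notified += 1

shared = [1]
obj1, obj2 = Holder(shared), Holder(shared)
obj1.member += [2]    # list mutates in place; setter runs on obj1 only
print(obj2.member)    # [1, 2] -- obj2 sees the change...
print(obj1.notified, obj2.notified)   # 1 0 -- ...but was never told
```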

-- 
Greg

From abarnert at yahoo.com  Sun Feb 15 07:12:30 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Sat, 14 Feb 2015 22:12:30 -0800
Subject: [Python-ideas] Augmented assignment [was Re: Adding "+" and
	"+=" operators to dict]
In-Reply-To: <20150215054015.GF2498@ando.pearwood.info>
References: <81E99F71-29C3-41CE-B53F-8D2102E689E4@yahoo.com>
 <-2223593909575252041@unknownmsgid> <54DEA296.2040906@canterbury.ac.nz>
 <341D962C-DB8A-47A9-810C-358D78CAEB53@yahoo.com>
 <54DEB6FC.3050405@canterbury.ac.nz>
 <4BE246CC-0208-4675-B783-F217B772CCF1@yahoo.com>
 <54DF9E1A.1030007@stoneleaf.us>
 <CALGmxELSg6EN2+g=Neq7uCOfSzmMfFCcuKEnMo=N9qvK0RKZBQ@mail.gmail.com>
 <20150215013049.GB2498@ando.pearwood.info>
 <987C44CA-736D-41EE-BF64-1B60528E297E@yahoo.com>
 <20150215054015.GF2498@ando.pearwood.info>
Message-ID: <F9464AC4-8BCA-45CB-8B13-C519A718D666@yahoo.com>

On Feb 14, 2015, at 21:40, Steven D'Aprano <steve at pearwood.info> wrote:

> On Sat, Feb 14, 2015 at 07:10:19PM -0800, Andrew Barnert wrote:
>> On Feb 14, 2015, at 17:30, Steven D'Aprano <steve at pearwood.info> wrote:
> [snip example of tuple augmented assignment which both succeeds and 
> fails at the same time]
> 
>>>> I have argued that this never would have come up if augmented assignment
>>>> were only used for in-place operations,
>>> 
>>> And it would never happen if augmented assignment *never* was used for 
>>> in-place operations. If it always required an assignment, then if the 
>>> assignment failed, the object in question would be unchanged.
>>> 
>>> Alas, there's no way to enforce the rule that __iadd__ doesn't modify 
>>> objects in place, and it actually is a nice optimization when they can 
>>> do so.
>> 
>> No, it's not just a nice optimization, it's an important part of the 
>> semantics. The whole point of being able to share mutable objects is 
>> being able to mutate them in a way that all the sharers see.
> 
> Sure. I didn't say that it was "just" (i.e. only) a nice optimization. 
> Augmented assignment methods like __iadd__ are not only permitted but 
> encouraged to perform changes in place.
> 
> As you go on to explain, those semantics are language design choice. Had 
> the Python developers made different choices, then Python would 
> naturally be different. But given the way Python works, you cannot 
> enforce any such "no side-effects" rule for mutable objects.
> 
> 
>> If __iadd__ didn't modify objects in-place, you'd have this:
>> 
>>    py> a, b = [], []
>>    py> c = [a, b]
>>    py> c[1] += [1]
>>    py> b
>>    []
> 
> Correct. And that's what happens if you write c[1] = c[1] + [1]. If you 
> wanted to modify the object in place, you could write c[1].extend([1]) 
> or c[1].append(1).
> 
> The original PEP for this feature:
> 
> http://legacy.python.org/dev/peps/pep-0203/
> 
> lists two rationales:
> 
> "simplicity of expression, and support for in-place operations". It 
> doesn't go into much detail for the reason *why* in-place operations are 
> desirable, but the one example given (numpy arrays) talks about avoiding 
> needing to create a new object, possibly running out of memory, and then 
> "possibly delete the original object, depending on reference count". To 
> me, saving memory sounds like an optimization :-)
> 
> But of course you are correct that Python would be very different indeed if 
> augmented assignment didn't allow mutation in-place.
> 
> 
> [...]
>>> I wonder if we can make this work more clearly if augmented assignments 
>>> checked whether the same object is returned and skipped the assignment 
>>> in that case?
>> 
>> I already answered that earlier. There are plenty of objects that 
>> necessarily rely on the assumption that item/attr assignment always 
>> means __setitem__/__setattr__.
>> 
>> Consider any object with a cache that has to be invalidated when one 
>> of its members or attributes is set. Or a sorted list that may have to 
>> move an element if it's replaced. Or really almost any case where a 
>> custom __setattr__, an @property or custom descriptor, or a 
>> non-trivial __setitem__ is useful. All of there would break.
> 
> Are there many such objects where replacing a member with itself is a 
> meaningful change?

Where replacing a member with a mutated version of the member is meaningful, sure.

Consider a sorted list again. Currently, it's perfectly valid to do sl[0] += x, because sl gets a chance to move the element, while it's not valid (according to the contract of sortedlist) to call sl[0].extend(x). This is relatively easy to understand and remember, even if it may seem odd to someone who doesn't understand Python assignment under the covers. (I could dig up a few StackOverflow questions where something like "use sl[0] += x" is the accepted answer--although sadly few if any of them explain _why_ there's a difference...)
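A minimal hypothetical sorted list makes the contract concrete:

```python
# Hypothetical minimal sorted list: item assignment re-inserts the new
# value in order, so `sl[0] += x` keeps the contract even though calling
# a mutator like extend() on an element could silently break the order.
import bisect

class SortedList:
    def __init__(self, items):
        self._items = sorted(items)

    def __getitem__(self, i):
        return self._items[i]

    def __setitem__(self, i, value):
        # Drop the old element, then re-insert the new one in order.
        del self._items[i]
        bisect.insort(self._items, value)

sl = SortedList([5, 1, 9])
sl[0] += 100        # fetches 1, computes 101, __setitem__ re-sorts
print(sl._items)    # [5, 9, 101]
```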

Obviously if we changed += to be the same as update, people could find _different_ answers. For example, you can always write "tmp = sl[0]; tmp.extend(x); sl[0] = tmp" to explicitly reintroduce assignment where we've taken it away. But would you suggest that's an improvement?

You could (and do) argue that any class that relies on assignment to maintain sorting, flush caches, update proxies, whatever is a badly-designed class, and this feature has turned out to be an attractive nuisance. (To be honest, the fact that a dict can effectively guarantee that its keys won't change by requiring hashability, while a sorted container has to document "please don't change the keys", is itself a wart on the language--but not one I'd suggest fixing.) But this really is a more significant change than you're making out.

> E.g. would the average developer reasonably expect that as a deliberate 
> design feature, spam = spam should be *guaranteed* to not be a no-op? I 
> know that for Python today, it may not be a no-op, if spam is an 
> expression such as foo.bar or foo[bar], and I'm not suggesting that it 
> would be reasonable to change the compiler semantics so that normal "=" 
> binding should skip the assignment when both sides refer to the same 
> object.
> 
> But I wonder whether *augmented assignment* should do so. I don't do 
> this lightly, but only to fix a wart in the language. See below.

This is a wart that's been there since the feature was added in the early 2.x days, and most people don't even run into it until they've been using Python for years. (Look at how many experienced Python devs in the previous thread were surprised, because they'd never run into it.) And the workaround is generally trivial: don't use assignment (in the FAQ case, and yours, just call extend instead). And it's easy to understand once you think about the semantics.

Creating a much larger, and harder-to-work-around, problem in an unknown number of libraries (and apps that use those libraries) to fix a small wart like this doesn't seem like a good trade.

And of course the _real_ answer in almost all cases is even simpler than the workaround: don't try to assign to objects inside immutable containers. Most people already know not to do this in the first place, which is why they rarely if ever run into the case where it half-works--and when they do, either the += was a mistake, or using a tuple instead of a list was a mistake, and the exception is sufficient to show them their mistake. 

I suppose it's _possible_ someone has written some code that dealt with the exception and then proceeded to work on bad data, but I don't think it's very likely. And that's really the only code that's affected by the wart.

>> What you're essentially proposing is that augmented assignment is no 
>> longer really assignment, so classes that want to manage assignment in 
>> some way can't manage augmented assignment.
> 
> I am suggesting that perhaps we should rethink the idea that augmented 
> assignment is *unconditionally* a form of assignment. We're doing this 
> because the current behaviour breaks under certain circumstances. If it 
> simply raised an exception, that would be okay, but the fact that the 
> operation succeeds and yet still raises an exception, that's pretty bad.
> 
> In the case of mutable objects inside immutable ones, we know the 
> augmented operation actually doesn't require there to be an assignment, 
> because the mutation succeeds even though the assignment fails. Here's 
> an example again, for anyone who is skimming the thread or has gotten lost:
> 
> t = ([1, 2], None)
> t[0] += [1]
> 
> 
> The PEP says:
> 
>    The __iadd__ hook should behave similar to __add__,
>    returning the result of the operation (which could be `self')
>    which is to be assigned to the variable `x'.
> 
> I'm suggesting that if __iadd__ returns self, the assignment be skipped. 
> That would solve the tuple case above. You're saying it might break code 
> that relies on some setter such as __setitem__ or __setattr__ being 
> called. E.g.
> 
> myobj.spam = []  # spam is a property
> myobj.spam += [1]  # myobj expects this to call spam's setter
> 
> But that's already broken, because the caller can trivially bypass the 
> setter for any other in-place mutation:
> 
> myobj.spam.append(1)
> myobj.spam[3:7] = [1, 2, 3, 4, 5]
> del myobj.spam[2]
> myobj.spam.sort()

The fact that somebody can work around your contract doesn't mean that your design is broken. I can usually trivially bypass your read-only x property by setting _x instead; so what? And again, I can break just about every mutable sorted container ever written just by mutating the keys; that doesn't mean sorted containers are useless.

> etc. In other words, in the "cache invalidation" case (etc.), no real 
> class that directly exposes a mutable object to the outside world can 
> rely on a setter being called. It would have to wrap it in a proxy to 
> intercept mutator methods, or live with the fact that it won't be 
> notified of mutations.
> 
> I used to think that Python had no choice but to perform an 
> unconditional assignment, because it couldn't tell whether the operation 
> was a mutation or not. But I think I was wrong. If the result of 
> __iadd__ is self, then either the operation was a mutation, or the 
> assignment is "effectively" a no-op. (That is, the result of the op 
> hasn't changed anything.)
> 
> I say "effectively" a no-op in scare quotes because setters will 
> currently be called in this situation:
> 
> myobj.spam = "a"
> myobj.spam += ""  # due to interning, "a" + "" may be the same object
> 
> Currently that will call spam's setter, and the argument will be the 
> identical object as spam's current value. It may be that the setter is 
> rather naive, and it doesn't bother to check whether the new value is 
> actually different from the old value before performing its cache 
> invalidation (or whatever). So you are right that this change will 
> affect some code.
> 
> That doesn't mean we can't fix this. It just means we have to go through 
> a transition period, like for any other change to Python's semantics. 
> During the transition, you may need to import from __future__, or there 
> may be a warning, or both. After the transition, writing:
> 
> myobj.spam = myobj.spam
> 
> will still call the spam setter, always. But augmented assignment may 
> not, if __iadd__ returns the same object. (Not just an object with the 
> same value, it has to be the actual same object.)
> 
> I think that's a reasonable change to make, to remove this nasty gotcha 
> from the language.
> 
> For anyone relying on their cache being invalidated when it is touched, 
> even if the touch otherwise makes no difference, they just have to deal 
> with a slight change in the definition of "touched". Augmented 
> assignment won't work. Instead of using cache.time_to_live += 0 to cause 
> an invalidation, use cache.time_to_live = cache.time_to_live. Or better 
> still, provide an explicit cache.invalidate() method.

I didn't think of some of these cases. But that just shows that there are _more_ cases that would be broken by the change, which means it's _more_ costly. The fact that these additional cases are easier to work around than the ones I gave doesn't change that. 

From abarnert at yahoo.com  Sun Feb 15 07:19:49 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Sat, 14 Feb 2015 22:19:49 -0800
Subject: [Python-ideas] Augmented assignment [was Re: Adding "+"
	and	"+=" operators to dict]
In-Reply-To: <54E0388D.1000508@canterbury.ac.nz>
References: <20150213061904.GL2498@ando.pearwood.info>
 <3781943608761054828@unknownmsgid>
 <81E99F71-29C3-41CE-B53F-8D2102E689E4@yahoo.com>
 <-2223593909575252041@unknownmsgid> <54DEA296.2040906@canterbury.ac.nz>
 <341D962C-DB8A-47A9-810C-358D78CAEB53@yahoo.com>
 <54DEB6FC.3050405@canterbury.ac.nz>
 <4BE246CC-0208-4675-B783-F217B772CCF1@yahoo.com>
 <54DF9E1A.1030007@stoneleaf.us>
 <CALGmxELSg6EN2+g=Neq7uCOfSzmMfFCcuKEnMo=N9qvK0RKZBQ@mail.gmail.com>
 <20150215013049.GB2498@ando.pearwood.info>
 <987C44CA-736D-41EE-BF64-1B60528E297E@yahoo.com>
 <54E0388D.1000508@canterbury.ac.nz>
Message-ID: <D4ADE53C-1128-4C98-BF85-D99BD611A31B@yahoo.com>

On Feb 14, 2015, at 22:11, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:

> Andrew Barnert wrote:
>> Consider any object with a cache that has to be invalidated when one of its
>> members or attributes is set.
> 
> If obj were such an object, then obj.member.extend(foo)
> would fail to notify it of a change.

Sure. Again, this is why every sorted container, caching object, etc. out there has to document "please don't mutate the keys/elements/members" and hope people read the docs. But you're always allowed to _assign_ to the elements. And, because of the semantics of Python, that includes augmented assignment. And people have used that fact in real code.

> Also, if obj1 and obj2 are sharing the same list, then
> obj1.member += foo would only notify obj1 of the change
> and not obj2.

Sure, and this subtlety does--rarely, but occasionally--come up in real code, and then you have to think about what to do about it.

Does that mean it would be better to face the problem every time, right from the start? In a new language, I might say yes.

> For these reasons, I wonder whether relying on a side
> effect of += to notify an object of changes is a good
> design.

But that's not the only question when you're proposing a breaking change. Even if it's a bad design, is it worth going out of our way to break code that relies on that design?

From python at 2sn.net  Sun Feb 15 09:28:20 2015
From: python at 2sn.net (Alexander Heger)
Date: Sun, 15 Feb 2015 19:28:20 +1100
Subject: [Python-ideas] Augmented assignment [was Re: Adding "+" and
 "+=" operators to dict]
In-Reply-To: <D4ADE53C-1128-4C98-BF85-D99BD611A31B@yahoo.com>
References: <20150213061904.GL2498@ando.pearwood.info>
 <3781943608761054828@unknownmsgid>
 <81E99F71-29C3-41CE-B53F-8D2102E689E4@yahoo.com>
 <-2223593909575252041@unknownmsgid>
 <54DEA296.2040906@canterbury.ac.nz>
 <341D962C-DB8A-47A9-810C-358D78CAEB53@yahoo.com>
 <54DEB6FC.3050405@canterbury.ac.nz>
 <4BE246CC-0208-4675-B783-F217B772CCF1@yahoo.com>
 <54DF9E1A.1030007@stoneleaf.us>
 <CALGmxELSg6EN2+g=Neq7uCOfSzmMfFCcuKEnMo=N9qvK0RKZBQ@mail.gmail.com>
 <20150215013049.GB2498@ando.pearwood.info>
 <987C44CA-736D-41EE-BF64-1B60528E297E@yahoo.com>
 <54E0388D.1000508@canterbury.ac.nz>
 <D4ADE53C-1128-4C98-BF85-D99BD611A31B@yahoo.com>
Message-ID: <CAN3CYHy07FFU4wRaOPGas9c+DyUgMYvNhddgH7NCwqX1YAraOw@mail.gmail.com>

> But that's not the only question when you're proposing a breaking change. Even if it's a bad design, is it worth going out of our way to break code that relies on that design?

Do you know any code that relies on that design?

From ncoghlan at gmail.com  Sun Feb 15 10:59:56 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 15 Feb 2015 19:59:56 +1000
Subject: [Python-ideas] A (meta)class algebra
In-Reply-To: <CAGRr6BE+DMOyNOUC0DzdyhfP5zb3tk1voKxU9N0qYW=5oNxPwg@mail.gmail.com>
References: <CAK9R32TcAcaeAWAChTkM5qkw+k7OzkMMg+pt=2LW-w4+vZMs4g@mail.gmail.com>
 <CA+=+wqCoPo8WH9SqOzfqps6Jr66zKs+d_nqXOCujkpha7Ln3_w@mail.gmail.com>
 <CADiSq7dBUzo4Yt9DT6B1KNjJ6s4azHw-Xv0ZYXgNATtfo5nK_A@mail.gmail.com>
 <CA+=+wqBGNGL9L6O2oO_NUnLW-+SmVaiyG1SATNxLayhK+w7JtQ@mail.gmail.com>
 <CAK9R32TO3EDtcB4K6oYp3SPqHbVkyErA9svULLSCuPP5Y2E6eA@mail.gmail.com>
 <CAGRr6BE+DMOyNOUC0DzdyhfP5zb3tk1voKxU9N0qYW=5oNxPwg@mail.gmail.com>
Message-ID: <CADiSq7eb9eWWn828CCCHtR5SAvXStOo7znfOJUCamOBVwZ6mDw@mail.gmail.com>

On 14 February 2015 at 03:27, Antony Lee <antony.lee at berkeley.edu> wrote:
> While PEP422 is somewhat useful for heavy users of metaclasses (actually not
> necessarily so heavy; as mentioned in the thread you'll hit the problem as
> soon as you want to make e.g. a QObject also an instance of another metaclass),
> I don't like it too much because it feels really ad hoc (adding yet another
> layer to the already complex class initialization machinery), whereas
> dynamically creating the sub-metaclass (inheriting from all the required
> classes) seems to be the "right" solution.

The need to better explain the rationale for the design was one of the
pieces of feedback from the previous review.

__init_class__ is a pattern extraction refactoring (one first applied
successfully by the Zope project more than 10 years ago). A fairly
common usage model for metaclasses is to provide class creation time
behaviour that gets inherited by subclasses, but otherwise has no
effect on the runtime behaviour of the class or subclasses. Currently,
the only way to obtain that is through a custom metaclass, which may
also impact other class behaviours, and hence they don't combine
readily, and you can't add one to an existing public class without
breaking backwards compatibility.

That pattern is common enough to potentially be worth pulling out into
a mechanism that *can't* have a runtime impact on class object
behaviour, especially since an existing class could adopt such a
feature while remaining backwards compatible with subclasses that add
a custom metaclass.
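The pattern being extracted can be emulated today with a custom metaclass; a rough sketch (the hook name is taken from the PEP, while this implementation is hypothetical -- the PEP itself specifies the hook as an implicit classmethod):

```python
# Rough emulation of inherited class-creation-time behaviour via a
# custom metaclass.  Hook name from PEP 422; implementation is a sketch.
class InitClassMeta(type):
    def __init__(cls, name, bases, ns):
        super().__init__(name, bases, ns)
        hook = getattr(cls, '__init_class__', None)
        if hook is not None:
            hook(cls)     # runs for this class and every subclass

class Base(metaclass=InitClassMeta):
    registry = []

    def __init_class__(cls):            # plain function for simplicity
        cls.registry.append(cls.__name__)

class Child(Base):
    pass

print(Base.registry)   # ['Base', 'Child']
```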

While this does add a second mechanism to achieve something that a
custom metaclass can already do, developers are still left with a
relatively straightforward design decision: if you need to impact the
runtime behaviour of the class or its subclasses (the way abc.ABCMeta
and enum.EnumMeta do), then you still need a custom metaclass. If you
only need inherited class definition time behaviour, and only need to
support versions of Python that support __init_class__, then you
should use it instead, as it's a much simpler mechanism for people
to understand, and hence presents much lower barriers to future
maintainability.

That kind of "if the new, more specific, mechanism applies, you should
use it, as it will be easier to write, read and maintain" situation is
the result you're looking for any time you're attempting to do pattern
extraction like this.

The key problem with custom metaclasses is that once they're in the
mix, all bets regarding type system behaviour are off - you can use
those to make classes do pretty much anything you want (the abc
module, enum and SQL ORMs are readily available examples, while
systems like Zope and PEAK show just how much customisation of the
type system can already be done just through the application of custom
metaclasses).

Asking people to learn both how metaclasses in general work *and then*
how a new automatic metaclass combination works before they can
maintain your code effectively doesn't remove any barriers to project
maintainability - instead, it adds a new one.

> In a sense, using an explicit
> and ad hoc mixer ("__add__") would be like saying that every class should
> know when it is part of a multiple inheritance chain, whereas the implicit
> ("multi_meta") mixer just relies on Python's MRO to do the right thing.  Of
> course, best would be that the "multi_meta" step be directly implemented at
> the language level, and I agree that having to add it manually isn't too
> beautiful.

Custom metaclasses are inherently hard to understand. When type is a
subclass of object, while object is itself an instance of type, and
when using a custom metaclass lets you completely rewrite the rules
for class creation, instance creation, attribute lookup, subclassing
checks, instance checks, and more, learning how to wield them
effectively does give you great power, but it also means recognising
that every time you decide to use one you're significantly reducing
the number of people that are going to be able to effectively
contribute to and help maintain that specific part of your project.
When you do decide it's worthwhile to introduce one, it's generally
because the additional complexity in the class hierarchy
implementation pays for itself elsewhere in code that is more commonly
modified (for example, compare the number of ABCs, enumerations and
ORM table definitions to the number of ABC implementations,
enumeration implementations and ORM implementations).

The key goal of PEP 422 is to take a significant number of use cases
that currently demand the use of a custom metaclass and instead make
it possible to implement, understand and maintain them with roughly
the same level of knowledge of the inner workings of Python's runtime
type system as is needed to implement a custom __new__ method.
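As a rough illustration, the kind of hook PEP 422 proposes can be
emulated today with a small custom metaclass (this is a sketch of the
idea, not the PEP's actual plumbing, which would wire the hook into
type itself):

```python
class InitClassMeta(type):
    """Minimal emulation of a PEP 422-style __init_class__ hook."""
    def __init__(cls, name, bases, namespace):
        super().__init__(name, bases, namespace)
        hook = getattr(cls, "__init_class__", None)
        if hook is not None:
            hook()  # runs for the defining class and every subclass

class Base(metaclass=InitClassMeta):
    registry = []

    @classmethod
    def __init_class__(cls):
        # Inherited class-definition-time behaviour, no metaclass
        # knowledge needed in subclasses.
        cls.registry.append(cls.__name__)

class Child(Base):
    pass
```

Under the PEP, subclasses like Child would get this behaviour by plain
inheritance, with no custom metaclass visible anywhere in user code.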

> __init_class__ could then simply be implemented as part of a normal
> metaclass, from which you could inherit without worrying about other
> metaclasses.

__init_class__ *does* get implemented as part of a metaclass: type.
type anchors the standard library's metaclass hierarchy (and most
other metaclass hierarchies) the way object anchors the overall object
hierarchy.

Once you grasp how type can be a subclass of object, while object is
itself an instance of type, then you've tackled pretty much the most
brainbending part of how the runtime type system bootstraps itself :)
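Those relationships can be checked directly at the interactive prompt:

```python
# The circular bootstrap: type subclasses object, yet object is an
# instance of type, and type is even an instance of itself.
assert issubclass(type, object)
assert isinstance(object, type)
assert isinstance(type, type)
assert type(type) is type
```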

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Sun Feb 15 11:08:18 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 15 Feb 2015 20:08:18 +1000
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
In-Reply-To: <CA+=+wqDMrXzPaGuAc0orjUO5x86SqnFh13a7b7999amQFh3q5A@mail.gmail.com>
References: <CAK9R32Sq0z+E1W5e3KK=N2UbPC2e-eVqF_kxWX7yT=YfzHYWmg@mail.gmail.com>
 <CA+=+wqDMrXzPaGuAc0orjUO5x86SqnFh13a7b7999amQFh3q5A@mail.gmail.com>
Message-ID: <CADiSq7f7P8r3H892VgGSfeeo-ocMEHsQh=hv1DO4ZAXR=6Ou9w@mail.gmail.com>

On 14 February 2015 at 20:23, Petr Viktorin <encukou at gmail.com> wrote:
> On Sat, Feb 14, 2015 at 10:34 AM, Martin Teichmann
> <lkb.teichmann at gmail.com> wrote:
>
>> I think that implementing PEP 422 as part of the language only
>> makes sense if we once would be able to drop metaclasses
>> altogether. I thought about adding a __new_class__ to PEP 422
>> that would simulate the __new__ in metaclasses, thinking that
>> this is the only way metaclasses are used.
>
> Well, if you want Python to drop metaclasses, the way starts with PEP
> 422. You have to introduce the alternative first, and then wait a
> *really* long time until you drop a feature.
> I think __new_class__ can be added after PEP 422 is in, if it turns
> out to be necessary.

There's nothing wrong with metaclasses per se - they work fine for
implementing things like abc.ABC, enum.Enum, SQL ORMs and more.
There's just a significant subset of their current use cases that
doesn't need their full power and could benefit from having a simpler
alternative mechanism available.

PEP 422 aims to carve out that subset and provide a simpler way of
doing the same thing. A pure Python equivalent has existed in Zope for
over a decade (which is referenced from the PEP after someone pointed
it out in one of the earlier review rounds).

If someone had the time to update the PEP to address the issues raised
in the last round of reviews, I'd be happy to add a third co-author :)

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Sun Feb 15 11:10:01 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 15 Feb 2015 20:10:01 +1000
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <-8276282515560390828@unknownmsgid>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <20150206000438.1adeb478@fsol>
 <CALGmxEJXJQTGdpjttg2LNmjJVTrbuMW0bNiAC9-MFX65LJAORQ@mail.gmail.com>
 <20150206132852.2974b66c@fsol>
 <CACac1F8xBxEpSXU4EtqohJSXYKMbH_48gZo0=1XycwN8jD7ttw@mail.gmail.com>
 <CADiSq7du-M+tCTvaU0PdRCRN01x=2JMFS7Bw3Q13a4x0aSaY-w@mail.gmail.com>
 <CAEbHw4bJOz1w82=Nt3ZU=3o4h-fx9FKzE+dfS4ng=N2-rNLoKQ@mail.gmail.com>
 <CALGmxE+E0O5y4J-NqEQomt-THEVWoPcnJSXrRXJrn-sN6XQKRA@mail.gmail.com>
 <CAMpsgwbCeHW8eHA+EidNDVB9tE7EKFJESBEocy2jK2okwaR6pA@mail.gmail.com>
 <-8276282515560390828@unknownmsgid>
Message-ID: <CADiSq7dT47hndDQkb_5rR9cFBfchGBxC0ZWyNsuFK=iXZqjUxw@mail.gmail.com>

On 15 February 2015 at 02:10, Chris Barker - NOAA Federal
<chris.barker at noaa.gov> wrote:
>
> On Feb 14, 2015, at 5:26 AM, Victor Stinner <victor.stinner at gmail.com>
> wrote:
>
> Do you propose a builtin function? I would prefer to put it in the math
> module.
>
> Yes, the math module is the proposed place for it.
>
> may add a new method to unittest.TestCase? It would show all parameters on
> error. .assertTrue(isclose(...)) doesn't help on error.
>
> Good point -- I'm not much of a unittest user -- if someone wants to add a
> unittest method, I think that would be fine.
>
> I did originally have that in the PEP, but it got distracting, so Guido
> suggested it be removed.

I don't think it needs to be in the PEP - assuming the PEP gets
accepted, we can do a follow-up RFE for unittest.
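A sketch of what such an RFE might add (the mixin and method name here
are invented for illustration, and it assumes the PEP's math.isclose is
available):

```python
import math
import unittest

class IsCloseAssertions:
    """Hypothetical mixin for unittest.TestCase; assertIsClose is an
    invented name, not an existing API."""

    def assertIsClose(self, a, b, rel_tol=1e-9, abs_tol=0.0):
        # Unlike assertTrue(isclose(...)), a failure reports the operands.
        if not math.isclose(a, b, rel_tol=rel_tol, abs_tol=abs_tol):
            raise self.failureException(
                "%r and %r are not close (rel_tol=%r, abs_tol=%r)"
                % (a, b, rel_tol, abs_tol))

class ExampleTest(IsCloseAssertions, unittest.TestCase):
    def runTest(self):
        self.assertIsClose(0.1 + 0.2, 0.3)  # passes despite rounding
```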

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From chris.barker at noaa.gov  Sun Feb 15 18:56:46 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Sun, 15 Feb 2015 09:56:46 -0800
Subject: [Python-ideas] Augmented assignment [was Re: Adding "+" and
 "+=" operators to dict]
In-Reply-To: <20150215013049.GB2498@ando.pearwood.info>
References: <20150213061904.GL2498@ando.pearwood.info>
 <3781943608761054828@unknownmsgid>
 <81E99F71-29C3-41CE-B53F-8D2102E689E4@yahoo.com>
 <-2223593909575252041@unknownmsgid>
 <54DEA296.2040906@canterbury.ac.nz>
 <341D962C-DB8A-47A9-810C-358D78CAEB53@yahoo.com>
 <54DEB6FC.3050405@canterbury.ac.nz>
 <4BE246CC-0208-4675-B783-F217B772CCF1@yahoo.com>
 <54DF9E1A.1030007@stoneleaf.us>
 <CALGmxELSg6EN2+g=Neq7uCOfSzmMfFCcuKEnMo=N9qvK0RKZBQ@mail.gmail.com>
 <20150215013049.GB2498@ando.pearwood.info>
Message-ID: <CALGmxEJTqtrwtf5fLFes4-AjVENEsPkSTZeV9HVw2K90uSA4xw@mail.gmail.com>

On Sat, Feb 14, 2015 at 5:30 PM, Steven D'Aprano <steve at pearwood.info>
wrote:

> On Sat, Feb 14, 2015 at 11:41:52AM -0800, Chris Barker wrote:
> > I have argued that this never would have come up if augmented assignment
> > were only used for in-place operations,
>
> And it would never happen if augmented assignment *never* was used for
> in-place operations.


Sure -- but then it would only be syntactic sugar, and mostly just for
incrementing integers.

I'm pretty sure when this was all added, there were two key use-cases:

numpy wanted an operator for in-place operations, and lots of folks wanted
an easy way to increment integers, etc. This was a way to kill two birds
with one stone -- but resulted in a good way for folks to get confused.

Operator overloading fundamentally means that the definition of operators
depends on the operands, but it's better if there isn't a totally different
concept there as well. in-place operations and operating then-reassigning
are conceptually very different.

But that cat is out of the bag -- the question is: is there a way to limit
the surprises -- can we either let "mutating an object inside an immutable"
work, or, at least, have it raise an exception before doing anything?

I suspect the answer to that is no, however :-(
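For readers following along, this is the surprise in question -- the
exception arrives only after the in-place mutation has already happened:

```python
t = ([1, 2],)
try:
    # list.__iadd__ mutates the list in place and returns it; the
    # subsequent tuple item assignment t[0] = ... then raises.
    t[0] += [3]
except TypeError:
    pass

assert t[0] == [1, 2, 3]  # the mutation happened anyway
```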

-Chris


> Alas, there's no way to enforce the rule that __iadd__ doesn't modify
> objects in place,


Sure -- though it could be made not to do that for any built-in, and be
discouraged in the docs and PEP 8.


> and it actually is a nice optimization when they can
> do so.
>

More than nice -- critical, and one of the original key use-cases.

OH well.

-Chris


-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150215/5edd357d/attachment.html>

From chris.barker at noaa.gov  Sun Feb 15 19:22:45 2015
From: chris.barker at noaa.gov (Chris Barker - NOAA Federal)
Date: Sun, 15 Feb 2015 10:22:45 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CADiSq7dT47hndDQkb_5rR9cFBfchGBxC0ZWyNsuFK=iXZqjUxw@mail.gmail.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <20150206000438.1adeb478@fsol>
 <CALGmxEJXJQTGdpjttg2LNmjJVTrbuMW0bNiAC9-MFX65LJAORQ@mail.gmail.com>
 <20150206132852.2974b66c@fsol>
 <CACac1F8xBxEpSXU4EtqohJSXYKMbH_48gZo0=1XycwN8jD7ttw@mail.gmail.com>
 <CADiSq7du-M+tCTvaU0PdRCRN01x=2JMFS7Bw3Q13a4x0aSaY-w@mail.gmail.com>
 <CAEbHw4bJOz1w82=Nt3ZU=3o4h-fx9FKzE+dfS4ng=N2-rNLoKQ@mail.gmail.com>
 <CALGmxE+E0O5y4J-NqEQomt-THEVWoPcnJSXrRXJrn-sN6XQKRA@mail.gmail.com>
 <CAMpsgwbCeHW8eHA+EidNDVB9tE7EKFJESBEocy2jK2okwaR6pA@mail.gmail.com>
 <-8276282515560390828@unknownmsgid>
 <CADiSq7dT47hndDQkb_5rR9cFBfchGBxC0ZWyNsuFK=iXZqjUxw@mail.gmail.com>
Message-ID: <4393064190549273611@unknownmsgid>

> Good point -- I'm not much of a unittest user -- if someone wants to add a
>> unittest method, I think that would be fine.
> I don't think it needs to be in the PEP - assuming the PEP gets
> accepted, we can do a follow-up RFE for unittest.

Sounds good -- thanks.

-Chris

From mistersheik at gmail.com  Sun Feb 15 22:52:42 2015
From: mistersheik at gmail.com (Neil Girdhar)
Date: Sun, 15 Feb 2015 13:52:42 -0800 (PST)
Subject: [Python-ideas] Allow parentheses to be used with "with" block
Message-ID: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>

It's great that multiple context managers can be sent to "with":

with a as b, c as d, e as f:
     suite

If the context managers have a lot of text it's very hard to comply with 
PEP8 without resorting to "\" continuations, which are proscribed by the 
Google style guide.

Other statements like import and if support enclosing their arguments in 
parentheses to force aligned continuations.  Can we have the same for 
"with"?

E.g.

with (a as b,
         c as d,
         e as f):
    suite

This is a pretty minor change to the grammar and parser.
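Until such a syntax exists, contextlib.ExitStack already gives one
PEP 8-friendly spelling today (cm here is just a stand-in for the
long-named context managers):

```python
from contextlib import ExitStack, contextmanager

@contextmanager
def cm(value):  # stand-in for a real, verbosely-named context manager
    yield value

# Equivalent of "with cm(1) as b, cm(2) as d, cm(3) as f:", with each
# manager on its own line and no backslash continuations.
with ExitStack() as stack:
    b = stack.enter_context(cm(1))
    d = stack.enter_context(cm(2))
    f = stack.enter_context(cm(3))
    assert (b, d, f) == (1, 2, 3)
```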

Best,

Neil
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150215/681e1f42/attachment.html>

From mistersheik at gmail.com  Sun Feb 15 23:16:36 2015
From: mistersheik at gmail.com (Neil Girdhar)
Date: Sun, 15 Feb 2015 14:16:36 -0800 (PST)
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CA+=+wqDwQ-1=PzJcMV0LPkkXgQdbN0qJSoaYDv+sXKDims6QHg@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <CAN1YFWt4C6S-8ST-=7Z35iXu827p1Yjk4+FaMyK5ZvDmFastUA@mail.gmail.com>
 <CA+=+wqDwQ-1=PzJcMV0LPkkXgQdbN0qJSoaYDv+sXKDims6QHg@mail.gmail.com>
Message-ID: <b16ff954-f66c-47fa-87ba-2e1187ff0748@googlegroups.com>

Can't "|" return a view?
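(A ChainMap is already such a view -- lookups fall through from left to
right, and changes to the underlying dicts show through:)

```python
from collections import ChainMap

a = {"x": 1, "y": 1}
b = {"x": 2}

view = ChainMap(b, a)          # a live view; b shadows a
assert view["x"] == 2 and view["y"] == 1

a["y"] = 99                    # mutations show through the view
assert view["y"] == 99

merged = dict(ChainMap(b, a))  # snapshot: equivalent to copy-then-update
assert merged == {"x": 2, "y": 99}
```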

On Thursday, February 12, 2015 at 9:53:12 AM UTC-5, Petr Viktorin wrote:
>
> I don't see ChainMap mentioned in this thread, so I'll fix that:
>
> On Thu, Feb 12, 2015 at 2:32 PM, Juancarlo Añez <apa... at gmail.com> 
> wrote:
>
> > most common cases would be:
> >
> >
> >     new_dict = a + b
> >     old_dict += a
>
> In today's Python, that's:
> from collections import ChainMap
> new_dict = dict(ChainMap(b, a))
> old_dict.update(a)
>
> >>     new_dict = {}
> >>     for old_dict in (a, b, c, d):
> >>         new_dict.update(old_dict)
> >
> >
> > It would be easier if we could do:
> >
> >     new_dict = {}
> >     new_dict.update(a, b, c, d)
> >
> > It would also be useful if dict.update() returned self, so this would be
> > valid:
> >
> >     new_dict = {}.update(a, b, c, d)
>
> In today's Python:
> new_dict = dict(ChainMap(d, b, c, a))
>
> Many uses don't need the dict() call — e.g. when passing it **kwargs,
> or when it's more useful as a view.
>
> Personally, the lack of a special operator for this has never bothered me.
> _______________________________________________
> Python-ideas mailing list
> Python... at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150215/1a3b3e70/attachment.html>

From lkb.teichmann at gmail.com  Sun Feb 15 23:30:40 2015
From: lkb.teichmann at gmail.com (Martin Teichmann)
Date: Sun, 15 Feb 2015 23:30:40 +0100
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
Message-ID: <CAK9R32TNjkkJpxA6LC5GqU0digZv9TQZ=wuveA+CvjaxqBD50w@mail.gmail.com>

Hi again,

staring at the code for hours, I just realized that there is
a very simple yet powerful solution to the metaclass merging
problem. The changes to be made are so simple there isn't
even a need to change the documentation!

The current algorithm to find a proper metaclass looks for
a class all other metaclasses are a subtype of. For that
it uses PyType_IsSubtype. We could simply change that
to PyObject_IsSubclass. This would give a great hook into
the system: we just need to intercept the __subclasshook__,
and we have a hook into the metaclass algorithm!

Backwards compatibility should not be a problem: how many
metaclasses are out there whose metaclass (so the
meta-meta-class) overwrites the __subclasshook__?

I changed the code appropriately, and also added a library
that uses this hook. It defines a class (called Dominant for
technical reasons, I'm waiting for suggestions for a better
name), which acts as a baseclass for metaclasses. All
metaclasses inheriting from this baseclass are combined
automatically (the algorithm doing that could still be improved,
but it works).
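For reference, the automatic combination being described can be
emulated in pure Python (a sketch; multi_meta is an invented name, and
it assumes the metaclasses are compatible enough to combine by
inheritance):

```python
def multi_meta(*metas):
    """Derive a single metaclass inheriting from all the given ones."""
    metas = tuple(m for m in metas if m is not type)
    if not metas:
        return type
    if len(metas) == 1:
        return metas[0]
    # Fails with a TypeError if the metaclasses have incompatible layouts.
    return type("CombinedMeta", metas, {})

class MetaA(type): pass
class MetaB(type): pass

class C(metaclass=multi_meta(MetaA, MetaB)):
    pass

assert isinstance(C, MetaA) and isinstance(C, MetaB)
```

The hook Martin proposes would let the interpreter do this derivation
itself instead of requiring the manual multi_meta() call.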

While I was already at it, I also added my variant of PEP 422,
which fits already into this metaclass scheme.

The code is at
https://github.com/tecki/cpython/commits/metaclass-issubclass

Greetings

Martin

From mistersheik at gmail.com  Sun Feb 15 23:19:06 2015
From: mistersheik at gmail.com (Neil Girdhar)
Date: Sun, 15 Feb 2015 14:19:06 -0800 (PST)
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <20150212170530.GE2498@ando.pearwood.info>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <CAN1YFWt4C6S-8ST-=7Z35iXu827p1Yjk4+FaMyK5ZvDmFastUA@mail.gmail.com>
 <20150212170530.GE2498@ando.pearwood.info>
Message-ID: <2c8f6a39-28b2-450e-ac0f-70a29ce52b5a@googlegroups.com>

What if we create a view (ChainMap) to be the result of the | operator?  Or 
would that not work?

On Thursday, February 12, 2015 at 12:06:14 PM UTC-5, Steven D'Aprano wrote:
>
> On Thu, Feb 12, 2015 at 09:02:04AM -0430, Juancarlo Añez wrote: 
> > On Thu, Feb 12, 2015 at 6:13 AM, Steven D'Aprano <st... at pearwood.info> 
> > wrote: 
> > 
> > > I certainly wouldn't want to write 
> > > 
> > >     new_dict = a + b + c + d 
> > > 
> > > and have O(N**2) performance when I can do this instead 
> > > 
> > 
> > It would likely not be O(N), but O(N**2) seems exaggerated. 
>
> If you want to be pedantic, it's not exactly quadratic behaviour. But 
> it would be about the same as as list and tuple repeated addition, 
> which is also not exactly quadratic behaviour, but we often describe 
> it as such. Repeated list addition is *much* slower than O(N): 
>
> py> from timeit import Timer 
> py> t = Timer("sum(([1] for i in range(100)), [])")  # add 100 lists 
> py> min(t.repeat(number=100)) 
> 0.010571401566267014 
> py> t = Timer("sum(([1] for i in range(1000)), [])")  # 10 times more 
> py> min(t.repeat(number=100)) 
> 0.4032209049910307 
>
>
> So increasing the number of lists being added by a factor of 10 leads to 
> a factor of 40 increase in time taken. Increase it by a factor of 10 
> again: 
>
> py> t = Timer("sum(([1] for i in range(10000)), [])")  # 10 times more 
> py> min(t.repeat(number=100)) # note the smaller number of trials 
> 34.258460350334644 
>
> and we now have the time goes up by a factor not of 10, not of 40, but 
> about 85. So the slowdown is getting worse. It's technically not as bad 
> as quadratic behaviour, but its bad enough to justify the term. 
>
> Will repeated dict addition be like this? Since dicts are mutable, we 
> can't use the string concatenation trick to optimize each operation, so 
> I expect so: + will have to copy each of its arguments into a new dict, 
> then the next + will copy its arguments into a new dict, and so on. 
> Exactly what happens with list. 
>
> -- 
> Steve 
> _______________________________________________ 
> Python-ideas mailing list 
> Python... at python.org 
> https://mail.python.org/mailman/listinfo/python-ideas 
> Code of Conduct: http://python.org/psf/codeofconduct/ 
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150215/b7f5f0c6/attachment.html>

From mistersheik at gmail.com  Sun Feb 15 23:14:43 2015
From: mistersheik at gmail.com (Neil Girdhar)
Date: Sun, 15 Feb 2015 14:14:43 -0800 (PST)
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <253D2851-CEE1-46F2-8F9B-D0D86C8BDCC4@stufft.io>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <253D2851-CEE1-46F2-8F9B-D0D86C8BDCC4@stufft.io>
Message-ID: <f5693415-0e68-4212-8d03-f07e2a53d65a@googlegroups.com>

I'm also +1, I agree with Donald that "|" makes more sense to me than "+" 
if only for consistency with sets.  In mathematics a mapping is a set of 
pairs (preimage, postimage), and we are taking the union of these sets.
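The analogy only goes so far, though: a set union can keep both pairs
that share a key, while a mapping has to pick a winner. A quick
illustration:

```python
a = {("x", 1)}          # a mapping viewed as a set of (key, value) pairs
b = {("x", 2)}
assert a | b == {("x", 1), ("x", 2)}  # set union keeps both pairs

# A dict "union" cannot keep both: one value per key must win.
d1, d2 = {"x": 1}, {"x": 2}
merged = dict(d1)
merged.update(d2)       # the copy-and-update spelling under discussion
assert merged == {"x": 2}             # right operand wins
```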

On Thursday, February 12, 2015 at 7:26:20 AM UTC-5, Donald Stufft wrote:
>
>
> > On Feb 12, 2015, at 5:43 AM, Steven D'Aprano <st... at pearwood.info> wrote:
> > 
> > On Thu, Feb 12, 2015 at 01:07:52AM -0800, Ian Lee wrote:
> >> Alright, I've tried to gather up all of the feedback and organize it in
> >> something approaching the alpha draft of a PEP might look like::
> >> 
> >> 
> >> Proposed New Methods on dict
> >> ============================
> >> 
> >> Adds two dicts together, returning a new object with the type of the 
> left
> >> hand operand. This would be roughly equivalent to calling:
> >> 
> >>>>> new_dict = old_dict.copy();
> >>>>> new_dict.update(other_dict)
> > 
> > A very strong -1 on the proposal. We already have a perfectly good way 
> > to spell dict += , namely dict.update. As for dict + on its own, we have 
> > a way to spell that too: exactly as you write above.
> > 
> > I think this is a feature that is more useful in theory than in 
> > practice. Which we already have a way to do a merge in place, and a 
> > copy-and-merge seems like it should be useful but I'm struggling to 
> > think of any use-cases for it. I've never needed this, and I've never 
> > seen anyone ask how to do this on the tutor or python-list mailing 
> > lists.
>
> I've wanted this several times, explicitly the copying variant of it. I 
> always
> get slightly annoyed whenever I have to manually spell out the copy and the
> update.
>
> I still think it should use | rather than + though, to match sets.
>
> ---
> Donald Stufft
> PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
>
> _______________________________________________
> Python-ideas mailing list
> Python... at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150215/5341646a/attachment.html>

From mistersheik at gmail.com  Mon Feb 16 00:05:53 2015
From: mistersheik at gmail.com (Neil Girdhar)
Date: Sun, 15 Feb 2015 15:05:53 -0800 (PST)
Subject: [Python-ideas] Allow "assigning" to Ellipse to discard values.
In-Reply-To: <CALvWhxtyPU7SWRoC2zs1eQNWy6oisvy2Rc2+BVaXzZvU_5sU6w@mail.gmail.com>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <1423604000.371468.225837961.394D3535@webmail.messagingengine.com>
 <CALvWhxtyPU7SWRoC2zs1eQNWy6oisvy2Rc2+BVaXzZvU_5sU6w@mail.gmail.com>
Message-ID: <9bb4bbc3-dbc8-4e60-8bf7-75267525e0b3@googlegroups.com>

Is this true?  I thought that b can be immediately deleted early since it 
can no longer be referenced.  C++ has guaranteed object lifetimes.  I 
didn't think Python was the same way.
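A quick experiment shows CPython does not drop unused locals early
(note this relies on CPython's reference counting; other
implementations only guarantee finalization eventually):

```python
class Tracker:
    deleted = False
    def __del__(self):
        Tracker.deleted = True

def f():
    t = Tracker()          # t is never referenced again below, but it
    assert not Tracker.deleted  # is still alive: no early drop
    return 42

f()
# On CPython, t's refcount hits zero when the frame exits, so __del__
# has run by now; this is an implementation detail, not a language
# guarantee.
assert Tracker.deleted
```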

On Tuesday, February 10, 2015 at 4:52:10 PM UTC-5, Chris Kaynor wrote:
>
> On Tue, Feb 10, 2015 at 1:33 PM, <rand... at fastmail.us> 
> wrote: 
> > Is cpython capable of noticing that the variable is never referenced (or 
> > is only assigned) after this point and dropping it early? Might be a 
> > good improvement to add if not. 
>
> I'm not sure that such would be viable in the general case, as 
> assignment of classes could have side-effects. Consider the following 
> code (untested): 
>
> class MyClass: 
>     def __init__(self): 
>         global a 
>         a = 1 
>     def __del__(self): 
>         global a 
>         a = 2 
>
> def myFunction(): 
>     b = MyClass() # Note that b is not referenced anywhere - it is 
> local and not used in the scope or a internal scope. 
>     print(a) 
>
> myFunction() 
>
> Without the optimization (existing behavior), this will print 1. With 
> the optimization, MyClass could be garbage collected early (as it 
> cannot be referenced), and thus the code would print 2 (in CPython, 
> but it could also be a race condition or otherwise undetermined 
> whether 1 or 2 will be printed). 
>
> I'm not saying that it is a good idea to write such a class, but a 
> change to how variables are assigned would be backwards-incompatible, 
> and could result in very odd behavior in somebody's code. 
>
> Chris 
> _______________________________________________ 
> Python-ideas mailing list 
> Python... at python.org 
> https://mail.python.org/mailman/listinfo/python-ideas 
> Code of Conduct: http://python.org/psf/codeofconduct/ 
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150215/e1590b01/attachment-0001.html>

From ben+python at benfinney.id.au  Mon Feb 16 00:35:26 2015
From: ben+python at benfinney.id.au (Ben Finney)
Date: Mon, 16 Feb 2015 10:35:26 +1100
Subject: [Python-ideas] Allow parentheses to be used with "with" block
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
Message-ID: <85fva6d9z5.fsf@benfinney.id.au>

Neil Girdhar <mistersheik at gmail.com>
writes:

> Other statements like import and if support enclosing their arguments in 
> parentheses to force aligned continuations.  Can we have the same for 
> "with"?
>
> E.g.
>
> with (a as b,
>          c as d,
>          e as f):
>     suite

And even the more consistent::

    with (
            a as b,
            c as d,
            e as f):
        suite

> This is a pretty minor change to the grammar and parser.

+1. It frequently surprises me that this isn't already legal syntax, and
I'd very much like to see this improvement.

-- 
 \        “Somebody told me how frightening it was how much topsoil we |
  `\   are losing each year, but I told that story around the campfire |
_o__)                             and nobody got scared.” —Jack Handey |
Ben Finney


From python at mrabarnett.plus.com  Mon Feb 16 00:37:34 2015
From: python at mrabarnett.plus.com (MRAB)
Date: Sun, 15 Feb 2015 23:37:34 +0000
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
Message-ID: <54E12DBE.80909@mrabarnett.plus.com>

On 2015-02-15 21:52, Neil Girdhar wrote:
> It's great that multiple context managers can be sent to "with":
>
> with a as b, c as d, e as f:
>       suite
>
> If the context managers have a lot of text it's very hard to comply with
> PEP8 without resorting to "\" continuations, which are proscribed by the
> Google style guide.
>
> Other statements like import and if support enclosing their arguments in
> parentheses to force aligned continuations.  Can we have the same for
> "with"?
>
> E.g.
>
> with (a as b,
>           c as d,
>           e as f):
>      suite
>
> This is a pretty minor change to the grammar and parser.
>
It's not as minor as you think, because the part before the 'as' is an
expression, and expressions can start with a parenthesis.

For example, how do you distinguish between:

     with (as a b):

and:

     with (a) as b:


From ben+python at benfinney.id.au  Mon Feb 16 00:47:43 2015
From: ben+python at benfinney.id.au (Ben Finney)
Date: Mon, 16 Feb 2015 10:47:43 +1100
Subject: [Python-ideas] Allow parentheses to be used with "with" block
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <54E12DBE.80909@mrabarnett.plus.com>
Message-ID: <854mqmd9eo.fsf@benfinney.id.au>

MRAB <python at mrabarnett.plus.com> writes:

> For example, how do you distinguish between:
>
>     with (as a b):

That's “with (a as b):”, I think you mean.

> and:
>
>     with (a) as b:

Are we expecting those two to have different effects?

-- 
 \      “It's up to the masses to distribute [music] however they want |
  `\    … The laws don't matter at that point. People sharing music in |
_o__)        their bedrooms is the new radio.” —Neil Young, 2008-05-06 |
Ben Finney


From ron3200 at gmail.com  Mon Feb 16 00:51:56 2015
From: ron3200 at gmail.com (Ron Adam)
Date: Sun, 15 Feb 2015 18:51:56 -0500
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <f5693415-0e68-4212-8d03-f07e2a53d65a@googlegroups.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <253D2851-CEE1-46F2-8F9B-D0D86C8BDCC4@stufft.io>
 <f5693415-0e68-4212-8d03-f07e2a53d65a@googlegroups.com>
Message-ID: <mbrbet$4es$1@ger.gmane.org>



On 02/15/2015 05:14 PM, Neil Girdhar wrote:
> I'm also +1, I agree with Donald that "|" makes more sense to me than "+"
> if only for consistency with sets.  In mathematics a mapping is a set of
> pairs (preimage, postimage), and we are taking the union of these sets.

An option might be to allow a union "|" if both the (key, value) pairs that 
match in each are the same.  Otherwise, raise an exception.
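A sketch of that option (strict_union is an invented name, not a
proposed spelling):

```python
def strict_union(d1, d2):
    """Merge two dicts, raising if a shared key has differing values."""
    conflicts = {k for k in d1.keys() & d2.keys() if d1[k] != d2[k]}
    if conflicts:
        raise ValueError("conflicting values for keys: %s"
                         % sorted(conflicts))
    merged = dict(d1)
    merged.update(d2)
    return merged

assert strict_union({"a": 1}, {"a": 1, "b": 2}) == {"a": 1, "b": 2}
```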

Ron


From ron3200 at gmail.com  Mon Feb 16 01:03:26 2015
From: ron3200 at gmail.com (Ron Adam)
Date: Sun, 15 Feb 2015 19:03:26 -0500
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <854mqmd9eo.fsf@benfinney.id.au>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <54E12DBE.80909@mrabarnett.plus.com> <854mqmd9eo.fsf@benfinney.id.au>
Message-ID: <mbrc4f$evu$1@ger.gmane.org>



On 02/15/2015 06:47 PM, Ben Finney wrote:
> MRAB<python at mrabarnett.plus.com>  writes:
>
>> >For example, how do you distinguish between:
>> >
>> >     with (as a b):
> That's “with (a as b):”, I think you mean.
>
>> >and:
>> >
>> >     with (a) as b:
> Are we expecting those two to have different effects?

Or should it be...

     with a, c, e as b, d, f:


This would be more in line with "=" and "in".  And you might be able to use 
an iterable to supply the values of a, c, and e.

Cheers,
    Ron


From tjreedy at udel.edu  Mon Feb 16 01:27:47 2015
From: tjreedy at udel.edu (Terry Reedy)
Date: Sun, 15 Feb 2015 19:27:47 -0500
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
Message-ID: <mbrdi4$3qk$1@ger.gmane.org>

On 2/15/2015 4:52 PM, Neil Girdhar wrote:
> It's great that multiple context managers can be sent to "with":
>
> with a as b, c as d, e as f:
>       suite
>
> If the context managers have a lot of text it's very hard to comply with
> PEP8 without resorting to "\" continuations, which are proscribed by the
> Google style guide.

Untrue.  " Backslashes may still be appropriate at times. For example, 
long, multiple with -statements cannot use implicit continuation, so 
backslashes are acceptable:

with open('/path/to/some/file/you/want/to/read') as file_1, \
      open('/path/to/some/file/being/written', 'w') as file_2:
     file_2.write(file_1.read())"

> Other statements like import and if support enclosing their arguments in
> parentheses to force aligned continuations.  Can we have the same for
> "with"?

No. Considered and rejected because it would not be trivial.

-- 
Terry Jan Reedy


From ianlee1521 at gmail.com  Mon Feb 16 01:49:20 2015
From: ianlee1521 at gmail.com (Ian Lee)
Date: Sun, 15 Feb 2015 16:49:20 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CAMpsgwbCeHW8eHA+EidNDVB9tE7EKFJESBEocy2jK2okwaR6pA@mail.gmail.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <20150206000438.1adeb478@fsol>
 <CALGmxEJXJQTGdpjttg2LNmjJVTrbuMW0bNiAC9-MFX65LJAORQ@mail.gmail.com>
 <20150206132852.2974b66c@fsol>
 <CACac1F8xBxEpSXU4EtqohJSXYKMbH_48gZo0=1XycwN8jD7ttw@mail.gmail.com>
 <CADiSq7du-M+tCTvaU0PdRCRN01x=2JMFS7Bw3Q13a4x0aSaY-w@mail.gmail.com>
 <CAEbHw4bJOz1w82=Nt3ZU=3o4h-fx9FKzE+dfS4ng=N2-rNLoKQ@mail.gmail.com>
 <CALGmxE+E0O5y4J-NqEQomt-THEVWoPcnJSXrRXJrn-sN6XQKRA@mail.gmail.com>
 <CAMpsgwbCeHW8eHA+EidNDVB9tE7EKFJESBEocy2jK2okwaR6pA@mail.gmail.com>
Message-ID: <CA+GyjM=1K0n22VQ67_rkL1Qi3-YGLoWSaZhiEFbABOo=uSMoeQ@mail.gmail.com>

+1 to adding a new method on unittest.TestCase.


~ Ian Lee

On Sat, Feb 14, 2015 at 5:26 AM, Victor Stinner <victor.stinner at gmail.com>
wrote:

> Hi,
>
> Do you propose a builtin function? I would prefer to put it in the math
> module.
>
> You may add a new method to unittest.TestCase? It would show all
> parameters on error. .assertTrue(isclose(...)) doesn't help on error.
>
> Victor
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150215/93050faa/attachment-0001.html>

From steve at pearwood.info  Mon Feb 16 02:19:03 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Mon, 16 Feb 2015 12:19:03 +1100
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <mbrdi4$3qk$1@ger.gmane.org>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <mbrdi4$3qk$1@ger.gmane.org>
Message-ID: <20150216011903.GL2498@ando.pearwood.info>

On Sun, Feb 15, 2015 at 07:27:47PM -0500, Terry Reedy wrote:
> On 2/15/2015 4:52 PM, Neil Girdhar wrote:
> >It's great that multiple context managers can be sent to "with":
> >
> >with a as b, c as d, e as f:
> >      suite
> >
> >If the context managers have a lot of text it's very hard to comply with
> >PEP8 without resorting to "\" continuations, which are proscribed by the
> >Google style guide.
...^^^^^^^^^^^^^^^^^^

 
> Untrue.  " Backslashes may still be appropriate at times. For example, 
> long, multiple with -statements cannot use implicit continuation, so 
> backslashes are acceptable:
> 
> with open('/path/to/some/file/you/want/to/read') as file_1, \
>      open('/path/to/some/file/being/written', 'w') as file_2:
>     file_2.write(file_1.read())"

Isn't that a quote from PEP 8, rather than Google's style guide?


> >Other statements like import and if support enclosing their arguments in
> >parentheses to force aligned continuations.  Can we have the same for
> >"with"?
> 
> No. Considered and rejected because it would not be trivial.

Many things are not trivial. Surely there should be a better reason than 
*just* "it is hard to do" to reject something?

Do you have a reference for this being rejected? There is an open issue 
on the tracker:

http://bugs.python.org/issue12782

As far as I can tell, the last comment of any substance was Nick 
suggesting that there *may* be a syntactic ambiguity between parentheses 
around the entire with statement and the parens around sub-expressions:

with (
      (this or that and other) as spam,
      (some_really_long_name(a, b, c, kw=d)
           .xyz['key']()) as eggs,
      mything() or yourthing(
                             alpha, beta, gamma,
                             delta, epsilon,
                            ) as cheese
      ):
    block


I don't know enough about Python's parser to do more than guess, but I 
guess that somebody may need to actually try extending the grammar to 
support this and see what happens?



-- 
Steve

From ben+python at benfinney.id.au  Mon Feb 16 02:21:23 2015
From: ben+python at benfinney.id.au (Ben Finney)
Date: Mon, 16 Feb 2015 12:21:23 +1100
Subject: [Python-ideas] Allow parentheses to be used with "with" block
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <mbrdi4$3qk$1@ger.gmane.org>
Message-ID: <85zj8ebqi4.fsf@benfinney.id.au>

Terry Reedy <tjreedy at udel.edu> writes:

> On 2/15/2015 4:52 PM, Neil Girdhar wrote:
> > Other statements like import and if support enclosing their
> > arguments in parentheses to force aligned continuations. Can we have
> > the same for "with"?
>
> No. Considered and rejected because it would not be trivial.

Can you give a reference to that official rejection, so it can be found
in the context of this thread?

-- 
 \        “It is the responsibility of intellectuals to tell the truth |
  `\                    and to expose lies.” —Noam Chomsky, 1967-02-23 |
_o__)                                                                  |
Ben Finney


From mistersheik at gmail.com  Mon Feb 16 02:33:45 2015
From: mistersheik at gmail.com (Neil Girdhar)
Date: Sun, 15 Feb 2015 20:33:45 -0500
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <CAA68w_=kKmdpm+c_m1kmgxmkarwUuP6fBJxu0cGWbXJ3qA1Dkg@mail.gmail.com>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <mbrdi4$3qk$1@ger.gmane.org>
 <CAA68w_=kKmdpm+c_m1kmgxmkarwUuP6fBJxu0cGWbXJ3qA1Dkg@mail.gmail.com>
Message-ID: <CAA68w_n0jnh7=nOwSy+xUpsf1HpB+JgMrh51=WhN-ng8+URdsg@mail.gmail.com>

Oh, hmm, I thought it would just be converting:

with_stmt: 'with' with_item (',' with_item)*  ':' suite

to

with_stmt: 'with' ('(' with_item (',' with_item)* ')' | with_item (','
with_item)*)  ':' suite

but I guess that is ambiguous for a LL(1) parser.

Well, maybe it's harder than I thought.  Someone better with grammars than
me might know a nice way around this.

Best,

Neil


On Sun, Feb 15, 2015 at 8:28 PM, Neil Girdhar <mistersheik at gmail.com> wrote:

>
>
> On Sun, Feb 15, 2015 at 7:27 PM, Terry Reedy <tjreedy at udel.edu> wrote:
>
>> On 2/15/2015 4:52 PM, Neil Girdhar wrote:
>>
>>> It's great that multiple context managers can be sent to "with":
>>>
>>> with a as b, c as d, e as f:
>>>       suite
>>>
>>> If the context managers have a lot of text it's very hard to comply with
>>> PEP8 without resorting to "\" continuations, which are proscribed by the
>>> Google style guide.
>>>
>>
>> Untrue.  " Backslashes may still be appropriate at times. For example,
>> long, multiple with -statements cannot use implicit continuation, so
>> backslashes are acceptable:
>>
>
> Where are you looking at that?  The one I see is here:
>
> https://google-styleguide.googlecode.com/svn/trunk/pyguide.html#Line_length
>
> Explicitly says:
>
> Exceptions:
>
>    - Long import statements.
>    - URLs in comments.
>
> Do not use backslash line continuation.
>
>
>>
>> with open('/path/to/some/file/you/want/to/read') as file_1, \
>>      open('/path/to/some/file/being/written', 'w') as file_2:
>>     file_2.write(file_1.read())"
>>
>>  Other statements like import and if support enclosing their arguments in
>>> parentheses to force aligned continuations.  Can we have the same for
>>> "with"?
>>>
>>
>> No. Considered and rejected because it would not be trivial.
>>
>>
> If your argument is the amount of work, I might be able to find the time
> to do the work  *if someone will promise to review it quickly*.  I think
> it's not more than an afternoon to modify cpython.
>
>
>> --
>> Terry Jan Reedy
>>
>> _______________________________________________
>> Python-ideas mailing list
>> Python-ideas at python.org
>> https://mail.python.org/mailman/listinfo/python-ideas
>> Code of Conduct: http://python.org/psf/codeofconduct/
>>
>> --
>>
>> --- You received this message because you are subscribed to a topic in
>> the Google Groups "python-ideas" group.
>> To unsubscribe from this topic, visit https://groups.google.com/d/
>> topic/python-ideas/y9rRQhVdMn4/unsubscribe.
>> To unsubscribe from this group and all its topics, send an email to
>> python-ideas+unsubscribe at googlegroups.com.
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150215/0d0a686a/attachment.html>

From mistersheik at gmail.com  Mon Feb 16 02:28:51 2015
From: mistersheik at gmail.com (Neil Girdhar)
Date: Sun, 15 Feb 2015 20:28:51 -0500
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <mbrdi4$3qk$1@ger.gmane.org>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <mbrdi4$3qk$1@ger.gmane.org>
Message-ID: <CAA68w_=kKmdpm+c_m1kmgxmkarwUuP6fBJxu0cGWbXJ3qA1Dkg@mail.gmail.com>

On Sun, Feb 15, 2015 at 7:27 PM, Terry Reedy <tjreedy at udel.edu> wrote:

> On 2/15/2015 4:52 PM, Neil Girdhar wrote:
>
>> It's great that multiple context managers can be sent to "with":
>>
>> with a as b, c as d, e as f:
>>       suite
>>
>> If the context managers have a lot of text it's very hard to comply with
>> PEP8 without resorting to "\" continuations, which are proscribed by the
>> Google style guide.
>>
>
> Untrue.  " Backslashes may still be appropriate at times. For example,
> long, multiple with -statements cannot use implicit continuation, so
> backslashes are acceptable:
>

Where are you looking at that?  The one I see is here:

https://google-styleguide.googlecode.com/svn/trunk/pyguide.html#Line_length

Explicitly says:

Exceptions:

   - Long import statements.
   - URLs in comments.

Do not use backslash line continuation.


>
> with open('/path/to/some/file/you/want/to/read') as file_1, \
>      open('/path/to/some/file/being/written', 'w') as file_2:
>     file_2.write(file_1.read())"
>
>  Other statements like import and if support enclosing their arguments in
>> parentheses to force aligned continuations.  Can we have the same for
>> "with"?
>>
>
> No. Considered and rejected because it would not be trivial.
>
>
If your argument is the amount of work, I might be able to find the time to
do the work  *if someone will promise to review it quickly*.  I think it's
not more than an afternoon to modify cpython.


> --
> Terry Jan Reedy
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>
> --
>
> --- You received this message because you are subscribed to a topic in the
> Google Groups "python-ideas" group.
> To unsubscribe from this topic, visit https://groups.google.com/d/
> topic/python-ideas/y9rRQhVdMn4/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to
> python-ideas+unsubscribe at googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150215/239a6d04/attachment-0001.html>

From python at mrabarnett.plus.com  Mon Feb 16 02:59:01 2015
From: python at mrabarnett.plus.com (MRAB)
Date: Mon, 16 Feb 2015 01:59:01 +0000
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <854mqmd9eo.fsf@benfinney.id.au>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <54E12DBE.80909@mrabarnett.plus.com> <854mqmd9eo.fsf@benfinney.id.au>
Message-ID: <54E14EE5.2070805@mrabarnett.plus.com>

On 2015-02-15 23:47, Ben Finney wrote:
> MRAB <python at mrabarnett.plus.com> writes:
>
>> For example, how do you distinguish between:
>>
>>     with (as a b):
>
> That's “with (a as b):”, I think you mean.
>
Correct.

>> and:
>>
>>     with (a) as b:
>
> Are we expecting those two to have different effects?
>
That's not the question. The question is how easy they are to
distinguish between when parsing.

A longer example is:

     with (a) as b, c as d:

versus:

     with (a as b, c as d):

Imagine you've just parsed the 'with'. You know that it's followed by
an expression, and that an expression can start with '(', but now
you're saying that the '(' could also be the start of multiple
as-expressions. How can you tell which one you're parsing?

The problem doesn't occur when parsing an 'import' statement because
_that_ doesn't use parentheses otherwise.
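[Editorial illustration of why '(' is already claimed after "with": the
first form is legal today, because the parentheses are ordinary expression
grouping. The cm helper below is made up for demonstration.]

```python
from contextlib import contextmanager

@contextmanager
def cm(name):
    # Trivial context manager used only to illustrate the parse.
    yield name

# Legal today: '(' starts an ordinary parenthesized expression.
with (cm('a')) as b:
    assert b == 'a'

# The proposal would also let '(' start a list of "expr as name" items,
# so after reading "with (" an LL(1) parser could not yet tell which
# form it is looking at.
```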


From python at mrabarnett.plus.com  Mon Feb 16 03:06:29 2015
From: python at mrabarnett.plus.com (MRAB)
Date: Mon, 16 Feb 2015 02:06:29 +0000
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <CAA68w_n0jnh7=nOwSy+xUpsf1HpB+JgMrh51=WhN-ng8+URdsg@mail.gmail.com>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <mbrdi4$3qk$1@ger.gmane.org>
 <CAA68w_=kKmdpm+c_m1kmgxmkarwUuP6fBJxu0cGWbXJ3qA1Dkg@mail.gmail.com>
 <CAA68w_n0jnh7=nOwSy+xUpsf1HpB+JgMrh51=WhN-ng8+URdsg@mail.gmail.com>
Message-ID: <54E150A5.7020806@mrabarnett.plus.com>

On 2015-02-16 01:33, Neil Girdhar wrote:
> Oh, hmm, I thought it would just be converting:
>
> with_stmt: 'with' with_item (',' with_item)*  ':' suite
>
> to
>
> with_stmt: 'with' ('(' with_item (',' with_item)* ')' | with_item (','
> with_item)*)  ':' suite
>
> but I guess that is ambiguous for a LL(1) parser.
>
> Well, maybe it's harder than I thought.  Someone better with grammars
> than me might know a nice way around this.
>
There might be a way around it, but it won't be nice! :-)


From ron3200 at gmail.com  Mon Feb 16 04:46:29 2015
From: ron3200 at gmail.com (Ron Adam)
Date: Sun, 15 Feb 2015 22:46:29 -0500
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <mbrc4f$evu$1@ger.gmane.org>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <54E12DBE.80909@mrabarnett.plus.com> <854mqmd9eo.fsf@benfinney.id.au>
 <mbrc4f$evu$1@ger.gmane.org>
Message-ID: <mbrp6m$83d$1@ger.gmane.org>



On 02/15/2015 07:03 PM, Ron Adam wrote:
>
>
> On 02/15/2015 06:47 PM, Ben Finney wrote:
>> MRAB<python at mrabarnett.plus.com>  writes:
>>
>>> >For example, how do you distinguish between:
>>> >
>>> >     with (as a b):
>> That's “with (a as b):”, I think you mean.
>>
>>> >and:
>>> >
>>> >     with (a) as b:
>> Are we expecting those two to have different effects?
>
> Or should it be...
>
>      With a, c, e as b, d, f:
>
>
> This would be more in line with "=" and "in".  And you might be able to use
> an iterable to give a, c, and e. It's values.

Actually it would be the reverse of "=" and "in".  The names being bound to 
are on the  right instead of the left.

But this choice has already been discussed.  So it's not really an option 
today.

The problem I see with parentheses in this case is that they are only used 
to avoid using the line continuation char.  They represent neither a tuple 
nor an expression.

Ron



From greg.ewing at canterbury.ac.nz  Mon Feb 16 06:32:34 2015
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Mon, 16 Feb 2015 18:32:34 +1300
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <mbrp6m$83d$1@ger.gmane.org>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <54E12DBE.80909@mrabarnett.plus.com> <854mqmd9eo.fsf@benfinney.id.au>
 <mbrc4f$evu$1@ger.gmane.org> <mbrp6m$83d$1@ger.gmane.org>
Message-ID: <54E180F2.2060209@canterbury.ac.nz>

I *think* this is unambiguous:

    with a as b and c as d and e as f:
       ...

because the rule for a with statement is

with_stmt: 'with' with_item (',' with_item)*  ':' suite
with_item: test ['as' expr]

and expr doesn't include 'and'.

-- 
Greg

From abarnert at yahoo.com  Mon Feb 16 07:31:10 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Sun, 15 Feb 2015 22:31:10 -0800
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <54E180F2.2060209@canterbury.ac.nz>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <54E12DBE.80909@mrabarnett.plus.com> <854mqmd9eo.fsf@benfinney.id.au>
 <mbrc4f$evu$1@ger.gmane.org> <mbrp6m$83d$1@ger.gmane.org>
 <54E180F2.2060209@canterbury.ac.nz>
Message-ID: <56B34142-B8D9-457D-A9EF-4554025333F4@yahoo.com>

On Feb 15, 2015, at 21:32, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:

> I *think* this is unambiguous:
> 
>   with a as b and c as d and e as f:
>      ...
> 
> because the rule for a with statement is
> 
> with_stmt: 'with' with_item (',' with_item)*  ':' suite
> with_item: test ['as' expr]
> 
> and expr doesn't include 'and'.

How could expr not include and?

But... Isn't a with target a target, not an expr? (And, if not, shouldn't it be?) And targets don't include and. So this still works.

But how does it help? Where can you add parens, or otherwise solve the continuation issue, with and that you couldn't with commas?

From mistersheik at gmail.com  Mon Feb 16 06:40:41 2015
From: mistersheik at gmail.com (Neil Girdhar)
Date: Sun, 15 Feb 2015 21:40:41 -0800 (PST)
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <54E180F2.2060209@canterbury.ac.nz>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <54E12DBE.80909@mrabarnett.plus.com> <854mqmd9eo.fsf@benfinney.id.au>
 <mbrc4f$evu$1@ger.gmane.org> <mbrp6m$83d$1@ger.gmane.org>
 <54E180F2.2060209@canterbury.ac.nz>
Message-ID: <71fbbfdc-f7cd-470c-89c4-bdaa6fe746d3@googlegroups.com>

Personally, I would rather wait for a day far away when Python removes the 
LL(1) grammar restriction than introduce weird new syntax.  Maybe when the 
reference standard Python is written in Python and Python has a good parser 
module.  With backtracking, it should be fine and unambiguous.

Best,

Neil

On Monday, February 16, 2015 at 12:33:23 AM UTC-5, Greg Ewing wrote:
>
> I *think* this is unambiguous: 
>
>     with a as b and c as d and e as f: 
>        ... 
>
> because the rule for a with statement is 
>
> with_stmt: 'with' with_item (',' with_item)*  ':' suite 
> with_item: test ['as' expr] 
>
> and expr doesn't include 'and'. 
>
> -- 
> Greg 
> _______________________________________________ 
> Python-ideas mailing list 
> Python... at python.org 
> https://mail.python.org/mailman/listinfo/python-ideas 
> Code of Conduct: http://python.org/psf/codeofconduct/ 
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150215/435f50bd/attachment.html>

From p.f.moore at gmail.com  Mon Feb 16 10:52:21 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Mon, 16 Feb 2015 09:52:21 +0000
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <71fbbfdc-f7cd-470c-89c4-bdaa6fe746d3@googlegroups.com>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <54E12DBE.80909@mrabarnett.plus.com>
 <854mqmd9eo.fsf@benfinney.id.au> <mbrc4f$evu$1@ger.gmane.org>
 <mbrp6m$83d$1@ger.gmane.org> <54E180F2.2060209@canterbury.ac.nz>
 <71fbbfdc-f7cd-470c-89c4-bdaa6fe746d3@googlegroups.com>
Message-ID: <CACac1F86kAL8ca=9-hGDoaZM7JfNARPzNYqH=yAtLyMPRSw8ig@mail.gmail.com>

On 16 February 2015 at 05:40, Neil Girdhar <mistersheik at gmail.com> wrote:
> Personally, I would rather wait for a day far away when Python removes the
> LL(1) grammar restriction than introduce weird new syntax.  Maybe when the
> reference standard Python is written in Python and Python has a good parser
> module.  With backtracking, it should be fine and unambiguous.

Note that the fact that the parser is LL(1) is a design feature rather
than a limitation. Guido has certainly in the past indicated that it's
a deliberate choice. The idea is that LL(1) grammars are not only
easier for compilers to parse but also for users to understand.

So the day you're waiting for is indeed likely to be far away :-)
Paul

From mistersheik at gmail.com  Mon Feb 16 10:58:26 2015
From: mistersheik at gmail.com (Neil Girdhar)
Date: Mon, 16 Feb 2015 01:58:26 -0800 (PST)
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <CACac1F86kAL8ca=9-hGDoaZM7JfNARPzNYqH=yAtLyMPRSw8ig@mail.gmail.com>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <54E12DBE.80909@mrabarnett.plus.com>
 <854mqmd9eo.fsf@benfinney.id.au> <mbrc4f$evu$1@ger.gmane.org>
 <mbrp6m$83d$1@ger.gmane.org> <54E180F2.2060209@canterbury.ac.nz>
 <71fbbfdc-f7cd-470c-89c4-bdaa6fe746d3@googlegroups.com>
 <CACac1F86kAL8ca=9-hGDoaZM7JfNARPzNYqH=yAtLyMPRSw8ig@mail.gmail.com>
Message-ID: <5d60c171-cb52-47f1-931b-6698b152bf21@googlegroups.com>

I agree with that idea in general, but I do think that a with statement 
having parentheses would not be confusing for users to parse even though it 
would not be LL(1).

On Monday, February 16, 2015 at 4:53:05 AM UTC-5, Paul Moore wrote:
>
> On 16 February 2015 at 05:40, Neil Girdhar <miste... at gmail.com> wrote: 
> > Personally, I would rather wait for a day far away when Python removes 
> the 
> > LL(1) grammar restriction than introduce weird new syntax.  Maybe when 
> the 
> > reference standard Python is written in Python and Python has a good 
> parser 
> > module.  With backtracking, it should be fine and unambiguous. 
>
> Note that the fact that the parser is LL(1) is a design feature rather 
> than a limitation. Guido has certainly in the past indicated that it's 
> a deliberate choice. The idea is that LL(1) grammars are not only 
> easier for compilers to parse but also for users to understand. 
>
> So the day you're waiting for is indeed likely to be far away :-) 
> Paul 
> _______________________________________________ 
> Python-ideas mailing list 
> Python... at python.org 
> https://mail.python.org/mailman/listinfo/python-ideas 
> Code of Conduct: http://python.org/psf/codeofconduct/ 
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150216/e8ae3e81/attachment.html>

From python at mrabarnett.plus.com  Mon Feb 16 11:55:07 2015
From: python at mrabarnett.plus.com (MRAB)
Date: Mon, 16 Feb 2015 10:55:07 +0000
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <CAA68w_n0jnh7=nOwSy+xUpsf1HpB+JgMrh51=WhN-ng8+URdsg@mail.gmail.com>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <mbrdi4$3qk$1@ger.gmane.org>
 <CAA68w_=kKmdpm+c_m1kmgxmkarwUuP6fBJxu0cGWbXJ3qA1Dkg@mail.gmail.com>
 <CAA68w_n0jnh7=nOwSy+xUpsf1HpB+JgMrh51=WhN-ng8+URdsg@mail.gmail.com>
Message-ID: <54E1CC8B.4070808@mrabarnett.plus.com>

On 2015-02-16 01:33, Neil Girdhar wrote:
> Oh, hmm, I thought it would just be converting:
>
> with_stmt: 'with' with_item (',' with_item)*  ':' suite
>
> to
>
> with_stmt: 'with' ('(' with_item (',' with_item)* ')' | with_item (','
> with_item)*)  ':' suite
>
> but I guess that is ambiguous for a LL(1) parser.
>
> Well, maybe it's harder than I thought.  Someone better with grammars
> than me might know a nice way around this.
>
I think it might be possible, but it's not very nice.

Change this:

     expression_list ::=  expression ( "," expression )* [","]

to:

     expression_list ::=  expression ["as" target]
                          ( "," expression ["as" target] )* [","]

and then add checks to forbid the 'as' form where it's not acceptable.

For example, the 'if' statement is:

if_stmt ::=  "if" expression ":" suite
              ( "elif" expression ":" suite )*
              ["else" ":" suite]

so you'd scan the expression syntax trees and complain if any included
the 'as' form.


From jeanpierreda at gmail.com  Mon Feb 16 12:00:52 2015
From: jeanpierreda at gmail.com (Devin Jeanpierre)
Date: Mon, 16 Feb 2015 03:00:52 -0800
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
Message-ID: <CABicbJLHKZA93z-K7-Mqpz2p+hVODiH1ey9jqeYxKyEJJti2kw@mail.gmail.com>

I always say this, and it's already been shot down ("That way lies
Coffeescript"), but it's even simpler to just allow newlines in
between the "with" and the ":", the same way we already allow them
between parentheses.

Adding () arbitrarily around things that aren't expressions, just
because the lexer is special-cased for them, has always confused me.

-- Devin

On Sun, Feb 15, 2015 at 1:52 PM, Neil Girdhar <mistersheik at gmail.com> wrote:
> It's great that multiple context managers can be sent to "with":
>
> with a as b, c as d, e as f:
>      suite
>
> If the context managers have a lot of text it's very hard to comply with PEP8
> without resorting to "\" continuations, which are proscribed by the Google
> style guide.
>
> Other statements like import and if support enclosing their arguments in
> parentheses to force aligned continuations.  Can we have the same for
> "with"?
>
> E.g.
>
> with (a as b,
>          c as d,
>          e as f):
>     suite
>
> This is a pretty minor change to the grammar and parser.
>
> Best,
>
> Neil
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/

From encukou at gmail.com  Mon Feb 16 13:39:40 2015
From: encukou at gmail.com (Petr Viktorin)
Date: Mon, 16 Feb 2015 13:39:40 +0100
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
In-Reply-To: <CAK9R32TNjkkJpxA6LC5GqU0digZv9TQZ=wuveA+CvjaxqBD50w@mail.gmail.com>
References: <CAK9R32TNjkkJpxA6LC5GqU0digZv9TQZ=wuveA+CvjaxqBD50w@mail.gmail.com>
Message-ID: <CA+=+wqAz6C5_QCLe2Cfc-uFi-5hWP7B-M9Q2yqi2K75UHDstcQ@mail.gmail.com>

On Sun, Feb 15, 2015 at 11:30 PM, Martin Teichmann
<lkb.teichmann at gmail.com> wrote:
> Hi again,
>
> staring at the code for hours, I just realized that there is
> a very simple yet powerful solution to the metaclass merging
> problem. The changes to be made are so simple there isn't
> even a need to change the documentation!
>
> The current algorithm to find a proper metaclass looks for
> a class all other metaclasses are a subtype of. For that
> it uses PyType_IsSubtype. We could simply change that
> to PyObject_IsSubclass. This would give a great hook into
> the system: we just need to intercept the __subclasshook__,
> and we have a hook into the metaclass algorithm!
>
> Backwards compatibility should not be a problem: how many
> metaclasses are out there whose metaclass (so the
> meta-meta-class) overwrites the __subclasshook__?

Well, none, hopefully.

> I changed the code appropriately, and also added a library
> that uses this hook. It defines a class (called Dominant for
> technical reasons, I'm waiting for suggestions for a better
> name), which acts as a baseclass for metaclasses. All
> metaclasses inheriting from this baseclass are combined
> automatically (the algorithm doing that could still be improved,
> but it works).

So, let me get this right: You defined a meta-metaclass that
automatically creates a metaclass subclassing all metaclasses of a
newly defined class. Except subclassing doesn't really work that way,
so you also need to hook into subclass checking.
Sounds to me that the implementation is hard to explain.

You're taking a complex thing and layering another complex thing on
top, with the goal of preserving full generality (or at least with the
goal of making it possible to preserve full generality). You've run
into a point where some hand-waving seems necessary:
- This is a suitable baseclass for all simple metaclasses -- Can you
actually define "simple"?
- "the algorithm doing that could still be improved, but it works" --
Works for what? What exactly does it do -- solve the problem of merging
any two metaclasses?
If you want to take this further (not that I recommend it), you'll at
least need some more precise wording here.


On the other hand, PEP 422 is not a replacement for metaclasses. It is
a *simpler way* to do some *subset* of what metaclasses do. I still
believe that is the way to go, even if the subset is initially smaller
than ideal.

> While I was already at it, I also added my variant of PEP 422,
> which fits already into this metaclass scheme.
>
> The code is at
> https://github.com/tecki/cpython/commits/metaclass-issubclass

Again, Python already has a metaclass that everyone is supposed to
use, namely `type`.
Adding another one can work for a backwards-compatibility shim on
PyPI, not for Python itself or its standard library.
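[Editorial note: the "metaclass merging problem" discussed above is the
familiar metaclass-conflict TypeError, which today has to be resolved by
hand. All class names below are hypothetical.]

```python
class MetaA(type):
    pass

class MetaB(type):
    pass

class A(metaclass=MetaA):
    pass

class B(metaclass=MetaB):
    pass

# Mixing bases whose metaclasses are unrelated fails:
try:
    class Broken(A, B):
        pass
except TypeError as exc:
    print(exc)  # "metaclass conflict: ..."

# The manual fix is to derive a combined metaclass yourself:
class MetaAB(MetaA, MetaB):
    pass

class C(A, B, metaclass=MetaAB):
    pass

assert type(C) is MetaAB
```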

From lkb.teichmann at gmail.com  Mon Feb 16 13:41:23 2015
From: lkb.teichmann at gmail.com (Martin Teichmann)
Date: Mon, 16 Feb 2015 13:41:23 +0100
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
Message-ID: <CAK9R32Sx+sqiYw+=Sho+fF1-heif6wrGP38Y7WVXUAwJBNjOgQ@mail.gmail.com>

Hi everyone,

just to toss in my two cents: I would add a completely different way
of line continuation. We could just say "if a line ends in an operator,
it is automatically continued (as if it was in parentheses).

That would allow things like:

something_long = something_else +
   something_more

this would remove the need for many parentheses. It's also not a problem,
as at least until now, a line cannot legally end in an operator (except
when continued).

Well, there is one exception: a line may indeed end in a , (comma).
So I would add another rule: only if a line starts with any of
assert, from, import or with, comma continues a line.
I was thinking about how that could be a problem. A comma at
the end of a line currently means we're creating a tuple. So no
problem with from and import. Also no problem with with, as
it would have to be followed by a colon, and tuples in with
statements also don't make much sense. With assert,
this is interestingly already illegal, writing

   assert False, 3,

gives a syntax error (why, actually?)

All of what I propose can probably be done in the lexer already.

Those rules sound rather arbitrary, but well, once used I guess
people won't even notice they exist.

Greetings

Martin

From luciano at ramalho.org  Mon Feb 16 13:53:10 2015
From: luciano at ramalho.org (Luciano Ramalho)
Date: Mon, 16 Feb 2015 10:53:10 -0200
Subject: [Python-ideas] A send() built-in function to drive coroutines
Message-ID: <CALxg4FUe2-KvzDoh2G=u2ZDiUoLrWFPg8+AFVFA8LNcE+guFXQ@mail.gmail.com>

Coroutines are useless until primed by a next(my_coro) call. There are
decorators to solve this by returning a primed coroutine. Such
decorators add another problem: now the user may be unsure whether a
specific coroutine from a third-party API should be primed or not
prior to use.

How about having a built-in function named send() with the signature
send(coroutine, value) that does the right thing: sends a value to the
coroutine, priming it if necessary. So instead of writing this:

>>> next(my_coro)
>>> my_coro.send(a_value)

You could always write this:

>>> send(my_coro, a_value)

At a high level, the behavior of send() would be like this:

def send(coroutine, value):
    if inspect.getgeneratorstate(coroutine) == 'GEN_CREATED':
        next(coroutine)
    coroutine.send(value)

Consider the above pseudo-code. Proper handling of other generator
states should be added.

This would make coroutines easier to use, and would make their basic
usage more consistent with the usage of plain generators using a
function syntax -- eg. next(my_gen) -- instead of method call syntax.

What do you think? If this has been discussed before, please send me a
link (I searched but came up empty).

Best,

Luciano



-- 
Luciano Ramalho
Twitter: @ramalhoorg

Professor em: http://python.pro.br
Twitter: @pythonprobr

From luciano at ramalho.org  Mon Feb 16 13:59:50 2015
From: luciano at ramalho.org (Luciano Ramalho)
Date: Mon, 16 Feb 2015 10:59:50 -0200
Subject: [Python-ideas] A send() built-in function to drive coroutines
In-Reply-To: <CALxg4FUe2-KvzDoh2G=u2ZDiUoLrWFPg8+AFVFA8LNcE+guFXQ@mail.gmail.com>
References: <CALxg4FUe2-KvzDoh2G=u2ZDiUoLrWFPg8+AFVFA8LNcE+guFXQ@mail.gmail.com>
Message-ID: <CALxg4FUCk6Oghcuc=9GKa7xSvab_XLwW+nW0OBk1gYyLQq1zXA@mail.gmail.com>

On Mon, Feb 16, 2015 at 10:53 AM, Luciano Ramalho <luciano at ramalho.org> wrote:
> At a high level, the behavior of send() would be like this:
>
> def send(coroutine, value):
>     if inspect.getgeneratorstate(coroutine) == 'GEN_CREATED':
>         next(coroutine)
>     coroutine.send(value)

In the pseudo-code above I forgot a return statement in the last line.
It should read:

def send(coroutine, value):
    if inspect.getgeneratorstate(coroutine) == 'GEN_CREATED':
        next(coroutine)
    return coroutine.send(value)

Best,

Luciano


-- 
Luciano Ramalho
Twitter: @ramalhoorg

Professor em: http://python.pro.br
Twitter: @pythonprobr

From steve at pearwood.info  Mon Feb 16 14:31:59 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Tue, 17 Feb 2015 00:31:59 +1100
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <CAK9R32Sx+sqiYw+=Sho+fF1-heif6wrGP38Y7WVXUAwJBNjOgQ@mail.gmail.com>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <CAK9R32Sx+sqiYw+=Sho+fF1-heif6wrGP38Y7WVXUAwJBNjOgQ@mail.gmail.com>
Message-ID: <20150216133158.GN2498@ando.pearwood.info>

On Mon, Feb 16, 2015 at 01:41:23PM +0100, Martin Teichmann wrote:
> Hi everyone,
> 
> just to toss in my two cents: I would add a completely different way
> of line continuation. We could just say "if a line ends in an operator,
> it is automatically continued (as if it were in parentheses)."
> 
> That would allow things like like:
> 
> something_long = something_else +
>    something_more

Implicit line continuations? I think the Zen of Python has something to 
say about explicitness versus implicitness :-)

This would mean that the interpreter would fail to recognise certain 
syntax errors until the next line, or at all:

x = 123 +
mylist.append(None)

currently reports a syntax error in the right place at compile-time:

  x = 123 +
           ^
SyntaxError: invalid syntax


With your suggestion, we wouldn't get a syntax error at all, but a 
runtime TypeError:

TypeError: unsupported operand type(s) for +: 'int' and 'NoneType'


On the other hand, this would give a rather surprising syntax error:

x = 123 +
import math

  import math
       ^
SyntaxError: invalid syntax



-- 
Steven

From rosuav at gmail.com  Mon Feb 16 14:47:35 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Tue, 17 Feb 2015 00:47:35 +1100
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <20150216133158.GN2498@ando.pearwood.info>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <CAK9R32Sx+sqiYw+=Sho+fF1-heif6wrGP38Y7WVXUAwJBNjOgQ@mail.gmail.com>
 <20150216133158.GN2498@ando.pearwood.info>
Message-ID: <CAPTjJmqfX4uxd3=9U1daHRd_iU1aFbMj3ZOj4gqRAGtX0EOUtQ@mail.gmail.com>

On Tue, Feb 17, 2015 at 12:31 AM, Steven D'Aprano <steve at pearwood.info> wrote:
> Implicit line continuations? I think the Zen of Python has something to
> say about explicitness versus implicitness :-)

We already have implicit line continuations. Currently, you can break
a line any time the expression is incomplete - if any sort of bracket
is open. The proposal (which has come up plenty of other times) is to
deem an expression incomplete if there's a trailing binary operator.
(Incidentally, the definition depends crucially on there being no
unary operators which follow their arguments, and collide with binary
operators. But since there aren't many unary/binary collisions anyway
- the only ones I can think of are "+" and "-" - that's unlikely ever
to be an issue.) The problem isn't explicitness vs implicitness,
but...

> This would mean that the interpreter would fail to recognise certain
> syntax errors until the next line, or at all:
>
> x = 123 +
> mylist.append(None)
>
> currently reports a syntax error in the right place at compile-time:
>
>   x = 123 +
>            ^
> SyntaxError: invalid syntax

... the inability to adequately detect errors. So, here's an
alternative: When using this notation to continue an expression across
a line break, the following line MUST be indented further than the
current one. Again, this is what most people will do anyway, but it
means that your example would be invalid syntax still, instead of
being valid and wrong. It would be an indentation error (unexpected
indent) following a syntax error (binary operator with no second
operand).

That said, I'm not sure how this would be implemented, as I don't know
enough about the grammar. I can explain it to a human no trouble ("you
can continue an expression onto a new line by simply ending with an
operator and indenting the next line"), but it might be hard to
explain to a computer. For one thing, this version of the proposal
requires that indentation matter to the parsing of an expression,
which currently it doesn't - you can open a bracket, then put the rest
of the expression flush left, if you want to.

ChrisA

From p.f.moore at gmail.com  Mon Feb 16 15:13:03 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Mon, 16 Feb 2015 14:13:03 +0000
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <CAPTjJmqfX4uxd3=9U1daHRd_iU1aFbMj3ZOj4gqRAGtX0EOUtQ@mail.gmail.com>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <CAK9R32Sx+sqiYw+=Sho+fF1-heif6wrGP38Y7WVXUAwJBNjOgQ@mail.gmail.com>
 <20150216133158.GN2498@ando.pearwood.info>
 <CAPTjJmqfX4uxd3=9U1daHRd_iU1aFbMj3ZOj4gqRAGtX0EOUtQ@mail.gmail.com>
Message-ID: <CACac1F-7YMEUNCGDfMjYDeXkwv9C2c_TbQ5_ZCCV5zDbdvt6DQ@mail.gmail.com>

On 16 February 2015 at 13:47, Chris Angelico <rosuav at gmail.com> wrote:
> That said, I'm not sure how this would be implemented, as I don't know
> enough about the grammar. I can explain it to a human no trouble ("you
> can continue an expression onto a new line by simply ending with an
> operator and indenting the next line"), but it might be hard to
> explain to a computer. For one thing, this version of the proposal
> requires that indentation matter to the parsing of an expression,
> which currently it doesn't - you can open a bracket, then put the rest
> of the expression flush left, if you want to.

It seems to me that we're talking here about replacing a slightly ugly
but explicit and already existing construct (backslash continuation)
with something that seems to be pretty difficult to agree on and/or
explain (depending on the proposal).

It might be worth remembering that the original post was asking for an
alternative to backslashes *because the Google guidelines prohibit
them*. Surely the obvious answer is to change the Google guidelines?

Paul

From rosuav at gmail.com  Mon Feb 16 15:14:40 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Tue, 17 Feb 2015 01:14:40 +1100
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <CACac1F-7YMEUNCGDfMjYDeXkwv9C2c_TbQ5_ZCCV5zDbdvt6DQ@mail.gmail.com>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <CAK9R32Sx+sqiYw+=Sho+fF1-heif6wrGP38Y7WVXUAwJBNjOgQ@mail.gmail.com>
 <20150216133158.GN2498@ando.pearwood.info>
 <CAPTjJmqfX4uxd3=9U1daHRd_iU1aFbMj3ZOj4gqRAGtX0EOUtQ@mail.gmail.com>
 <CACac1F-7YMEUNCGDfMjYDeXkwv9C2c_TbQ5_ZCCV5zDbdvt6DQ@mail.gmail.com>
Message-ID: <CAPTjJmoP5P2kNwB3oEJdXouqvH50XRbFjGN1x1WYpYyhbmapwQ@mail.gmail.com>

On Tue, Feb 17, 2015 at 1:13 AM, Paul Moore <p.f.moore at gmail.com> wrote:
> It might be worth remembering that the original post was asking for an
> alternative to backslashes *because the Google guidelines prohibit
> them*. Surely the obvious answer is to change the Google guidelines?

Even more so: The obvious answer is to *ignore* the Google guidelines :)

ChrisA

From breamoreboy at yahoo.co.uk  Mon Feb 16 16:00:45 2015
From: breamoreboy at yahoo.co.uk (Mark Lawrence)
Date: Mon, 16 Feb 2015 15:00:45 +0000
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <CAPTjJmoP5P2kNwB3oEJdXouqvH50XRbFjGN1x1WYpYyhbmapwQ@mail.gmail.com>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <CAK9R32Sx+sqiYw+=Sho+fF1-heif6wrGP38Y7WVXUAwJBNjOgQ@mail.gmail.com>
 <20150216133158.GN2498@ando.pearwood.info>
 <CAPTjJmqfX4uxd3=9U1daHRd_iU1aFbMj3ZOj4gqRAGtX0EOUtQ@mail.gmail.com>
 <CACac1F-7YMEUNCGDfMjYDeXkwv9C2c_TbQ5_ZCCV5zDbdvt6DQ@mail.gmail.com>
 <CAPTjJmoP5P2kNwB3oEJdXouqvH50XRbFjGN1x1WYpYyhbmapwQ@mail.gmail.com>
Message-ID: <mbt0n6$k4o$1@ger.gmane.org>

On 16/02/2015 14:14, Chris Angelico wrote:
> On Tue, Feb 17, 2015 at 1:13 AM, Paul Moore <p.f.moore at gmail.com> wrote:
>> It might be worth remembering that the original post was asking for an
>> alternative to backslashes *because the Google guidelines prohibit
>> them*. Surely the obvious answer is to change the Google guidelines?
>
> Even more so: The obvious answer is to *ignore* the Google guidelines :)
>
> ChrisA

In exactly the same way that I ignore PEP8?  We're only talking guides 
here, not cast in stone must do or else.

-- 
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence


From p.f.moore at gmail.com  Mon Feb 16 16:05:55 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Mon, 16 Feb 2015 15:05:55 +0000
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <mbt0n6$k4o$1@ger.gmane.org>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <CAK9R32Sx+sqiYw+=Sho+fF1-heif6wrGP38Y7WVXUAwJBNjOgQ@mail.gmail.com>
 <20150216133158.GN2498@ando.pearwood.info>
 <CAPTjJmqfX4uxd3=9U1daHRd_iU1aFbMj3ZOj4gqRAGtX0EOUtQ@mail.gmail.com>
 <CACac1F-7YMEUNCGDfMjYDeXkwv9C2c_TbQ5_ZCCV5zDbdvt6DQ@mail.gmail.com>
 <CAPTjJmoP5P2kNwB3oEJdXouqvH50XRbFjGN1x1WYpYyhbmapwQ@mail.gmail.com>
 <mbt0n6$k4o$1@ger.gmane.org>
Message-ID: <CACac1F_oBBYocJQEE+pK9NnAhy3bRS5voEDhvf9uWJ4C+JyqEg@mail.gmail.com>

On 16 February 2015 at 15:00, Mark Lawrence <breamoreboy at yahoo.co.uk> wrote:
> On 16/02/2015 14:14, Chris Angelico wrote:
>>
>> On Tue, Feb 17, 2015 at 1:13 AM, Paul Moore <p.f.moore at gmail.com> wrote:
>>>
>>> It might be worth remembering that the original post was asking for an
>>> alternative to backslashes *because the Google guidelines prohibit
>>> them*. Surely the obvious answer is to change the Google guidelines?
>>
>> Even more so: The obvious answer is to *ignore* the Google guidelines :)
>
> In exactly the same way that I ignore PEP8?  We're only talking guides here,
> not cast in stone must do or else.

The point is that any proposal here to change Python's syntax probably
needs a better justification than "the existing solution is prohibited
by Google's guidelines" :-)

Paul

From breamoreboy at yahoo.co.uk  Mon Feb 16 16:12:50 2015
From: breamoreboy at yahoo.co.uk (Mark Lawrence)
Date: Mon, 16 Feb 2015 15:12:50 +0000
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <CACac1F_oBBYocJQEE+pK9NnAhy3bRS5voEDhvf9uWJ4C+JyqEg@mail.gmail.com>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <CAK9R32Sx+sqiYw+=Sho+fF1-heif6wrGP38Y7WVXUAwJBNjOgQ@mail.gmail.com>
 <20150216133158.GN2498@ando.pearwood.info>
 <CAPTjJmqfX4uxd3=9U1daHRd_iU1aFbMj3ZOj4gqRAGtX0EOUtQ@mail.gmail.com>
 <CACac1F-7YMEUNCGDfMjYDeXkwv9C2c_TbQ5_ZCCV5zDbdvt6DQ@mail.gmail.com>
 <CAPTjJmoP5P2kNwB3oEJdXouqvH50XRbFjGN1x1WYpYyhbmapwQ@mail.gmail.com>
 <mbt0n6$k4o$1@ger.gmane.org>
 <CACac1F_oBBYocJQEE+pK9NnAhy3bRS5voEDhvf9uWJ4C+JyqEg@mail.gmail.com>
Message-ID: <mbt1dr$54v$1@ger.gmane.org>

On 16/02/2015 15:05, Paul Moore wrote:
> On 16 February 2015 at 15:00, Mark Lawrence <breamoreboy at yahoo.co.uk> wrote:
>> On 16/02/2015 14:14, Chris Angelico wrote:
>>>
>>> On Tue, Feb 17, 2015 at 1:13 AM, Paul Moore <p.f.moore at gmail.com> wrote:
>>>>
>>>> It might be worth remembering that the original post was asking for an
>>>> alternative to backslashes *because the Google guidelines prohibit
>>>> them*. Surely the obvious answer is to change the Google guidelines?
>>>
>>> Even more so: The obvious answer is to *ignore* the Google guidelines :)
>>
>> In exactly the same way that I ignore PEP8?  We're only talking guides here,
>> not cast in stone must do or else.
>
> The point is that any proposal here to change Python's syntax probably
> needs a better justification than "the existing solution is prohibited
> by Google's guidelines" :-)
>
> Paul
>

This has already been debated.  Unless somebody comes up with something 
new that is (presumably) relatively simple to understand and hence to 
implement it ain't gonna happen.  Why google's guidelines come into the 
picture I neither know nor care.

-- 
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence


From rosuav at gmail.com  Mon Feb 16 16:13:48 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Tue, 17 Feb 2015 02:13:48 +1100
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <mbt0n6$k4o$1@ger.gmane.org>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <CAK9R32Sx+sqiYw+=Sho+fF1-heif6wrGP38Y7WVXUAwJBNjOgQ@mail.gmail.com>
 <20150216133158.GN2498@ando.pearwood.info>
 <CAPTjJmqfX4uxd3=9U1daHRd_iU1aFbMj3ZOj4gqRAGtX0EOUtQ@mail.gmail.com>
 <CACac1F-7YMEUNCGDfMjYDeXkwv9C2c_TbQ5_ZCCV5zDbdvt6DQ@mail.gmail.com>
 <CAPTjJmoP5P2kNwB3oEJdXouqvH50XRbFjGN1x1WYpYyhbmapwQ@mail.gmail.com>
 <mbt0n6$k4o$1@ger.gmane.org>
Message-ID: <CAPTjJmo8zsybjCxFcuxvJHZc1y6Xd0mRujD1AMwNwmE=NOmCkg@mail.gmail.com>

On Tue, Feb 17, 2015 at 2:00 AM, Mark Lawrence <breamoreboy at yahoo.co.uk> wrote:
> In exactly the same way that I ignore PEP8?  We're only talking guides here,
> not cast in stone must do or else.

PEP 8 specifically copes with this problem by saying that backslashes
are sometimes necessary. But yes, there are times when you should
ignore a style guide.

ChrisA

From lkb.teichmann at gmail.com  Mon Feb 16 17:19:54 2015
From: lkb.teichmann at gmail.com (Martin Teichmann)
Date: Mon, 16 Feb 2015 17:19:54 +0100
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
In-Reply-To: <CA+=+wqAz6C5_QCLe2Cfc-uFi-5hWP7B-M9Q2yqi2K75UHDstcQ@mail.gmail.com>
References: <CAK9R32TNjkkJpxA6LC5GqU0digZv9TQZ=wuveA+CvjaxqBD50w@mail.gmail.com>
 <CA+=+wqAz6C5_QCLe2Cfc-uFi-5hWP7B-M9Q2yqi2K75UHDstcQ@mail.gmail.com>
Message-ID: <CAK9R32RovaEawxzFf03Kq1f+WgC4NOvhWhxjLOwjyaSt90Nr5A@mail.gmail.com>

Hi list,

> Again, Python already has a metaclass that everyone is supposed to
> use, namely `type`.

well, there is a problem here: people are indeed using other
metaclasses -- even the standard library does, e.g. for ABCs and enums.

And there is a problem here in python: once you do multiple inheritance,
and two classes have different metaclasses, there is no way of
programmatically determining an appropriate metaclass.

Sure, one can be of Thomas' opinion that doing so is a bad idea
anyways, but there are other people who think that writing and
combining metaclasses actually is a thing one can do.

Especially for simple cases I do think that metaclasses are a good
idea and there should be a way to tell python how to combine two
metaclasses.

I am trying to solve this problem. What I posted was just one simple
idea how to solve it. Another one is the idea I posted earlier, the
idea was to add metaclasses using __add__. People didn't like the
__add__, so I implemented that with a new method called merge,
it's here: https://github.com/tecki/cpython/commits/metaclass-merge

Instead of yelling at me that my solutions are too complicated,
it would be very helpful and constructive to answer the following
questions:

- is it desirable to have a programmatic way to combine metaclasses?
- if yes, how should that be done?

My idea is that yes, it is desirable to have a programmatic way of
combining metaclasses, and I have now proposed two ways how to
do that.
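For reference, the manual workaround that exists today -- writing the
combined metaclass by hand -- looks like this (all names here are
illustrative, not from any proposal):

```python
class MetaA(type):
    pass

class MetaB(type):
    pass

class A(metaclass=MetaA):
    pass

class B(metaclass=MetaB):
    pass

# A bare "class C(A, B): pass" raises TypeError: metaclass conflict,
# because neither MetaA nor MetaB is a subclass of the other.

class MetaAB(MetaA, MetaB):
    """The hand-written combination; this is the step a programmatic
    merge mechanism would automate."""

class C(A, B, metaclass=MetaAB):
    pass

print(type(C).__name__)  # MetaAB
```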

Greetings

Martin

From ron3200 at gmail.com  Mon Feb 16 17:58:35 2015
From: ron3200 at gmail.com (Ron Adam)
Date: Mon, 16 Feb 2015 11:58:35 -0500
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <54E180F2.2060209@canterbury.ac.nz>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <54E12DBE.80909@mrabarnett.plus.com> <854mqmd9eo.fsf@benfinney.id.au>
 <mbrc4f$evu$1@ger.gmane.org> <mbrp6m$83d$1@ger.gmane.org>
 <54E180F2.2060209@canterbury.ac.nz>
Message-ID: <mbt7js$jco$1@ger.gmane.org>



On 02/16/2015 12:32 AM, Greg Ewing wrote:
> I *think* this is unambiguous:
>
>     with a as b and c as d and e as f:
>        ...
>
> because the rule for a with statement is
>
> with_stmt: 'with' with_item (',' with_item)*  ':' suite
> with_item: test ['as' expr]
>
> and expr doesn't include 'and'.

That may be unambiguous to the parser, but it uses "and" in a new way.

     with (a as b) and (c as d) and (e as f):

The with statement would get only (e as f) if the "and"s were interpreted
normally -- that is, if (a as b) and (c as d) didn't raise an exception or
evaluate to 0 or False.



A possible way is to make "as" a valid expression on its own.

     (expr as name)  <-->  ("name", expr)

So "as" becomes a way to create (key, value) pairs.


And then change "with" to take a single (key, value) pair, or a tuple of 
(key, value) pairs, and then this will just work ... ;-)

    with (e1 as a,
          e2 as b,
          e3 as c):
        ...


The two issues here are how it would affect error messages for the "with" 
statement, and that import's use of "as" would still be a bit special.


Cheers,
    Ron

From steve at pearwood.info  Mon Feb 16 18:19:58 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Tue, 17 Feb 2015 04:19:58 +1100
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <CACac1F-7YMEUNCGDfMjYDeXkwv9C2c_TbQ5_ZCCV5zDbdvt6DQ@mail.gmail.com>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <CAK9R32Sx+sqiYw+=Sho+fF1-heif6wrGP38Y7WVXUAwJBNjOgQ@mail.gmail.com>
 <20150216133158.GN2498@ando.pearwood.info>
 <CAPTjJmqfX4uxd3=9U1daHRd_iU1aFbMj3ZOj4gqRAGtX0EOUtQ@mail.gmail.com>
 <CACac1F-7YMEUNCGDfMjYDeXkwv9C2c_TbQ5_ZCCV5zDbdvt6DQ@mail.gmail.com>
Message-ID: <20150216171958.GO2498@ando.pearwood.info>

On Mon, Feb 16, 2015 at 02:13:03PM +0000, Paul Moore wrote:
> On 16 February 2015 at 13:47, Chris Angelico <rosuav at gmail.com> wrote:
> > That said, I'm not sure how this would be implemented, as I don't know
> > enough about the grammar. I can explain it to a human no trouble ("you
> > can continue an expression onto a new line by simply ending with an
> > operator and indenting the next line"), but it might be hard to
> > explain to a computer. For one thing, this version of the proposal
> > requires that indentation matter to the parsing of an expression,
> > which currently it doesn't - you can open a bracket, then put the rest
> > of the expression flush left, if you want to.
> 
> It seems to me that we're talking here about replacing a slightly ugly
> but explicit and already existing construct (backslash continuation)

Ugly and fragile. Accidentally adding invisible whitespace after the \ 
breaks it.

(I've never quite understood why the parser doesn't just accept 
"backslash followed by zero or more tabs/spaces" instead of just 
backslash. But that's another question.)


> with something that seems to be pretty difficult to agree on and/or
> explain (depending on the proposal).
> 
> It might be worth remembering that the original post was asking for an
> alternative to backslashes *because the Google guidelines prohibit
> them*. Surely the obvious answer is to change the Google guidelines?

Allowing parens in with clauses has been requested many times, because 
none of the existing ways of dealing with long with statements are 
satisfactory. It's not just a "foolish consistency" (quoting PEP 8) with 
a style guide, but that using backslashes is *ugly and fragile* and 
other solutions are uglier:

with open("/tmp/a", "w") as a, open(
          "/tmp/b", "w") as b, open(
          "/tmp/c", "w") as c:
    pass


None of the existing solutions is really satisfactory, so it is not 
surprising that people keep asking to be able to write:

with (open("/tmp/a", "w") as a, 
      open("/tmp/b", "w") as b, 
      open("/tmp/c", "w") as c
      ):
    pass


Nothing to do with Google's style guide at all.



-- 
Steve

From Nikolaus at rath.org  Mon Feb 16 18:57:07 2015
From: Nikolaus at rath.org (Nikolaus Rath)
Date: Mon, 16 Feb 2015 09:57:07 -0800
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com> (Neil
 Girdhar's message of "Sun, 15 Feb 2015 13:52:42 -0800 (PST)")
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
Message-ID: <87k2zhpwng.fsf@thinkpad.rath.org>

On Feb 15 2015, Neil Girdhar <mistersheik-Re5JQEeQqe8AvxtiuMwx3w at public.gmane.org> wrote:
> with (a as b,
>          c as d,
>          e as f):
>     suite

As a side note, you can also use ExitStack to avoid having to use line
continuations:

with ExitStack() as cb:
    b = cb.enter_context(a)
    d = cb.enter_context(c)
    f = cb.enter_context(e)

Way more characters to type, but it looks better than \.
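A self-contained version of that pattern (the temp-file setup is purely
for illustration):

```python
import os
import tempfile
from contextlib import ExitStack

tmpdir = tempfile.mkdtemp()
paths = [os.path.join(tmpdir, name) for name in ('a', 'b', 'c')]

with ExitStack() as stack:
    # enter_context() calls each file's __enter__ and registers its
    # __exit__, so all three files are closed when the with block ends,
    # in reverse order of entry
    files = [stack.enter_context(open(p, 'w')) for p in paths]
    for f in files:
        f.write('hello\n')

assert all(f.closed for f in files)
```

Note that enter_context() is the method that mirrors the "as" bindings;
push() only registers an exit callback without calling __enter__.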


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

             "Time flies like an arrow, fruit flies like a Banana."

From luciano at ramalho.org  Mon Feb 16 19:03:04 2015
From: luciano at ramalho.org (Luciano Ramalho)
Date: Mon, 16 Feb 2015 16:03:04 -0200
Subject: [Python-ideas] A send() built-in function to drive coroutines
In-Reply-To: <CALxg4FUCk6Oghcuc=9GKa7xSvab_XLwW+nW0OBk1gYyLQq1zXA@mail.gmail.com>
References: <CALxg4FUe2-KvzDoh2G=u2ZDiUoLrWFPg8+AFVFA8LNcE+guFXQ@mail.gmail.com>
 <CALxg4FUCk6Oghcuc=9GKa7xSvab_XLwW+nW0OBk1gYyLQq1zXA@mail.gmail.com>
Message-ID: <CALxg4FXfL2uzurarFctd_Tdfz_4_6kuokDQQX7+zxytSGFkj4w@mail.gmail.com>

My previous pseudo-code lost the result of next(coroutine) -- which
may or may not be relevant.

I amended the send() code and wrote a few examples of its use in a gist:

https://gist.github.com/ramalho/c1f7df10308a4bd67198

Now the user can decide whether or not to ignore the result of the
priming next() call.

Best,

Luciano


On Mon, Feb 16, 2015 at 10:59 AM, Luciano Ramalho <luciano at ramalho.org> wrote:
> On Mon, Feb 16, 2015 at 10:53 AM, Luciano Ramalho <luciano at ramalho.org> wrote:
>> At a high level, the behavior of send() would be like this:
>>
>> def send(coroutine, value):
>>     if inspect.getgeneratorstate(coroutine) == 'GEN_CREATED':
>>         next(coroutine)
>>     coroutine.send(value)
>
> In the pseudo-code above I forgot a return statement in the last line.
> It should read:
>
> def send(coroutine, value):
>     if inspect.getgeneratorstate(coroutine) == 'GEN_CREATED':
>         next(coroutine)
>     return coroutine.send(value)
>
> Best,
>
> Luciano
>
>
> --
> Luciano Ramalho
> Twitter: @ramalhoorg
>
> Professor em: http://python.pro.br
> Twitter: @pythonprobr



-- 
Luciano Ramalho
Twitter: @ramalhoorg

Professor em: http://python.pro.br
Twitter: @pythonprobr

From tjreedy at udel.edu  Mon Feb 16 21:00:16 2015
From: tjreedy at udel.edu (Terry Reedy)
Date: Mon, 16 Feb 2015 15:00:16 -0500
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <20150216011903.GL2498@ando.pearwood.info>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <mbrdi4$3qk$1@ger.gmane.org> <20150216011903.GL2498@ando.pearwood.info>
Message-ID: <mbti8h$ok7$1@ger.gmane.org>

On 2/15/2015 8:19 PM, Steven D'Aprano wrote:
> On Sun, Feb 15, 2015 at 07:27:47PM -0500, Terry Reedy wrote:
>> On 2/15/2015 4:52 PM, Neil Girdhar wrote:
>>> It's great that multiple context managers can be sent to "with":
>>>
>>> with a as b, c as d, e as f:
>>>       suite
>>>
>>> If the context mangers have a lot of text it's very hard to comply with
>>> PEP8 without resorting to "\" continuations, which are proscribed by the
>>> Google style guide.
> ...^^^^^^^^^^^^^^^^^^

Yes, I missed that.  Stylistically, the switch from PEP8 to Google is 
not nice.

>> Untrue.  " Backslashes may still be appropriate at times. For example,
>> long, multiple with -statements cannot use implicit continuation, so
>> backslashes are acceptable:
>>
>> with open('/path/to/some/file/you/want/to/read') as file_1, \
>>       open('/path/to/some/file/being/written', 'w') as file_2:
>>      file_2.write(file_1.read())"
>
> Isn't that a quote from PEP 8, rather than Google's style guide?

Yes
>
>>> Other statements like import and if support enclosing their arguments in
>>> parentheses to force aligned continuations.  Can we have the same for
>>> "with"?
>>
>> No. Considered and rejected because it would not be trivial.

'not trivial' is a very short summary of previous discussions, already 
alluded to by another responder.

> Many things are not trivial. Surely there should be a better reason than
> *just* "it is hard to do" to reject something?
>
> Do you have a reference for this being rejected?

My memory of discussions here or pydev. I believe the explicit PEP8 
quote came from those discussions.  Guido can confirm or deny when back 
from time off from python.

[snip]

> I don't know enough about Python's parser to do more than guess, but I
> guess that somebody may need to actually try extending the grammar to
> support this and see what happens?

That is certainly the way to disprove 'parser cannot handle this'.

-- 
Terry Jan Reedy


From mistersheik at gmail.com  Sun Feb 15 23:07:57 2015
From: mistersheik at gmail.com (Neil Girdhar)
Date: Sun, 15 Feb 2015 14:07:57 -0800 (PST)
Subject: [Python-ideas] PEP 485: A Function for testing approximate
 equality
In-Reply-To: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
Message-ID: <f41ecfd2-743e-40e7-bdef-043ff37ac19e@googlegroups.com>

I'm +1 on the idea, +1 on the name, but -1 on symmetry.

I'm glad that you included "other approaches" (what people actually do in 
the field).  However, regarding your argument "is it that hard to just type 
it in" -- the whole point of this method is to give a self-commenting line 
that describes what the writer means.  Doesn't it make sense to say:  is *my 
value* close to *some true value*?  If you wanted a symmetric test in 
English, you might say "are these values close to each other"?
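To make the distinction concrete, here is a sketch of both variants
(function names are mine, not from the PEP):

```python
def isclose_symmetric(a, b, rel_tol=1e-9, abs_tol=0.0):
    # symmetric: tolerance scales with the larger of the two magnitudes
    return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)

def isclose_asymmetric(actual, expected, rel_tol=1e-9, abs_tol=0.0):
    # asymmetric: tolerance scales with the expected ("true") value only
    return abs(actual - expected) <= max(rel_tol * abs(expected), abs_tol)

# With a large tolerance the choice matters:
print(isclose_symmetric(9.0, 10.0, rel_tol=0.1))   # True:  |9-10| <= 0.1*10
print(isclose_asymmetric(10.0, 9.0, rel_tol=0.1))  # False: |10-9| >  0.1*9
```

For small tolerances the two agree on essentially all inputs, which is the
point made later in the PEP discussion.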

Best,

Neil

On Thursday, February 5, 2015 at 12:14:25 PM UTC-5, Chris Barker - NOAA 
Federal wrote:
>
> Hi folks,
>
> Time for Take 2 (or is that take 200?)
>
> No, I haven't dropped this ;-) After the lengthy thread, I went through 
> and tried to take into account all the thoughts and comments. The result is 
> a PEP with a bunch more explanation, and some slightly different decisions.
>
> TL;DR:
>
> Please take a look and express your approval or disapproval (+1, +0, -0, 
> -1, or whatever). Please keep in mind that this has been discussed a lot, 
> so try to not to re-iterate what's already been said. And I'd really like 
> it if you only gave -1, thumbs down, etc, if you really think this is worse 
> than not having anything at all, rather than you'd just prefer something a 
> little different. Also, if you do think this version is worse than nothing, 
> but a different choice or two could change that, then let us know what -- 
> maybe we can reach consensus on something slightly different.
>
> Also keep in mind that the goal here (in the words of Nick Coghlan):
> """
> the key requirement here should be "provide a binary float
> comparison function that is significantly less wrong than the current
> 'a == b'"
> """
> No need to split hairs.
>
> https://www.python.org/dev/peps/pep-0485/
>
> Full PEP below, and in gitHub (with example code, tests, etc.) here:
>
> https://github.com/PythonCHB/close_pep
>
> Full Detail:
> =========
>
> Here are the big changes:
>
>  * Symmetric test. Essentially, whether you want the symmetric or 
> asymmetric test depends on exactly what question you are asking, but I 
> think we need to pick one. It makes little to no difference for most use 
> cases (see below) , but I was persuaded by:
>
>   - Principle of least surprise -- equality is symmetric, shouldn't 
> "closeness" be too?
>   - People don't want to have to remember what order to put the arguments 
> --even if it doesn't matter, you have to think about whether it matters. A 
> symmetric test doesn't require that. 
>   - There are times when the order of comparison is not known -- for 
> example if the users wants to test a bunch of values all against each-other.
>
> On the other hand -- the asymmetric test is a better option only when you 
> are specifically asking the question: is this (computed) value within a 
> precise tolerance of another(known) value, and that tolerance is fairly 
> large (i.e 1% - 10%). While this was brought up on this thread, no one had 
> an actual use-case like that. And is it really that hard to write:
>
> abs(known - computed) <= 0.1 * expected
>
>
> NOTE: that my example code has a flag with which you can pick which test 
> to use -- I did that for experimentation, but I think we want a simple API 
> here. As soon as you provide that option people need to concern themselves 
> with what to use.
>
> * Defaults: I set the default relative tolerance to 1e-9 -- because that 
> is a little more than half the precision available in a Python float, and 
> it's the largest tolerance for which ALL the possible tests result in 
> exactly the same result for all possible float values (see the PEP for the 
> math). The default for absolute tolerance is 0.0. Better for people to get 
> a failed test with a check for zero right away than accidentally use a 
> totally inappropriate value.
>
> Contentious Issues
> ===============
>
> * The Symmetric vs. Asymmetric thing -- some folks seemed to have strong 
> ideas about that, but I hope we can all agree that something is better than 
> nothing, and symmetric is probably the least surprising. And take a look at 
> the math in the PEP -- for small tolerances, which is the likely case for 
> most tests -- it literally makes no difference at all which test is used.
>
> * Default for abs_tol -- or even whether to have it or not. This is a 
> tough one -- there are a lot of use-cases for people testing against zero, 
> so it's a pity not to have it do something reasonable by default in that 
> case. However, there really is no reasonable universal default.
>
> As Guido said:
>  """
> For someone who for whatever reason is manipulating quantities that are in 
> the range of 1e-100,
> 1e-12 is about as large as infinity.
> """
> And testing against zero really requires a absolute tolerance test, which 
> is not hard to simply write:
>
> abs(my_val) <= some_tolerance
>
> and is covered by the unittest assert already.
>
> So why include an absolute_tolerance at all? -- Because this is likely to 
> be used inside a comprehension or in the a unittest sequence comparison, so 
> it is very helpful to be able to test a bunch of values some of which may 
> be zero, all with a single function with a single set of parameters. And a 
> similar approach has proven to be useful for numpy and the statistics test 
> module. And again, the default is abs_tol=0.0 -- if the user sticks with 
> defaults, they won't get bitten.
>
> * Also -- not really contentious (I hope) but I left the details out about 
> how the unittest assertion would work. That's because I don't really use 
> unittest -- I'm hoping someone else will write that. If it comes down to 
> it, I can do it -- it will probably look a lot like the existing sequence 
> comparing assertions.
>
> OK -- I think that's it.
>
> Let's see if we can drive this home.
>
> -Chris
>
>
> PEP: 485
> Title: A Function for testing approximate equality
> Version: $Revision$
> Last-Modified: $Date$
> Author: Christopher Barker <Chris.... at noaa.gov <javascript:>>
> Status: Draft
> Type: Standards Track
> Content-Type: text/x-rst
> Created: 20-Jan-2015
> Python-Version: 3.5
> Post-History:
>
>
> Abstract
> ========
>
> This PEP proposes the addition of a function to the standard library
> that determines whether one value is approximately equal or "close" to
> another value. It is also proposed that an assertion be added to the
> ``unittest.TestCase`` class to provide easy access for those using
> unittest for testing.
>
>
> Rationale
> =========
>
> Floating point values contain limited precision, which results in
> their being unable to exactly represent some values, and in error
> accumulating over repeated computations.  As a result, it is common
> advice to only use an equality comparison in very specific situations.
> Often an inequality comparison fits the bill, but there are times
> (often in testing) where the programmer wants to determine whether a
> computed value is "close" to an expected value, without requiring them
> to be exactly equal. This is common enough, particularly in testing,
> and not always obvious how to do it, so it would be a useful addition to
> the standard library.
>
>
> Existing Implementations
> ------------------------
>
> The standard library includes the ``unittest.TestCase.assertAlmostEqual``
> method, but it:
>
> * Is buried in the unittest.TestCase class
>
> * Is an assertion, so it can't easily be used as a general test at the
>   command line, etc.
>
> * Is an absolute difference test. Often the measure of difference
>   requires, particularly for floating point numbers, a relative error,
>   i.e. "Are these two values within x% of each other?", rather than an
>   absolute error, particularly when the magnitude of the values is
>   unknown a priori.
>
> The numpy package has the ``allclose()`` and ``isclose()`` functions,
> but they are only available with numpy.
>
> The statistics module's test suite includes its own such
> implementation, used for its unit tests.
>
> One can also find discussion and sample implementations on Stack
> Overflow and other help sites.
>
> Many other non-Python systems provide such a test, including the Boost C++
> library and the APL language (reference?).
>
> These existing implementations indicate that this is a common need and
> not trivial to write oneself, making it a candidate for the standard
> library.
>
>
> Proposed Implementation
> =======================
>
> NOTE: this PEP is the result of an extended discussion on the
> python-ideas list [1]_.
>
> The new function will have the following signature::
>
>   is_close(a, b, rel_tolerance=1e-9, abs_tolerance=0.0)
>
> ``a`` and ``b``: the two values to be tested for closeness
>
> ``rel_tolerance``: is the relative tolerance -- it is the amount of
> error allowed, relative to the magnitude of a and b. For example, to
> set a tolerance of 5%, pass rel_tolerance=0.05. The default tolerance
> is 1e-9, which assures that the two values are the same within about
> 9 decimal digits.
>
> ``abs_tolerance``: is a minimum absolute tolerance level -- useful for
> comparisons near zero.
>
> Modulo error checking, etc, the function will return the result of::
>
>   abs(a-b) <= max( rel_tolerance * min(abs(a), abs(b)), abs_tolerance )
>
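For illustration, a minimal Python sketch of that logic (hypothetical, not the reference implementation; it also includes the non-finite special cases described in the next section):

```python
import math

def is_close(a, b, rel_tolerance=1e-9, abs_tolerance=0.0):
    """Sketch of the proposed test (Boost "strong" formulation)."""
    if a == b:
        return True              # fast path; also makes inf close to inf
    if math.isinf(a) or math.isinf(b):
        return False             # a remaining inf differs infinitely
    diff = abs(a - b)
    # If either input is NaN, diff is NaN and the comparison is False.
    return diff <= max(rel_tolerance * min(abs(a), abs(b)), abs_tolerance)
```

With the default abs_tolerance=0.0, comparisons against exactly zero always fail, as the PEP discusses below.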
>
> Handling of non-finite numbers
> ------------------------------
>
> The IEEE 754 special values of NaN, inf, and -inf will be handled
> according to IEEE rules. Specifically, NaN is not considered close to
> any other value, including NaN. inf and -inf are only considered close
> to themselves.
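These rules follow from IEEE 754 comparison semantics, which a naive implementation has to handle explicitly (a quick illustration):

```python
import math

nan, inf = float('nan'), float('inf')

# NaN compares unequal to everything, including itself:
assert nan != nan

# inf - inf is NaN, so a naive abs(a - b) on two infinities yields NaN
# rather than 0 -- hence the need for special-casing infinities:
assert math.isnan(inf - inf)

# inf compares equal to itself, so "inf is close to inf" falls out of
# an ordinary equality fast path:
assert inf == inf
```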
>
>
> Non-float types
> ---------------
>
> The primary use-case is expected to be floating point numbers.
> However, users may want to compare other numeric types similarly. In
> theory, it should work for any type that supports ``abs()``,
> comparisons, and subtraction.  The code will be written and tested to
> accommodate these types:
>
> * ``Decimal``: for Decimal, the tolerance must be set to a Decimal type.
>
> * ``int``
>
> * ``Fraction``
>
> * ``complex``: for complex, ``abs(z)`` will be used for scaling and
>   comparison.
>
>
> Behavior near zero
> ------------------
>
> Relative comparison is problematic if either value is zero. By
> definition, no value is small relative to zero. And computationally,
> if either value is zero, the difference is the absolute value of the
> other value, and the computed absolute tolerance will be rel_tolerance
> times that value. rel_tolerance is always less than one, so the
> difference will never be less than the tolerance.
>
> However, while mathematically correct, there are many use cases where
> a user will need to know if a computed value is "close" to zero. This
> calls for an absolute tolerance test. If the user needs to call this
> function inside a loop or comprehension, where some, but not all, of
> the expected values may be zero, it is important that both a relative
> tolerance and absolute tolerance can be tested for with a single
> function with a single set of parameters.
>
> There is a similar issue if the two values to be compared straddle zero:
> if a is approximately equal to -b, then a and b will never be computed
> as "close".
>
> To handle this case, an optional parameter, ``abs_tolerance``, can be
> used to set a minimum tolerance to be applied when the computed
> absolute tolerance is very small or zero. That is, the values will
> always be considered close if the difference between them is less
> than the abs_tolerance.
>
> The default absolute tolerance value is set to zero because there is
> no value that is appropriate for the general case. It is impossible to
> know an appropriate value without knowing the likely values expected
> for a given use case. If all the values tested are on the order of
> one, then a value of about 1e-8 might be appropriate, but that would
> be far too large if the expected values are on the order of 1e-12 or
> smaller.
>
> Any non-zero default might result in users' tests passing totally
> inappropriately. If, on the other hand, a test against zero fails the
> first time with defaults, a user will be prompted to select an
> appropriate value for the problem at hand in order to get the test to
> pass.
>
> NOTE: the author of this PEP has resolved to go back over many of
> his tests that use the numpy ``allclose()`` function, which provides
> a default abs_tolerance, and make sure that the default value is
> appropriate.
>
> If the user sets the rel_tolerance parameter to 0.0, then only the
> absolute tolerance will affect the result. While not the goal of the
> function, it does allow it to be used as a purely absolute tolerance
> check as well.
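As a sketch of the loop/comprehension use case this enables (using a hypothetical helper matching the proposed signature, without the non-finite handling):

```python
def is_close(a, b, rel_tolerance=1e-9, abs_tolerance=0.0):
    # minimal sketch of the proposed test (no NaN/inf handling)
    return abs(a - b) <= max(rel_tolerance * min(abs(a), abs(b)),
                             abs_tolerance)

expected = [1.0, 0.0, -2.5]
computed = [1.0 + 1e-12, 3e-13, -2.5 - 1e-12]

# One call with one set of parameters handles the whole sequence, even
# though one expected value is zero -- abs_tolerance covers that case:
assert all(is_close(e, c, abs_tolerance=1e-9)
           for e, c in zip(expected, computed))

# With the default abs_tolerance=0.0 the zero comparison fails, as the
# PEP intends -- the user must choose an appropriate scale:
assert not is_close(0.0, 3e-13)
```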
>
> unittest assertion
> -------------------
>
> [need text here]
>
> implementation
> --------------
>
> A sample implementation is available (as of Jan 22, 2015) on GitHub:
>
> https://github.com/PythonCHB/close_pep/blob/master
>
> This implementation has a flag that lets the user select which
> relative tolerance test to apply -- this PEP does not suggest that
> that be retained, but rather that the strong test be selected.
>
> Relative Difference
> ===================
>
> There are essentially two ways to think about how close two numbers
> are to each other:
>
> Absolute difference: simply ``abs(a-b)``
>
> Relative difference: ``abs(a-b)/scale_factor`` [2]_.
>
> The absolute difference is trivial enough that this proposal focuses
> on the relative difference.
>
> Usually, the scale factor is some function of the values under
> consideration, for instance:
>
> 1) The absolute value of one of the input values
>
> 2) The maximum absolute value of the two
>
> 3) The minimum absolute value of the two
>
> 4) The absolute value of the arithmetic mean of the two
>
> These lead to the following possibilities for determining if two
> values, a and b, are close to each other.
>
> 1) ``abs(a-b) <= tol*abs(a)``
>
> 2) ``abs(a-b) <= tol * max( abs(a), abs(b) )``
>
> 3) ``abs(a-b) <= tol * min( abs(a), abs(b) )``
>
> 4) ``abs(a-b) <= tol * abs(a + b)/2``
>
> NOTE: (2) and (3) can also be written as:
>
> 2) ``(abs(a-b) <= tol*abs(a)) or (abs(a-b) <= tol*abs(b))``
>
> 3) ``(abs(a-b) <= tol*abs(a)) and (abs(a-b) <= tol*abs(b))``
>
> (Boost refers to these as the "weak" and "strong" formulations [3]_)
> These can be a tiny bit more computationally efficient, and thus are
> used in the example code.
>
> Each of these formulations can lead to slightly different results.
> However, if the tolerance value is small, the differences are quite
> small. In fact, often less than available floating point precision.
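That agreement at small tolerances is easy to spot-check numerically (a sketch; formulation (4) here takes the absolute value of the mean):

```python
def candidates(a, b, tol):
    """Evaluate all four closeness formulations at once."""
    diff = abs(a - b)
    return (diff <= tol * abs(a),                  # (1) asymmetric
            diff <= tol * max(abs(a), abs(b)),     # (2) "weak"
            diff <= tol * min(abs(a), abs(b)),     # (3) "strong"
            diff <= tol * abs(a + b) / 2)          # (4) mean

# A large (10%) tolerance: the formulations disagree.
assert candidates(10.0, 9.0, 0.1) == (True, True, False, False)

# At a small tolerance such as the proposed 1e-9 default, they agree
# across widely varying magnitudes:
for x in (1e-8, 1.0, 1e8):
    assert all(candidates(x, x * (1 + 1e-10), 1e-9))
    assert not any(candidates(x, x * (1 + 1e-8), 1e-9))
```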
>
> How much difference does it make?
> ---------------------------------
>
> When selecting a method to determine closeness, one might want to know
> how much of a difference it could make to use one test or the other
> -- i.e. how many values are there (or what range of values) that will
> pass one test, but not the other.
>
> The largest difference is between options (2) and (3) where the
> allowable absolute difference is scaled by either the larger or
> smaller of the values.
>
> Define ``delta`` to be the difference between the allowable absolute
> tolerance defined by the larger value and that defined by the smaller
> value. That is, the amount that the two input values need to be
> different in order to get a different result from the two tests.
> ``tol`` is the relative tolerance value.
>
> Assume that ``a`` is the larger value and that both ``a`` and ``b``
> are positive, to make the analysis a bit easier. ``delta`` is
> therefore::
>
>   delta = tol * (a-b)
>
>
> or::
>
>   delta / tol = (a-b)
>
>
> The largest absolute difference that would pass the test, ``(a-b)``,
> equals the tolerance times the larger value::
>
>   (a-b) = tol * a
>
>
> Substituting into the expression for delta::
>
>   delta / tol = tol * a
>
>
> so::
>
>   delta = tol**2 * a
>
>
> For example, for ``a = 10``, ``b = 9``, ``tol = 0.1`` (10%):
>
> maximum tolerance ``tol * a == 0.1 * 10 == 1.0``
>
> minimum tolerance ``tol * b == 0.1 * 9.0 == 0.9``
>
> delta = ``1.0 - 0.9 = 0.1``  or  ``tol**2 * a = 0.1**2 * 10 = 0.1``
>
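A quick check of the worked example (both expressions come out to 0.1, up to float rounding):

```python
a, b, tol = 10.0, 9.0, 0.1

max_tol = tol * a            # tolerance scaled by the larger value
min_tol = tol * b            # tolerance scaled by the smaller value
delta = max_tol - min_tol    # gap between the two allowed differences

assert max_tol == 1.0
assert abs(min_tol - 0.9) < 1e-15

# delta equals tol**2 * a, as derived above (up to float rounding):
assert abs(delta - tol**2 * a) < 1e-12
```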
> The absolute difference between the maximum and minimum tolerance
> tests in this case could be substantial. However, the primary use
> case for the proposed function is testing the results of computations.
> In that case a relative tolerance is likely to be selected of much
> smaller magnitude.
>
> For example, a relative tolerance of ``1e-8`` is about half the
> precision available in a python float. In that case, the difference
> between the two tests is ``1e-8**2 * a`` or ``1e-16 * a``, which is
> close to the limit of precision of a python float. If the relative
> tolerance is set to the proposed default of 1e-9 (or smaller), the
> difference between the two tests will be lost to the limits of
> precision of floating point. That is, each of the four methods will
> yield exactly the same results for all values of a and b.
>
> In addition, in common use, tolerances are defined to 1 significant
> figure -- that is, 1e-8 is specifying about 8 decimal digits of
> accuracy. So the difference between the various possible tests is well
> below the precision to which the tolerance is specified.
>
>
> Symmetry
> --------
>
> A relative comparison can be either symmetric or non-symmetric. For a
> symmetric algorithm:
>
> ``is_close_to(a,b)`` is always the same as ``is_close_to(b,a)``
>
> If a relative closeness test uses only one of the values (such as (1)
> above), then the result is asymmetric, i.e. is_close_to(a,b) is not
> necessarily the same as is_close_to(b,a).
>
> Which approach is most appropriate depends on what question is being
> asked. If the question is: "are these two numbers close to each
> other?", there is no obvious ordering, and a symmetric test is most
> appropriate.
>
> However, if the question is: "Is the computed value within x% of this
> known value?", then it is appropriate to scale the tolerance to the
> known value, and an asymmetric test is most appropriate.
>
> From the previous section, it is clear that either approach would
> yield the same or similar results in the common use cases. In that
> case, the goal of this proposal is to provide a function that is least
> likely to produce surprising results.
>
> The symmetric approach provides an appealing consistency -- it
> mirrors the symmetry of equality, and is less likely to confuse
> people. A symmetric test also relieves the user of the need to think
> about the order in which to set the arguments.  It was also pointed
> out that there may be some cases where the order of evaluation may not
> be well defined, for instance in the case of comparing a set of values
> all against each other.
>
> There may be cases when a user does need to know that a value is
> within a particular range of a known value. In that case, it is easy
> enough to simply write the test directly::
>
>   if a-b <= tol*a:
>
> (assuming a > b in this case). There is little need to provide a
> function for this particular case.
>
> This proposal uses a symmetric test.
>
> Which symmetric test?
> ---------------------
>
> There are three symmetric tests considered:
>
> The case that uses the arithmetic mean of the two values requires that
> the values either be added together before dividing by 2, which could
> result in extra overflow to inf for very large numbers, or that each
> value be divided by two before being added together, which could
> result in underflow to zero for very small numbers. This effect
> would only occur at the very limit of float values, but it was decided
> there was no benefit to this method worth reducing the range of
> functionality.
>
> This leaves the Boost "weak" test (2), which uses the larger value to
> scale the tolerance, and the Boost "strong" test (3), which uses the
> smaller of the values to scale the tolerance. For small tolerances,
> they yield the same result, but this proposal uses the Boost "strong"
> test: it is symmetric and provides a slightly stricter criterion
> for closeness.
>
>
> Defaults
> ========
>
> Default values are required for the relative and absolute tolerance.
>
> Relative Tolerance Default
> --------------------------
>
> The relative tolerance required for two values to be considered
> "close" is entirely use-case dependent. Nevertheless, the relative
> tolerance needs to be less than 1.0, and greater than 1e-16
> (approximate precision of a python float). The value of 1e-9 was
> selected because it is the largest relative tolerance for which the
> various possible methods will yield the same result, and it is also
> about half of the precision available to a python float. In the
> general case, a good numerical algorithm is not expected to lose more
> than about half of available digits of accuracy, and if a much larger
> tolerance is acceptable, the user should be considering the proper
> value in that case. Thus 1e-9 is expected to "just work" for many
> cases.
>
> Absolute tolerance default
> --------------------------
>
> The absolute tolerance value will be used primarily for comparing to
> zero. The absolute tolerance required to determine if a value is
> "close" to zero is entirely use-case dependent. There are also
> essentially no bounds to the useful range -- expected values could
> conceivably be anywhere within the limits of a python float.  Thus a
> default of 0.0 is selected.
>
> If, for a given use case, a user needs to compare to zero, the test
> will be guaranteed to fail the first time, and the user can select an
> appropriate value.
>
> It was suggested that comparing to zero is, in fact, a common use case
> (evidence suggests that the numpy functions are often used with zero).
> In this case, it would be desirable to have a "useful" default. Values
> around 1e-8 were suggested, being about half of floating point
> precision for values of around value 1.
>
> However, to quote The Zen: "In the face of ambiguity, refuse the
> temptation to guess." Guessing that users will most often be concerned
> with values close to 1.0 would lead to spurious passing tests when used
> with smaller values -- this is potentially more damaging than
> requiring the user to thoughtfully select an appropriate value.
>
>
> Expected Uses
> =============
>
> The primary expected use case is various forms of testing -- "are the
> results computed near what I expect?" This sort of test may or may
> not be part of a formal unit testing suite. Such testing could be
> used one-off at the command line, in an IPython notebook, as part of
> doctests, or in simple asserts in an ``if __name__ == "__main__"``
> block.
>
> The proposed unittest.TestCase assertion would of course be used in
> unit testing.
>
> It would also be an appropriate function to use for the termination
> criteria for a simple iterative solution to an implicit function::
>
>     guess = something
>     while True:
>         new_guess = implicit_function(guess, *args)
>         if is_close(new_guess, guess):
>             break
>         guess = new_guess
>
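A runnable (hypothetical) instance of that pattern, using a minimal sketch of the proposed function and the classic fixed point of cos:

```python
import math

def is_close(a, b, rel_tolerance=1e-9, abs_tolerance=0.0):
    # minimal sketch of the proposed function (no NaN/inf handling)
    return abs(a - b) <= max(rel_tolerance * min(abs(a), abs(b)),
                             abs_tolerance)

# Iterate x = cos(x) until successive guesses are close:
guess = 1.0
while True:
    new_guess = math.cos(guess)
    if is_close(new_guess, guess):
        break
    guess = new_guess

# Converges to the Dottie number, the unique real fixed point of cos:
assert abs(guess - 0.7390851332151607) < 1e-8
```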
>
> Inappropriate uses
> ------------------
>
> One use case for floating point comparison is testing the accuracy of
> a numerical algorithm. However, in this case, the numerical analyst
> ideally would be doing careful error propagation analysis, and should
> understand exactly what to test for. It is also likely that ULP (Unit
> in the Last Place) comparison may be called for. While this function
> may prove useful in such situations, it is not intended to be used in
> that way.
>
>
> Other Approaches
> ================
>
> ``unittest.TestCase.assertAlmostEqual``
> ---------------------------------------
>
> (
> https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertAlmostEqual
> )
>
> Tests that values are approximately (or not approximately) equal by
> computing the difference, rounding to the given number of decimal
> places (default 7), and comparing to zero.
>
> This method is purely an absolute tolerance test, and does not address
> the need for a relative tolerance test.
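The effect of a purely absolute test is easy to demonstrate; ``assertAlmostEqual``'s default criterion is equivalent to ``round(a - b, 7) == 0``:

```python
# Two values that agree to about 9 significant digits, i.e. a relative
# error of roughly 1e-9:
a, b = 1234567.89, 1234567.891

# The default absolute criterion (round the difference to 7 decimal
# places) rejects them, because the *absolute* difference is 0.001:
assert round(a - b, 7) != 0

# Yet the same criterion accepts values whose relative error is huge,
# when the magnitudes are tiny:
x, y = 1e-10, 5e-10   # y is five times x
assert round(x - y, 7) == 0
```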
>
> numpy ``is_close()``
> --------------------
>
> http://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.isclose.html
>
> The numpy package provides the vectorized functions ``isclose()`` and
> ``allclose()``, for similar use cases as this proposal:
>
> ``isclose(a, b, rtol=1e-05, atol=1e-08, equal_nan=False)``
>
>       Returns a boolean array where two arrays are element-wise equal
>       within a tolerance.
>
>       The tolerance values are positive, typically very small numbers.
>       The relative difference (rtol * abs(b)) and the absolute
>       difference atol are added together to compare against the
>       absolute difference between a and b.
>
> In this approach, the absolute and relative tolerances are added
> together, rather than combined with ``or`` as in this proposal. This
> is computationally simpler, and if the relative tolerance is larger
> than the absolute tolerance, then the addition will have no effect.
> However,
> if the absolute and relative tolerances are of similar magnitude, then
> the allowed difference will be about twice as large as expected.
>
> This makes the function harder to understand, with no computational
> advantage in this context.
>
> Even more critically, if the values passed in are small compared to
> the absolute tolerance, then the relative tolerance will be
> completely swamped, perhaps unexpectedly.
>
> This is why, in this proposal, the absolute tolerance defaults to zero
> -- the user will be required to choose a value appropriate for the
> values at hand.
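A sketch contrasting the two combination strategies, in pure Python mirroring numpy's documented formula rather than calling numpy:

```python
def close_added(a, b, rtol, atol):
    # numpy-style: the tolerances are summed
    return abs(a - b) <= atol + rtol * abs(b)

def close_or(a, b, rtol, atol):
    # this proposal: the larger of the two tolerances wins
    return abs(a - b) <= max(rtol * min(abs(a), abs(b)), atol)

# Tolerances of similar magnitude: the summed form allows roughly
# twice the difference the user might expect:
rtol = atol = 1e-8
a, b = 1.0, 1.0 + 1.5e-8
assert close_added(a, b, rtol, atol)      # 1.5e-8 <= ~2e-8
assert not close_or(a, b, rtol, atol)     # 1.5e-8 >  1e-8

# Small values: a default atol=1e-8 swamps the relative test entirely,
# so wildly different tiny values compare "close":
assert close_added(1e-10, 1e-12, rtol, atol)
assert not close_or(1e-10, 1e-12, rtol, 0.0)
```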
>
>
> Boost floating-point comparison
> -------------------------------
>
> The Boost project ( [3]_ ) provides a floating point comparison
> function. It is a symmetric approach, with both "weak" (larger of the
> two relative errors) and "strong" (smaller of the two relative errors)
> options. This proposal uses the Boost "strong" approach. There is no
> need to complicate the API by providing the option to select different
> methods when the results will be similar in most cases, and the user
> is unlikely to know which to select in any case.
>
>
> Alternate Proposals
> -------------------
>
>
> A Recipe
> '''''''''
>
> The primary alternate proposal was to not provide a standard library
> function at all, but rather, provide a recipe for users to refer to.
> This would have the advantage that the recipe could provide and
> explain the various options, and let the user select that which is
> most appropriate. However, that would require anyone needing such a
> test to, at the very least, copy the function into their code base,
> and select the comparison method to use.
>
> In addition, adding the function to the standard library allows it to
> be used in the ``unittest.TestCase.assertIsClose()`` method, providing
> a substantial convenience to those using unittest.
>
>
> ``zero_tol``
> ''''''''''''
>
> One possibility was to provide a zero tolerance parameter, rather than
> the absolute tolerance parameter. This would be an absolute tolerance
> that would only be applied in the case of one of the arguments being
> exactly zero. This would have the advantage of retaining the full
> relative tolerance behavior for all non-zero values, while allowing
> tests against zero to work. However, it would also result in the
> potentially surprising result that a small value could be "close" to
> zero, but not "close" to an even smaller value. e.g., 1e-10 is "close"
> to zero, but not "close" to 1e-11.
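The surprise is easy to state in code (a sketch of the rejected ``zero_tol`` semantics; the function name is illustrative):

```python
def is_close_zero_tol(a, b, rel_tol=1e-9, zero_tol=1e-9):
    # hypothetical rejected variant: the absolute tolerance applies
    # only when one argument is exactly zero
    if a == 0.0 or b == 0.0:
        return abs(a - b) <= zero_tol
    return abs(a - b) <= rel_tol * min(abs(a), abs(b))

# 1e-10 is "close" to zero...
assert is_close_zero_tol(1e-10, 0.0)
# ...but not "close" to the even smaller 1e-11:
assert not is_close_zero_tol(1e-10, 1e-11)
```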
>
>
> No absolute tolerance
> '''''''''''''''''''''
>
> Given the issues with comparing to zero, another possibility would
> have been to only provide a relative tolerance, and let every
> comparison to zero fail. In this case, the user would need to do a
> simple absolute test: ``abs(val) < zero_tol`` in the case where the
> comparison involved zero.
>
> However, this would not allow the same call to be used for a sequence
> of values, such as in a loop or comprehension, or in the
> ``TestCase.assertClose()`` method, making the function far less
> useful. It is noted that the default abs_tolerance=0.0 achieves the
> same effect if the default is not overridden.
>
> Other tests
> ''''''''''''
>
> The other tests considered are all discussed in the Relative Error
> section above.
>
>
> References
> ==========
>
> .. [1] Python-ideas list discussion threads
>
>    https://mail.python.org/pipermail/python-ideas/2015-January/030947.html
>
>    https://mail.python.org/pipermail/python-ideas/2015-January/031124.html
>
>    https://mail.python.org/pipermail/python-ideas/2015-January/031313.html
>
> .. [2] Wikipedia page on relative difference
>
>    http://en.wikipedia.org/wiki/Relative_change_and_difference
>
> .. [3] Boost project floating-point comparison algorithms
>
>    
> http://www.boost.org/doc/libs/1_35_0/libs/test/doc/components/test_tools/floating_point_comparison.html
>
> .. Bruce Dawson's discussion of floating point.
>
>    
> https://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/
>
>
> Copyright
> =========
>
> This document has been placed in the public domain.
>
>
> ..
>    Local Variables:
>    mode: indented-text
>    indent-tabs-mode: nil
>    sentence-end-double-space: t
>    fill-column: 70
>    coding: utf-8
>
>
> -- 
>
> Christopher Barker, Ph.D.
> Oceanographer
>
> Emergency Response Division
> NOAA/NOS/OR&R            (206) 526-6959   voice
> 7600 Sand Point Way NE   (206) 526-6329   fax
> Seattle, WA  98115       (206) 526-6317   main reception
>
> Chris.... at noaa.gov <javascript:>
>  

From greg.ewing at canterbury.ac.nz  Mon Feb 16 22:30:59 2015
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Tue, 17 Feb 2015 10:30:59 +1300
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <20150216171958.GO2498@ando.pearwood.info>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <CAK9R32Sx+sqiYw+=Sho+fF1-heif6wrGP38Y7WVXUAwJBNjOgQ@mail.gmail.com>
 <20150216133158.GN2498@ando.pearwood.info>
 <CAPTjJmqfX4uxd3=9U1daHRd_iU1aFbMj3ZOj4gqRAGtX0EOUtQ@mail.gmail.com>
 <CACac1F-7YMEUNCGDfMjYDeXkwv9C2c_TbQ5_ZCCV5zDbdvt6DQ@mail.gmail.com>
 <20150216171958.GO2498@ando.pearwood.info>
Message-ID: <54E26193.70405@canterbury.ac.nz>

How about adopting a version of the dangling-operator
rule suggested earlier, but *only* for commas?

I think it could be handled entirely in the lexer: if
it sees a comma followed by a newline, and the next
line is indented further than the current indentation
level, absorb the newline and indentation.

This would take care of all the statements I can think
of that can't currently be parenthesised.
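A rough sketch of what that rule might look like as a line-joining pass (the function name and approach are purely illustrative, not a real lexer change):

```python
def join_comma_continuations(source):
    """Hypothetical sketch of the proposed rule: a line ending in a
    comma absorbs the next line when that line is indented more deeply
    than the line being continued."""
    out = []
    for line in source.splitlines():
        prev_indent = (len(out[-1]) - len(out[-1].lstrip())) if out else 0
        cur_indent = len(line) - len(line.lstrip())
        if out and out[-1].rstrip().endswith(',') and cur_indent > prev_indent:
            # absorb the newline and indentation
            out[-1] = out[-1].rstrip() + ' ' + line.lstrip()
        else:
            out.append(line)
    return '\n'.join(out)

src = "from foo import bar,\n    baz,\n    qux"
assert join_comma_continuations(src) == "from foo import bar, baz, qux"
```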

-- 
Greg


From encukou at gmail.com  Mon Feb 16 22:49:07 2015
From: encukou at gmail.com (Petr Viktorin)
Date: Mon, 16 Feb 2015 22:49:07 +0100
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
In-Reply-To: <CAK9R32RBN7e9H8AJ_3_SBRqB1aUpOtZWFWDK_wx2J3wB8=gh5g@mail.gmail.com>
References: <CAK9R32TNjkkJpxA6LC5GqU0digZv9TQZ=wuveA+CvjaxqBD50w@mail.gmail.com>
 <CA+=+wqAz6C5_QCLe2Cfc-uFi-5hWP7B-M9Q2yqi2K75UHDstcQ@mail.gmail.com>
 <CAK9R32RBN7e9H8AJ_3_SBRqB1aUpOtZWFWDK_wx2J3wB8=gh5g@mail.gmail.com>
Message-ID: <CA+=+wqABN7z=GA+1dOF-pHFT5eGP6hRXkPfq_iTOq5d2Ry73tg@mail.gmail.com>

On Mon, Feb 16, 2015 at 5:16 PM, Martin Teichmann
<martin.teichmann at gmail.com> wrote:
> Hi list,
>
>> Again, Python already has a metaclass that everyone is supposed to
>> use, namely `type`.
>
> well, there is a problem out here: people are indeed using other
> metaclasses. Even the standard library, like ABC or enums.
>
> And there is a problem here in python: once you do multiple inheritance,
> and two classes have different metaclasses, there is no way of
> programmatically determining an appropriate metaclass.
>
> Sure, one can be of Thomas' opinion that doing so is a bad idea
> anyways, but there are other people who think that writing and
> combining metaclasses actually is thing one can do.
>
> Especially for simple cases I do think that metaclasses are a good
> idea and there should be a way to tell python how to combine two
> metaclasses.
>
> I am trying to solve this problem. What I posted was just one simple
> idea how to solve it. Another one is the idea I posted earlier, the
> idea was to add metaclasses using __add__. People didn't like the
> __add__, so I implemented that with a new method called merge,
> it's here: https://github.com/tecki/cpython/commits/metaclass-merge
>
> Instead of yelling at me that my solutions are too complicated,

Hello,
I am very sorry that my post came across as yelling. I had no
intention of that, though on re-reading it is rather harsh.
I will try to be less confrontational in the future.

The meta-metaclass approach is no doubt ingenious, but most people
struggle with the concept of metaclasses, so building a more complex
thing on top is quite tricky. And while I'd like to be proven wrong, I
think fleshing out the details would complicate things even further.

> it would be very helpful and constructive to answer the following
> questions:
>
> - is it desirable to have a programmatic way to combine metaclasses?

That's not a simple yes/no answer.
If you are asking if there should be a generic way to merge any two
metaclasses, then the answer is "no": metaclasses are too powerful to
merge two of them safely without knowing details about both.
If you are asking about merging two metaclasses when knowing all the
details, then the answer is "yes", but there is already a way to do
this, namely a new metaclass deriving from both.
The discussion is about whether it's desirable to do this subclassing
automatically, so that one of the metaclasses describes how it is to
be combined with the other.
And my answer to that is: no, I don't think it's worth the extra complexity.
If you truly need the full power of metaclasses, then to combine any
two of them you need to know all the details; it is better to create
the subclass explicitly.
For simple cases (and here we seem to disagree) I think metaclasses
*are* a bad idea. This is why I like PEP 422: taking the features that
*can* be safely automatically combined, and letting you combine them
through super()-style cooperative inheritance, which, while still far
from trivial, is far more approachable than metaclasses.
More generally, I think simple things should be built on simple
things, not on complex things with more code added on top.

> - if yes, how should that be done?
>
> My idea is that yes, it is desirable to have a programmatic way of
> combining metaclasses, and I have now proposed two ways how to
> do that.

I still think your other effort, https://github.com/tecki/metaclasses,
is a good solution, provided PEP 422 makes it into Python itself so
that this code is used as a backwards compatibility shim.
Forgive my lack of comments on that: I think that at this point, it's
time to look for and iron out subtle issues in the
idea/implementation, and I've not had time to study them in enough
detail yet. I should have expressed more enthusiasm, rather than
waiting until I was sure I couldn't find obvious flaws. So here it is:
I think PEP 422, and your Github repo, are definitely headed in the
right direction.

Meanwhile I shouted down the meta-metaclass idea as soon as I saw it
heading in the wrong direction: towards complicating things. I am
too biased towards negativity. Please accept my apology.

Also, a disclaimer: I'm not a Python core developer. Pay much more
attention to Nick's words than mine.

From bryan0731 at gmail.com  Mon Feb 16 22:55:07 2015
From: bryan0731 at gmail.com (Bryan Duarte)
Date: Mon, 16 Feb 2015 14:55:07 -0700
Subject: [Python-ideas] Accessible IDE
Message-ID: <35CE8F26-4970-438C-A8B9-E1429252A341@gmail.com>

Hello all,

I was wondering if anyone had a suggestion on an accessible IDE for Python using Voiceover on the Mac. I use a Mac with Voiceover to interact with my computer, but can also use Linux or Windows if necessary. I would really like to find a tool that will allow me to use Python with some of the features of an IDE. I have tried:
- PyCharm
- iPy
- iPy Notebook
- IDLE
- and Eclipse

So far Eclipse worked the best but still caused problems. Any suggestions and/or help would be great.

From abarnert at yahoo.com  Mon Feb 16 23:09:58 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Mon, 16 Feb 2015 14:09:58 -0800
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <CAK9R32Sx+sqiYw+=Sho+fF1-heif6wrGP38Y7WVXUAwJBNjOgQ@mail.gmail.com>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <CAK9R32Sx+sqiYw+=Sho+fF1-heif6wrGP38Y7WVXUAwJBNjOgQ@mail.gmail.com>
Message-ID: <8B01E171-2251-4226-A9B7-4E6A0C3C65CF@yahoo.com>

On Feb 16, 2015, at 4:41, Martin Teichmann <lkb.teichmann at gmail.com> wrote:

> Hi everyone,
> 
> just to toss in my two cents: I would add a completely different way
> of line continuation. We could just say "if a line ends in an operator,
> it is automatically continued (as if it was in parentheses).
> 
> That would allow things like like:
> 
> something_long = something_else +
>   something_more
> 
> this would remove the need for many parentheses. It's also not a problem,
> as at least until now, a line cannot legally end in an operator (except
> when continued).

In addition to the other problems mentioned, comma is not an operator in Python, it's a delimiter, like a dot, semicolon, or parenthesis. It doesn't have a precedence or meaning of its own; it's used in multiple different expression and statement rules in similar but independent ways. (The docs do loosely refer to its use within an expression list as an operator at least once, but it's not parsed that way; an expression list has commas as part of its syntax.)

I realize that in C and its descendants, comma _is_ an operator, which is ambiguous with the use of commas in calls, declarations, etc. This is one of a half-dozen reasons that C can only be parsed with a powerful algorithm like GLR, or by adding a bunch of special-case hacks to a weaker parser. I don't think Python would benefit from becoming as hard to parse as C.
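The delimiter-vs-operator distinction above is easy to confirm against the parser itself; a quick editorial illustration (not part of the original message):

```python
# The commas in "1, 2, 3" produce a single Tuple node: they are part of
# the expression-list syntax, not a binary "comma operator" with its own
# node and precedence.
import ast

tree = ast.parse("1, 2, 3", mode="eval")
print(type(tree.body).__name__)  # prints "Tuple"
```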

> Well, there is one exception: a line may indeed end in a , (comma).

Some lines may end in an expression list, and an expression list can end in a trailing comma. I don't think any other uses of comma can appear at the end of a line.

> So I would add another rule: only if a line starts with any of
> assert, from, import or with, comma continues a line.

What if it starts with an indent or dedent followed by one of those keywords?

> I was thinking about how that could be a problem. A comma at
> the end of a line currently means we're creating a tuple. So no
> problem with from and import. Also no problem with with, as
> it would have to be followed by a colon, and tuples in with
> statements also don't make much sense. With assert,
> this is interestingly already illegal, writing
> 
>   assert False, 3,
> 
> gives a syntax error (why, actually?)

Because the assert statement takes either a test expression, or a test expression and a message expression separated by a comma. Not an expression list. So there's nothing that trailing comma could plausibly be parsed as part of.

Note that with your change, the assert would now suck up the next line and try to parse something like "3, print(foo)" as an expression and give a confusing error message telling you that print(foo) is invalid syntax. People already have enough trouble learning how to deal with those errors from unbalanced parens; making them appear all over the place implicitly would make Python a lot harder for novices.
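The grammar restriction described above can be checked mechanically; a small editorial illustration (not from the original message):

```python
# "assert" takes either one test expression, or a test and a message
# separated by a single comma -- so a trailing comma after the message
# has nothing it can be parsed as part of.
compile("assert False, 3", "<example>", "exec")  # accepted

try:
    compile("assert False, 3,", "<example>", "exec")
except SyntaxError:
    print("trailing comma rejected")
```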

> All of what I propose can probably be done in the lexer already.
> 
> Those rules sound rather arbitrary, but well, once used I guess
> people won't even notice they exist.
> 
> Greetings
> 
> Martin
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/

From mistersheik at gmail.com  Mon Feb 16 23:18:36 2015
From: mistersheik at gmail.com (Neil Girdhar)
Date: Mon, 16 Feb 2015 17:18:36 -0500
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <8B01E171-2251-4226-A9B7-4E6A0C3C65CF@yahoo.com>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <CAK9R32Sx+sqiYw+=Sho+fF1-heif6wrGP38Y7WVXUAwJBNjOgQ@mail.gmail.com>
 <8B01E171-2251-4226-A9B7-4E6A0C3C65CF@yahoo.com>
Message-ID: <CAA68w_mJgKi3U+w4d90M-ow=gBjqcUoJOdOrhyu8NCZaCOr0yQ@mail.gmail.com>

I think the most consistent way to do this if we would do it is to support
parentheses around the "with statement".  We now know that though it's not
"trivial", it's not too hard to implement it in the way that Andrew
suggests.  I'd consider doing it if there were an approved PEP and a
guaranteed fast review.

Best,

Neil
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150216/51b2f301/attachment-0001.html>

From abarnert at yahoo.com  Mon Feb 16 23:27:11 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Mon, 16 Feb 2015 14:27:11 -0800
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
In-Reply-To: <CAK9R32RovaEawxzFf03Kq1f+WgC4NOvhWhxjLOwjyaSt90Nr5A@mail.gmail.com>
References: <CAK9R32TNjkkJpxA6LC5GqU0digZv9TQZ=wuveA+CvjaxqBD50w@mail.gmail.com>
 <CA+=+wqAz6C5_QCLe2Cfc-uFi-5hWP7B-M9Q2yqi2K75UHDstcQ@mail.gmail.com>
 <CAK9R32RovaEawxzFf03Kq1f+WgC4NOvhWhxjLOwjyaSt90Nr5A@mail.gmail.com>
Message-ID: <D7D9F7F3-0569-476D-86AD-D19DAF423C8E@yahoo.com>

On Feb 16, 2015, at 8:19, Martin Teichmann <lkb.teichmann at gmail.com> wrote:

> Hi list,
> 
>> Again, Python already has a metaclass that everyone is supposed to
>> use, namely `type`.
> 
> well, there is a problem out here: people are indeed using other
> metaclasses. Even the standard library, like ABC or enums.

But almost all such metaclasses are instances (and direct subclasses, but that's not as important here) of type; I think he should have said "a type that everyone can usually use as a metaclass, and can at least pretty much always use as a metametaclass when that isn't appropriate".

Anyway, for the problem you're trying to solve: there _is_ a real problem that there are a handful of popular metaclasses (in the stdlib, in PyQt, maybe elsewhere) that people other than the implementors often need to combine with other metaclasses, and that's hard, and unless PEP 422 makes all those metaclasses unnecessary, you still need a solution, right?

Maybe the answer is not a generic way of combining any two metaclasses, but some way for the author of a metaclass to make it more easily combinable. Whether that's a decorator, a metametaclass, or something even more complicated, I think you'd have an easier time convincing people of its utility: if you're writing a metaclass like Qt's, use this and other people can subclass your metaclass just by doing X instead of having to do Y. The fact that it also happens to work in a wider variety of cases is nice, but not the selling point. (But it can hardly be used as an argument against your design--any decorator can be used to monkey patch a function or class after the fact; any ABC can be registered after the fact; etc. That's a normal part of Python.) However, if it can be used to more easily backport PEP 422 code, that _is_ a selling point, of course.
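The combination problem being discussed can be sketched in a few lines; this is an editorial illustration of the manual boilerplate the thread wants to automate (QtLikeMeta is a hypothetical stand-in for a third-party metaclass, not a real PyQt name):

```python
import abc

class QtLikeMeta(type):
    """Hypothetical stand-in for a third-party metaclass such as PyQt's."""

class Base(metaclass=QtLikeMeta): pass
class Mixin(metaclass=abc.ABCMeta): pass

# Combining the two bases directly fails: neither metaclass is a
# subclass of the other, so Python cannot pick a most-derived winner.
try:
    class Broken(Base, Mixin): pass
except TypeError as exc:
    print("conflict:", exc)

# The manual fix: derive a combined metaclass and name it explicitly.
class CombinedMeta(QtLikeMeta, abc.ABCMeta): pass

class Works(Base, Mixin, metaclass=CombinedMeta): pass

print(type(Works).__name__)  # prints "CombinedMeta"
```

It is exactly this `CombinedMeta` step that a "programmatic way to combine metaclasses" would remove.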

> And there is a problem here in python: once you do multiple inheritance,
> and two classes have different metaclasses, there is no way of
> programmatically determining an appropriate metaclass.
> 
> Sure, one can be of Thomas' opinion that doing so is a bad idea
> anyways, but there are other people who think that writing and
> combining metaclasses actually is thing one can do.
> 
> Especially for simple cases I do think that metaclasses are a good
> idea and there should be a way to tell python how to combine two
> metaclasses.
> 
> I am trying to solve this problem. What I posted was just one simple
> idea how to solve it. Another one is the idea I posted earlier, the
> idea was to add metaclasses using __add__. People didn't like the
> __add__, so I implemented that with a new method called merge,
> it's here: https://github.com/tecki/cpython/commits/metaclass-merge
> 
> Instead of yelling at me that my solutions are too complicated,
> it would be very helpful and constructive to answer the following
> questions:
> 
> - is it desirable to have a programmatic way to combine metaclasses?
> - if yes, how should that be done?
> 
> My idea is that yes, it is desirable to have a programmatic way of
> combining metaclasses, and I have now proposed two ways how to
> do that.
> 
> Greetings
> 
> Martin
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/

From guido at python.org  Mon Feb 16 23:30:04 2015
From: guido at python.org (Guido van Rossum)
Date: Mon, 16 Feb 2015 14:30:04 -0800
Subject: [Python-ideas] A send() built-in function to drive coroutines
In-Reply-To: <CALxg4FUe2-KvzDoh2G=u2ZDiUoLrWFPg8+AFVFA8LNcE+guFXQ@mail.gmail.com>
References: <CALxg4FUe2-KvzDoh2G=u2ZDiUoLrWFPg8+AFVFA8LNcE+guFXQ@mail.gmail.com>
Message-ID: <CAP7+vJ+m=ENutvvNxUdTSF4z_CGmep4-JRVuQ6a0p7_EAnXaxA@mail.gmail.com>

This is brought up from time to time. I believe it is based on a
misunderstanding of coroutines.

On Mon, Feb 16, 2015 at 4:53 AM, Luciano Ramalho <luciano at ramalho.org>
wrote:

> Coroutines are useless until primed by a next(my_coro) call. There are
> decorators to solve this by returning a primed coroutine. Such
> decorators add another problem: now the user may be unsure whether a
> specific coroutine from a third-party API should be primed or not
> prior to use.
>
> How about having a built-in function named send() with the signature
> send(coroutine, value) that does the right thing: sends a value to the
> coroutine, priming it if necessary. So instead of writing this:
>
> >>> next(my_coro)
> >>> my_coro.send(a_value)
>
> You could always write this:
>
> >>> send(my_coro, a_value)
>
> At a high level, the behavior of send() would be like this:
>
> def send(coroutine, value):
>     if inspect.getgeneratorstate(coroutine) == 'GEN_CREATED':
>         next(coroutine)
>     return coroutine.send(value)
>
> Consider the above pseudo-code. Proper handling of other generator
> states should be added.
>
> This would make coroutines easier to use, and would make their basic
> usage more consistent with the usage of plain generators using a
> function syntax -- eg. next(my_gen) -- instead of method call syntax.
>
> What do you think? If this has been discussed before, please send me a
> link (I searched but came up empty).
>
> Best,
>
> Luciano
>
>
>
> --
> Luciano Ramalho
> Twitter: @ramalhoorg
>
> Professor em: http://python.pro.br
> Twitter: @pythonprobr
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>



-- 
--Guido van Rossum (python.org/~guido)

From ethan at stoneleaf.us  Mon Feb 16 23:32:06 2015
From: ethan at stoneleaf.us (Ethan Furman)
Date: Mon, 16 Feb 2015 14:32:06 -0800
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <CAA68w_mJgKi3U+w4d90M-ow=gBjqcUoJOdOrhyu8NCZaCOr0yQ@mail.gmail.com>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <CAK9R32Sx+sqiYw+=Sho+fF1-heif6wrGP38Y7WVXUAwJBNjOgQ@mail.gmail.com>
 <8B01E171-2251-4226-A9B7-4E6A0C3C65CF@yahoo.com>
 <CAA68w_mJgKi3U+w4d90M-ow=gBjqcUoJOdOrhyu8NCZaCOr0yQ@mail.gmail.com>
Message-ID: <54E26FE6.6030906@stoneleaf.us>

On 02/16/2015 02:18 PM, Neil Girdhar wrote:
>
> I think the most consistent way to do this if we would do it is to support parentheses
> around the "with statement".  We now know that though it's not "trivial", it's not too
> hard to implement it in the way that Andrew suggests.  I'd consider doing it if there
> were an approved PEP and a guaranteed fast review.

There are no guarantees.  There is also currently no PEP (that I'm aware of).

If you're willing to put the work in, start with a PEP -- but it's unlikely to be accepted without at least a working
proof-of-concept implementation.  It may still get rejected in the end, but we would then have a very good record of why
-- which can also be extremely useful.

--
~Ethan~


From rymg19 at gmail.com  Mon Feb 16 23:57:45 2015
From: rymg19 at gmail.com (Ryan Gonzalez)
Date: Mon, 16 Feb 2015 16:57:45 -0600
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CAMpsgwbCeHW8eHA+EidNDVB9tE7EKFJESBEocy2jK2okwaR6pA@mail.gmail.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <20150206000438.1adeb478@fsol>
 <CALGmxEJXJQTGdpjttg2LNmjJVTrbuMW0bNiAC9-MFX65LJAORQ@mail.gmail.com>
 <20150206132852.2974b66c@fsol>
 <CACac1F8xBxEpSXU4EtqohJSXYKMbH_48gZo0=1XycwN8jD7ttw@mail.gmail.com>
 <CADiSq7du-M+tCTvaU0PdRCRN01x=2JMFS7Bw3Q13a4x0aSaY-w@mail.gmail.com>
 <CAEbHw4bJOz1w82=Nt3ZU=3o4h-fx9FKzE+dfS4ng=N2-rNLoKQ@mail.gmail.com>
 <CALGmxE+E0O5y4J-NqEQomt-THEVWoPcnJSXrRXJrn-sN6XQKRA@mail.gmail.com>
 <CAMpsgwbCeHW8eHA+EidNDVB9tE7EKFJESBEocy2jK2okwaR6pA@mail.gmail.com>
Message-ID: <CAO41-mNrXsp0xp1W=v=RbPA6Kbmc1VrqDN3qe9XGkq8yVBEmTw@mail.gmail.com>

Doesn't Python already have this with unittest.TestCase.assertAlmostEqual
<https://docs.python.org/3.4/library/unittest.html#unittest.TestCase.assertAlmostEqual>
?


On Sat, Feb 14, 2015 at 7:26 AM, Victor Stinner <victor.stinner at gmail.com>
wrote:

> Hi,
>
> Do you propose a builtin function? I would prefer to put it in the math
> module.
>
> You may add a new method to unittest.TestCase? It would show all
> parameters on error. .assertTrue(isclose(...)) doesn't help on error.
>
> Victor
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>



-- 
Ryan
If anybody ever asks me why I prefer C++ to C, my answer will be simple:
"It's becauseslejfp23(@#Q*(E*EIdc-SEGFAULT. Wait, I don't think that was
nul-terminated."
Personal reality distortion fields are immune to contradictory evidence. -
srean
Check out my website: http://kirbyfan64.github.io/

From rymg19 at gmail.com  Tue Feb 17 00:10:48 2015
From: rymg19 at gmail.com (Ryan Gonzalez)
Date: Mon, 16 Feb 2015 17:10:48 -0600
Subject: [Python-ideas] Adding ziplib
Message-ID: <CAO41-mNr1=z_JT=aKjBbOZzrYV5Uc=GG_P5No5mcgNNVMVMcOw@mail.gmail.com>

Python has support for several compression methods, which is great...sort
of.

Recently, I had to port an application that used zips to use tar+gzip. It
*sucked*. Why? Inconsistency.

For instance, zipfile uses write to add files, tar uses add. Then, tar has
addfile (which confusingly does not add a file), and zip has writestr, for
which a tar alternative has been proposed on the bug tracker
<http://bugs.python.org/issue22208> but hasn't had any activity for a while.

Then, zipfile and tarfile's extract methods have slightly different
arguments. ZipFile has namelist, and TarFile has getnames. It's all a mess.

The idea is to reorganize Python's zip support just like the sha and md5
modules were put into the hashlib module. Name? ziplib.

It would have a few ABCs:

- an ArchiveFile class that ZipFile and TarFile would derive from.
- a BasicArchiveFile class that ArchiveFile extends. bz2, lzma, and (maybe)
gzip would be derived from this one.
- ArchiveFileInfo, which ZipInfo and TarInfo would derive from. ArchiveFile
and ArchiveFileInfo go hand-in-hand.
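A minimal editorial sketch of what those ABCs might look like (the class names are the proposal's; every method name and signature below is an assumption for illustration, nothing here exists in the stdlib):

```python
import abc

class ArchiveFileInfo(abc.ABC):
    """Common member-metadata interface for ZipInfo/TarInfo to implement."""

    @property
    @abc.abstractmethod
    def name(self): ...

    @property
    @abc.abstractmethod
    def size(self): ...

class ArchiveFile(abc.ABC):
    """Common archive interface for ZipFile/TarFile to implement."""

    @abc.abstractmethod
    def add(self, path, arcname=None): ...     # unifying write()/add()

    @abc.abstractmethod
    def extract(self, member, path=None): ...  # one extract signature

    @abc.abstractmethod
    def names(self): ...                       # unifying namelist()/getnames()
```

Concrete wrappers would adapt zipfile.ZipFile and tarfile.TarFile to this interface; instantiating the ABC itself fails until every abstract method is provided.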

Now, the APIs would be more consistent. Of course, some have methods that
others don't have, but the usage would be easier. zlib would be completely
separate, since it has its own API and all.

I'm not quite sure how gzip fits into this, since it has a very minimal API.

Thoughts?

-- 
Ryan
If anybody ever asks me why I prefer C++ to C, my answer will be simple:
"It's becauseslejfp23(@#Q*(E*EIdc-SEGFAULT. Wait, I don't think that was
nul-terminated."
Personal reality distortion fields are immune to contradictory evidence. -
srean
Check out my website: http://kirbyfan64.github.io/

From abarnert at yahoo.com  Tue Feb 17 00:17:39 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Mon, 16 Feb 2015 15:17:39 -0800
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <CAA68w_mJgKi3U+w4d90M-ow=gBjqcUoJOdOrhyu8NCZaCOr0yQ@mail.gmail.com>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <CAK9R32Sx+sqiYw+=Sho+fF1-heif6wrGP38Y7WVXUAwJBNjOgQ@mail.gmail.com>
 <8B01E171-2251-4226-A9B7-4E6A0C3C65CF@yahoo.com>
 <CAA68w_mJgKi3U+w4d90M-ow=gBjqcUoJOdOrhyu8NCZaCOr0yQ@mail.gmail.com>
Message-ID: <AB3C6430-2C4A-4DD5-BC36-0FCDC0125563@yahoo.com>

On Feb 16, 2015, at 14:18, Neil Girdhar <mistersheik at gmail.com> wrote:

> I think the most consistent way to do this if we would do it is to support parentheses around the "with statement".  We now know that though it's not "trivial", it's not too hard to implement it in the way that Andrew suggests.

Hold on, what did I suggest?

Sure, it's not too hard to add an extragrammatical rule to the language and a simple, hacky workaround to the CPython parser. And in principle it's not hard to rewrite Python's grammar unambiguously in a more powerful metalanguage that could be parsed with a more powerful parser. But I'm definitely not suggesting either.

Just because it's good enough for C doesn't mean it's good enough for Python. (And it's really not even good enough for C. Most modern C-family parsers gave up on trying to pile hacks onto an LALR parser and instead use custom recursive-descent parsers that are nothing but special cases, or rewrite the grammar in GLR or packrat terms.)

The fact that Python is simply LL(1)-parseable is a good thing. You can parse it in your head, predict the ASTs and read them (well, with a bit of pretty-printing indentation). If you can't find a way to add the parens that preserves this property, I think the cost far outweighs the benefit.

(The lexer-based implicit continuation is theoretically not as bad, but practically even worse; just extending the unbalanced-brackets/inexplicable-syntax-error problem to places without unbalanced brackets is enough reason not to do it.)

> I'd consider doing it if there were an approved PEP and a guaranteed fast review.
> 
> Best,
> 
> Neil
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/

From ethan at stoneleaf.us  Tue Feb 17 00:19:22 2015
From: ethan at stoneleaf.us (Ethan Furman)
Date: Mon, 16 Feb 2015 15:19:22 -0800
Subject: [Python-ideas] Adding ziplib
In-Reply-To: <CAO41-mNr1=z_JT=aKjBbOZzrYV5Uc=GG_P5No5mcgNNVMVMcOw@mail.gmail.com>
References: <CAO41-mNr1=z_JT=aKjBbOZzrYV5Uc=GG_P5No5mcgNNVMVMcOw@mail.gmail.com>
Message-ID: <54E27AFA.8040202@stoneleaf.us>

On 02/16/2015 03:10 PM, Ryan Gonzalez wrote:

> The idea is to reorganize Python's zip support just like the sha and md5 modules were put into the hashlib module. Name?
> ziplib.

Seems like an interesting idea.

> Thoughts?

Have you checked to see if anybody has already done this in the wild?

--
~Ethan~


From mistersheik at gmail.com  Tue Feb 17 00:22:35 2015
From: mistersheik at gmail.com (Neil Girdhar)
Date: Mon, 16 Feb 2015 18:22:35 -0500
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <AB3C6430-2C4A-4DD5-BC36-0FCDC0125563@yahoo.com>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <CAK9R32Sx+sqiYw+=Sho+fF1-heif6wrGP38Y7WVXUAwJBNjOgQ@mail.gmail.com>
 <8B01E171-2251-4226-A9B7-4E6A0C3C65CF@yahoo.com>
 <CAA68w_mJgKi3U+w4d90M-ow=gBjqcUoJOdOrhyu8NCZaCOr0yQ@mail.gmail.com>
 <AB3C6430-2C4A-4DD5-BC36-0FCDC0125563@yahoo.com>
Message-ID: <CAA68w_nHvD1PVo8OBjsYQSA5aKrYkryQQ1+b14aRkAJrxKQ8_w@mail.gmail.com>

On Mon, Feb 16, 2015 at 6:17 PM, Andrew Barnert <abarnert at yahoo.com> wrote:

> On Feb 16, 2015, at 14:18, Neil Girdhar <mistersheik at gmail.com> wrote:
>
> > I think the most consistent way to do this if we would do it is to
> support parentheses around the "with statement".  We now know that though
> it's not "trivial", it's not too hard to implement it in the way that
> Andrew suggests.
>
> Hold on, what did I suggest?
>

I thought we could just change the rules:

with_stmt: 'with' exprlist ':' suite
exprlist: (expr ['as' expr] | star_expr) (',' (expr ['as' expr] | star_expr))* [',']

And then preclude the use of "as" in all the other exprlist contexts (among
other things) in the validator.


> Sure, it's not too hard to add an extragrammatical rule to the language
> and a simple, hacky workaround to the CPython parser. And in principle it's
> not hard to rewrite Python's grammar unambiguously in a more powerful
> metalanguage that could be parsed with a more powerful parser. But I'm
> definitely not suggesting either.
>
> Just because it's good enough for C doesn't mean it's good enough for
> Python. (And it's really not even good enough for C. Most modern C-family
> parsers gave up on trying to pile hacks onto an LALR parser and instead use
> custom recursive-descent parsers that are nothing but special cases, or
> rewrite the grammar in GLR or packrat terms.)
>
> The fact that Python is simply LL(1)-parseable is a good thing. You can
> parse it in your head, predict the ASTs and read them (well, with a bit of
> pretty-printing indentation). If you can't find a way to add the parens
> that preserves this property, I think the cost far outweighs the benefit.
>
>
It would still be LL(1).


> (The lexer-based implicit continuation is theoretically not as bad, but
> practically even worse; just extending the
> unbalanced-brackets/inexplicable-syntax-error problem to places without
> unbalanced brackets is enough reason not to do it.)
>

I'm not sure what this means?


>
> > I'd consider doing it if there were an approved PEP and a guaranteed
> fast review.
> >
> > Best,
> >
> > Neil
> > _______________________________________________
> > Python-ideas mailing list
> > Python-ideas at python.org
> > https://mail.python.org/mailman/listinfo/python-ideas
> > Code of Conduct: http://python.org/psf/codeofconduct/
>

From abarnert at yahoo.com  Tue Feb 17 00:32:23 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Mon, 16 Feb 2015 15:32:23 -0800
Subject: [Python-ideas] Adding ziplib
In-Reply-To: <CAO41-mNr1=z_JT=aKjBbOZzrYV5Uc=GG_P5No5mcgNNVMVMcOw@mail.gmail.com>
References: <CAO41-mNr1=z_JT=aKjBbOZzrYV5Uc=GG_P5No5mcgNNVMVMcOw@mail.gmail.com>
Message-ID: <0676A880-8750-443A-A953-932E965EDB12@yahoo.com>

I love the idea (I've had to write my own custom wrapper around a subset of functionality for zip and tgz a few times, and I don't remember enjoying it...), and I especially like the idea of starting with the ABCs (which would hopefully encourage all those people writing cpio and 7z and whatever parsers to conform to the same API, even if they do so by wrapping LGPL libs like libarchive or even subprocessing out to command-line tools).

But why "ziplib"? Why not something generic like "archive" that doesn't sound like it's specific to ZIP files?

Also, this definitely seems like something that would be worth putting on PyPI first and then proposing for the stdlib. You're pretty much guaranteed to need a backport for 3.4 and 2.7 users anyway, right?

Sent from a random iPhone

On Feb 16, 2015, at 15:10, Ryan Gonzalez <rymg19 at gmail.com> wrote:

> Python has support for several compression methods, which is great...sort of.
> 
> Recently, I had to port an application that used zips to use tar+gzip. It *sucked*. Why? Inconsistency.
> 
> For instance, zipfile uses write to add files, tar uses add. Then, tar has addfile (which confusingly does not add a file), and zip has writestr, for which a tar alternative has been proposed on the bug tracker but hasn't had any activity for a while.
> 
> Then, zipfile and tarfile's extract methods have slightly different arguments. ZipFile has namelist, and TarFile has getnames. It's all a mess.
> 
> The idea is to reorganize Python's zip support just like the sha and md5 modules were put into the hashlib module. Name? ziplib.
> 
> It would have a few ABCs:
> 
> - an ArchiveFile class that ZipFile and TarFile would derive from.
> - a BasicArchiveFile class that ArchiveFile extends. bz2, lzma, and (maybe) gzip would be derived from this one.
> - ArchiveFileInfo, which ZipInfo and TarInfo would derive from. ArchiveFile and ArchiveFileInfo go hand-in-hand.
> 
> Now, the APIs would be more consistent. Of course, some have methods that others don't have, but the usage would be easier. zlib would be completely separate, since it has its own API and all.
> 
> I'm not quite sure how gzip fits into this, since it has a very minimal API.
> 
> Thoughts?
> 
> -- 
> Ryan
> If anybody ever asks me why I prefer C++ to C, my answer will be simple: "It's becauseslejfp23(@#Q*(E*EIdc-SEGFAULT. Wait, I don't think that was nul-terminated."
> Personal reality distortion fields are immune to contradictory evidence. - srean
> Check out my website: http://kirbyfan64.github.io/
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/

From p.f.moore at gmail.com  Tue Feb 17 00:37:16 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Mon, 16 Feb 2015 23:37:16 +0000
Subject: [Python-ideas] Adding ziplib
In-Reply-To: <54E27AFA.8040202@stoneleaf.us>
References: <CAO41-mNr1=z_JT=aKjBbOZzrYV5Uc=GG_P5No5mcgNNVMVMcOw@mail.gmail.com>
 <54E27AFA.8040202@stoneleaf.us>
Message-ID: <CACac1F_5Spzi3kb6eZzB+=jtyjoUF=d0p596HDi5aZK=uWxByw@mail.gmail.com>

On 16 February 2015 at 23:19, Ethan Furman <ethan at stoneleaf.us> wrote:
>> The idea is to reorganize Python's zip support just like the sha and md5 modules were put into the hashlib module. Name?
>> ziplib.
>
> Seems like an interesting idea.

Agreed. A unified interface would be nice for those applications that
want to be able to work with general "archives". There's quite a lot
of code like that around (shutil has some, distutils' sdists can be
zip or tgz files, ...)

>> Thoughts?
>
> Have you checked to see if anybody has already done this in the wild?

That's certainly worth doing. Also, this may be something that would
work better on PyPI - a stdlib module would certainly be a
possibility, but there's so much code out there using the zipfile
module (maybe not so much using tarfile, I'm not sure) that a ziplib
module is unlikely ever to be a *replacement*, just an addition with a
unified interface. Backward compatibility would prevent anything else.
And as a wrapper round stdlib classes, it may find a good home on PyPI
for people that need it.

But your ABC-based approach (which presumably would allow extension to
add in new archive types like 7-zip) seems like a good design.

(Hmm, it seems like I'm saying "put a module on PyPI" a lot at the
moment - that's odd, as I'm normally a strong proponent of the
"batteries included" philosophy. I need to go take my medication...)

Paul

From p.f.moore at gmail.com  Tue Feb 17 00:46:39 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Mon, 16 Feb 2015 23:46:39 +0000
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <54E26193.70405@canterbury.ac.nz>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <CAK9R32Sx+sqiYw+=Sho+fF1-heif6wrGP38Y7WVXUAwJBNjOgQ@mail.gmail.com>
 <20150216133158.GN2498@ando.pearwood.info>
 <CAPTjJmqfX4uxd3=9U1daHRd_iU1aFbMj3ZOj4gqRAGtX0EOUtQ@mail.gmail.com>
 <CACac1F-7YMEUNCGDfMjYDeXkwv9C2c_TbQ5_ZCCV5zDbdvt6DQ@mail.gmail.com>
 <20150216171958.GO2498@ando.pearwood.info>
 <54E26193.70405@canterbury.ac.nz>
Message-ID: <CACac1F-_RR5RBxRBeYEaVjp5ravQh3JFGnaBUDX4io_Ep0KoCQ@mail.gmail.com>

On 16 February 2015 at 21:30, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
> How about adopting a version of the dangling-operator
> rule suggested earlier, but *only* for commas?
>
> I think it could be handled entirely in the lexer: if
> you see a comma followed by a newline, and the next
> line is indented further than the current indentation
> level, absorb the newline and indentation.
>
> This would take care of all the statements I can think
> of that can't currently be parenthesised.
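The rule quoted above can be modelled as a toy pre-processing pass over source lines; an editorial sketch only, not how CPython's tokenizer actually works:

```python
def absorb_trailing_commas(lines):
    """Join a line ending in ',' with the next line when the next line
    is indented deeper -- a toy model of the proposed lexer rule."""
    def indent(s):
        return len(s) - len(s.lstrip())
    out, i = [], 0
    while i < len(lines):
        line = lines[i]
        while (line.rstrip().endswith(',') and i + 1 < len(lines)
               and indent(lines[i + 1]) > indent(line)):
            i += 1
            line = line.rstrip() + ' ' + lines[i].strip()
        out.append(line)
        i += 1
    return out

src = ["from mod import first,",
       "        second, third"]
print(absorb_trailing_commas(src))
# prints ['from mod import first, second, third']
```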

One thing that's not been discussed much here is that *any*
line-continuation rules (even the ones that already exist) make the
interactive interpreter's job harder. I don't know exactly how it
works, and whether there's special code in the REPL for things like
backslash continuation, but interactive use (and in particular error
handling) needs to be considered in all this.

As well as the standard interpreter, would new rules like this affect
tools like IPython?

(Maybe all the interactive tools work by magic, and the changes can be
isolated to the parser and lexer. That would be nice, and if so my
above comments are irrelevant. But someone should probably check).

Paul

From mistersheik at gmail.com  Tue Feb 17 00:47:29 2015
From: mistersheik at gmail.com (Neil Girdhar)
Date: Mon, 16 Feb 2015 18:47:29 -0500
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CAO41-mNrXsp0xp1W=v=RbPA6Kbmc1VrqDN3qe9XGkq8yVBEmTw@mail.gmail.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <20150206000438.1adeb478@fsol>
 <CALGmxEJXJQTGdpjttg2LNmjJVTrbuMW0bNiAC9-MFX65LJAORQ@mail.gmail.com>
 <20150206132852.2974b66c@fsol>
 <CACac1F8xBxEpSXU4EtqohJSXYKMbH_48gZo0=1XycwN8jD7ttw@mail.gmail.com>
 <CADiSq7du-M+tCTvaU0PdRCRN01x=2JMFS7Bw3Q13a4x0aSaY-w@mail.gmail.com>
 <CAEbHw4bJOz1w82=Nt3ZU=3o4h-fx9FKzE+dfS4ng=N2-rNLoKQ@mail.gmail.com>
 <CALGmxE+E0O5y4J-NqEQomt-THEVWoPcnJSXrRXJrn-sN6XQKRA@mail.gmail.com>
 <CAMpsgwbCeHW8eHA+EidNDVB9tE7EKFJESBEocy2jK2okwaR6pA@mail.gmail.com>
 <CAO41-mNrXsp0xp1W=v=RbPA6Kbmc1VrqDN3qe9XGkq8yVBEmTw@mail.gmail.com>
Message-ID: <CAA68w_kF-a92ZmG+a5zcc7fsUhgQGBY9bR-40ATt8sBZbWhf4A@mail.gmail.com>

Yes, but that asserts if they're not close.  This just returns a bool.
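[Editorial note: PEP 485's function did later land as math.isclose in Python 3.5, so on a modern interpreter the bool-returning behaviour under discussion looks like this.]

```python
import math

# Returns a bool instead of raising, so it composes with any test
# framework or plain conditional logic.
print(math.isclose(1.0, 1.0 + 1e-10))  # prints True (default rel_tol=1e-09)
print(math.isclose(1.0, 1.1))          # prints False
```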

On Mon, Feb 16, 2015 at 5:57 PM, Ryan Gonzalez <rymg19 at gmail.com> wrote:

> Doesn't Python already have this with unittest.TestCase.assertAlmostEqual
> <https://docs.python.org/3.4/library/unittest.html#unittest.TestCase.assertAlmostEqual>
> ?
>
>
> On Sat, Feb 14, 2015 at 7:26 AM, Victor Stinner <victor.stinner at gmail.com>
> wrote:
>
>> Hi,
>>
>> Do you propose a builtin function? I would prefer to put it in the math
>> module.
>>
>> You may add a new method to unittest.TestCase? It would show all
>> parameters on error. .assertTrue(isclose(...)) doesn't help on error.
>>
>> Victor
>>
>

From mistersheik at gmail.com  Tue Feb 17 00:49:00 2015
From: mistersheik at gmail.com (Neil Girdhar)
Date: Mon, 16 Feb 2015 18:49:00 -0500
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <CACac1F-_RR5RBxRBeYEaVjp5ravQh3JFGnaBUDX4io_Ep0KoCQ@mail.gmail.com>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <CAK9R32Sx+sqiYw+=Sho+fF1-heif6wrGP38Y7WVXUAwJBNjOgQ@mail.gmail.com>
 <20150216133158.GN2498@ando.pearwood.info>
 <CAPTjJmqfX4uxd3=9U1daHRd_iU1aFbMj3ZOj4gqRAGtX0EOUtQ@mail.gmail.com>
 <CACac1F-7YMEUNCGDfMjYDeXkwv9C2c_TbQ5_ZCCV5zDbdvt6DQ@mail.gmail.com>
 <20150216171958.GO2498@ando.pearwood.info> <54E26193.70405@canterbury.ac.nz>
 <CACac1F-_RR5RBxRBeYEaVjp5ravQh3JFGnaBUDX4io_Ep0KoCQ@mail.gmail.com>
Message-ID: <CAA68w_=E2j-kxHMqGvoJn5aqFwaNW7kkbS2BiicW=6kuRxiu_w@mail.gmail.com>

Looks fine to me:

In [1]: with (a as b,
   ...: c as d):
   ...:
  File "<ipython-input-1-cb4fbac821e5>", line 1
    with (a as b,
             ^
SyntaxError: invalid syntax

It knows about bracket continuations and doesn't realize it's a syntax
error until after the fact.

Best,

Neil

On Mon, Feb 16, 2015 at 6:46 PM, Paul Moore <p.f.moore at gmail.com> wrote:

> On 16 February 2015 at 21:30, Greg Ewing <greg.ewing at canterbury.ac.nz>
> wrote:
> > How about adopting a version of the dangling-operator
> > rule suggested earlier, but *only* for commas?
> >
> > I think it could be handled entirely in the lexer: if
> > you sees a comma followed by a newline, and the next
> > line is indented further than the current indentation
> > level, absorb the newline and indentation.
> >
> > This would take care of all the statements I can think
> > of that can't currently be parenthesised.
>
> One thing that's not been discussed much here is that *any*
> line-continuation rules (even the ones that already exist) make the
> interactive interpreter's job harder. I don't know exactly how it
> works, and whether there's special code in the REPL for things like
> backslash continuation, but interactive use (and in particular error
> handling) need to be considered in all this.
>
> As well as the standard interpreter, would new rules like this affect
> tools like IPython?
>
> (Maybe all the interactive tools work by magic, and the changes can be
> isolated to the parser and lexer. That would be nice, and if so my
> above comments are irrelevant. But someone should probably check).
>
> Paul

From ron3200 at gmail.com  Tue Feb 17 02:11:47 2015
From: ron3200 at gmail.com (Ron Adam)
Date: Mon, 16 Feb 2015 20:11:47 -0500
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <f41ecfd2-743e-40e7-bdef-043ff37ac19e@googlegroups.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <f41ecfd2-743e-40e7-bdef-043ff37ac19e@googlegroups.com>
Message-ID: <mbu4gk$pog$1@ger.gmane.org>



On 02/15/2015 05:07 PM, Neil Girdhar wrote:
> I'm +1 on the idea, +1 on the name, but -1 on symmetry.
>
> I'm glad that you included "other approaches" (what people actually do in
> the field).  However, regarding your argument "is it that hard to just type
> it in?": the whole point of this method is to give a self-commenting line
> that describes what the reader means.  Doesn't it make sense to say: is *my
> value* close to *some true value*?  If you wanted a symmetric test in
> English, you might say "are these values close to each other"?

My understanding of the main use of the function is to ask if two values 
can be considered equivalent within some relative and/or absolute error amount.

About those "is it really hard to write?" comments in the PEP.  They refer 
only to the absolute comparisons.  But they aren't as easy to get right as 
most people think.  Floating point is hard enough to understand correctly 
that only a very few bother to write correct comparisons, i.e. not using 
<=, >=, and == directly in floating point comparisons, and adding an appropriate epsilon.
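To make that concrete, here is a sketch of the kind of combined relative/absolute check under discussion; the name and parameters echo the PEP draft, but this implementation is illustrative rather than the proposed one:

```python
def isclose(a, b, rel_tol=1e-9, abs_tol=0.0):
    # Symmetric relative test: the allowed gap scales with the larger
    # magnitude, with abs_tol as an absolute floor for values near zero.
    return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)

# A naive == fails where the tolerance-based test succeeds:
print(0.1 + 0.2 == 0.3)          # False
print(isclose(0.1 + 0.2, 0.3))   # True
```

The epsilon here is relative by default; pass abs_tol for comparisons against zero, where a relative tolerance alone can never be satisfied.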


I think at this point what we need is to make sure we have some very good 
tests and examples that represent the various different uses in different 
circumstances.  That may also answer any concerns any one still might have 
about the various choices.


Cheers,
    Ron

From rymg19 at gmail.com  Tue Feb 17 02:23:30 2015
From: rymg19 at gmail.com (Ryan Gonzalez)
Date: Mon, 16 Feb 2015 19:23:30 -0600
Subject: [Python-ideas] Treat underscores specially in argument lists
Message-ID: <CAO41-mMH3FNGTC2jBCJB3NKpCD9OXVPn-jeZPbtZ=S734w+cJg@mail.gmail.com>

Often, underscores are used in function argument lists:

def f(_): pass

However, sometimes you want to ignore more than one argument, in which case
this doesn't work:

ryan at DevPC-LX:~$ python3
Python 3.4.1 |Anaconda 2.1.0 (64-bit)| (default, Sep 10 2014, 17:10:18)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> def f(_, _): pass
...
  File "<stdin>", line 1
SyntaxError: duplicate argument '_' in function definition
>>>

My idea is to allow duplication of the underscore argument in the
presumption that no one in their right (or wrong) mind is going to use it
as a real argument, and, if someone does, they need to be slapped on the
head.
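For comparison, the spellings that already work today are only slightly longer: distinct dummy names, or a catch-all star parameter. A quick sketch (no new syntax involved):

```python
# Existing workarounds for ignoring several positional arguments:

def f(_, __):        # convention: add one more underscore per ignored slot
    return "called"

def g(first, *_):    # ignore everything after the first argument
    return first

print(f(1, 2))       # called
print(g(1, 2, 3))    # 1
```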

-- 
Ryan
If anybody ever asks me why I prefer C++ to C, my answer will be simple:
"It's becauseslejfp23(@#Q*(E*EIdc-SEGFAULT. Wait, I don't think that was
nul-terminated."
Personal reality distortion fields are immune to contradictory evidence. -
srean
Check out my website: http://kirbyfan64.github.io/

From rymg19 at gmail.com  Tue Feb 17 02:00:33 2015
From: rymg19 at gmail.com (Ryan Gonzalez)
Date: Mon, 16 Feb 2015 19:00:33 -0600
Subject: [Python-ideas] Adding ziplib
In-Reply-To: <0676A880-8750-443A-A953-932E965EDB12@yahoo.com>
References: <CAO41-mNr1=z_JT=aKjBbOZzrYV5Uc=GG_P5No5mcgNNVMVMcOw@mail.gmail.com>
 <0676A880-8750-443A-A953-932E965EDB12@yahoo.com>
Message-ID: <CAO41-mMqBm3u+-Lu3RDBdxcW1R3FrPBCUeS6erRzPwMWCf8P2A@mail.gmail.com>

On Mon, Feb 16, 2015 at 5:32 PM, Andrew Barnert <abarnert at yahoo.com> wrote:

> I love the idea (I've had to write my own custom wrapper around a subset
> of functionality for zip and tgz a few times, and I don't remember enjoying
> it...), and I especially like the idea of starting with the ABCs (which
> would hopefully encourage all those people writing cpio and 7z and whatever
> parsers to conform to the same API, even if they do so by wrapping LGPL
> libs like libarchive or even subprocessing out to command-line tools).
>
> But why "ziplib"? Why not something generic like "archive" that doesn't
> sound like it's specific to ZIP files?
>
>
This occurred to me a few moments ago. How about arclib?


> Also, this definitely seems like something that would be worth putting on
> PyPI first and then proposing for the stdlib. You're pretty much guaranteed
> to need a backport for 3.4 and 2.7 users anyway, right?
>
>
I'll look into that.


> Sent from a random iPhone
>
> On Feb 16, 2015, at 15:10, Ryan Gonzalez <rymg19 at gmail.com> wrote:
>
> Python has support for several compression methods, which is great...sort
> of.
>
> Recently, I had to port an application that used zips to use tar+gzip. It
> *sucked*. Why? Inconsistency.
>
> For instance, zipfile uses write to add files, tar uses add. Then, tar has
> addfile (which confusingly does not add a file), and zip has writestr, for
> which a tar alternative has been proposed on the bug tracker
> <http://bugs.python.org/issue22208> but hasn't had any activity for a
> while.
>
> Then, zipfile and tarfile's extract methods have slightly different
> arguments. ZipFile has namelist, and tarfile has getnames. It's all a mess.
>
> The idea is to reorganize Python's zip support just like the sha and md5
> modules were put into the hashlib module. Name? ziplib.
>
> It would have a few ABCs:
>
> - an ArchiveFile class that ZipFile and TarFile would derive from.
> - a BasicArchiveFile class that ArchiveFile extends. bz2, lzma, and
> (maybe) gzip would be derived from this one.
> - ArchiveFileInfo, which ZipInfo and TarInfo would derive from.
> ArchiveFile and ArchiveFileInfo go hand-in-hand.
>
> Now, the APIs would be more consistent. Of course, some have methods that
> others don't have, but the usage would be easier. zlib would be completely
> separate, since it has its own API and all.
>
> I'm not quite sure how gzip fits into this, since it has a very minimal
> API.
>
> Thoughts?
>
> --
> Ryan
> If anybody ever asks me why I prefer C++ to C, my answer will be simple:
> "It's becauseslejfp23(@#Q*(E*EIdc-SEGFAULT. Wait, I don't think that was
> nul-terminated."
> Personal reality distortion fields are immune to contradictory evidence. -
> srean
> Check out my website: http://kirbyfan64.github.io/
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>
>
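To make the quoted ABC layering concrete, here is a rough sketch of what it might look like. Every class and method name below is either taken from the proposal or hypothetical; none of this exists in the stdlib:

```python
import abc

class ArchiveFileInfo(abc.ABC):
    """Per-member metadata; ZipInfo and TarInfo would implement this."""

    @property
    @abc.abstractmethod
    def name(self):
        """Archived name of the member."""

class BasicArchiveFile(abc.ABC):
    """Single-stream compressed files (bz2, lzma, maybe gzip)."""

    @abc.abstractmethod
    def read(self, size=-1): ...

class ArchiveFile(BasicArchiveFile):
    """Multi-member archives; ZipFile and TarFile would derive from this."""

    @abc.abstractmethod
    def getmembers(self):
        """Return a list of ArchiveFileInfo objects."""

    @abc.abstractmethod
    def add(self, name, arcname=None):
        """One spelling instead of zipfile's write() vs tarfile's add()."""

    @abc.abstractmethod
    def extract(self, member, path="."): ...
```

The point of the hierarchy is that code written against ArchiveFile would no longer care whether it is holding a zip or a tarball.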


-- 
Ryan
If anybody ever asks me why I prefer C++ to C, my answer will be simple:
"It's becauseslejfp23(@#Q*(E*EIdc-SEGFAULT. Wait, I don't think that was
nul-terminated."
Personal reality distortion fields are immune to contradictory evidence. -
srean
Check out my website: http://kirbyfan64.github.io/

From stephen at xemacs.org  Tue Feb 17 03:14:58 2015
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Tue, 17 Feb 2015 11:14:58 +0900
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <5d60c171-cb52-47f1-931b-6698b152bf21@googlegroups.com>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <54E12DBE.80909@mrabarnett.plus.com>
 <854mqmd9eo.fsf@benfinney.id.au> <mbrc4f$evu$1@ger.gmane.org>
 <mbrp6m$83d$1@ger.gmane.org> <54E180F2.2060209@canterbury.ac.nz>
 <71fbbfdc-f7cd-470c-89c4-bdaa6fe746d3@googlegroups.com>
 <CACac1F86kAL8ca=9-hGDoaZM7JfNARPzNYqH=yAtLyMPRSw8ig@mail.gmail.com>
 <5d60c171-cb52-47f1-931b-6698b152bf21@googlegroups.com>
Message-ID: <877fvhl1wd.fsf@uwakimon.sk.tsukuba.ac.jp>

Neil Girdhar writes:

 > I agree with that idea in general, but I do think that a with statement 
 > having parentheses would not be confusing for users to parse even though it 
 > would not be LL(1).

In this case, purity beat practicality, because practicality beats
purity on a higher level.

On the lower level, sticking to a pure LL(1) grammar rules out some
practical possibilities that some folks think are nice.  On the higher
level, in theory we'd like Python to provide concise expressions for
all constructs when human readers can handle the concise expression
fluently, but in practice the LL(1) rule is simple to follow (at least
for Python maintainers who actually hack the parser!)

Unfortunately (?!) it also prevents a lot of bike-shedding fun.<wink />


From mistersheik at gmail.com  Tue Feb 17 03:16:47 2015
From: mistersheik at gmail.com (Neil Girdhar)
Date: Mon, 16 Feb 2015 21:16:47 -0500
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <877fvhl1wd.fsf@uwakimon.sk.tsukuba.ac.jp>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <54E12DBE.80909@mrabarnett.plus.com> <854mqmd9eo.fsf@benfinney.id.au>
 <mbrc4f$evu$1@ger.gmane.org> <mbrp6m$83d$1@ger.gmane.org>
 <54E180F2.2060209@canterbury.ac.nz>
 <71fbbfdc-f7cd-470c-89c4-bdaa6fe746d3@googlegroups.com>
 <CACac1F86kAL8ca=9-hGDoaZM7JfNARPzNYqH=yAtLyMPRSw8ig@mail.gmail.com>
 <5d60c171-cb52-47f1-931b-6698b152bf21@googlegroups.com>
 <877fvhl1wd.fsf@uwakimon.sk.tsukuba.ac.jp>
Message-ID: <CAA68w_kZWDdBc1n9Dw9S0df0WeSbOAiCHMZMWmZgxLDck-eGDQ@mail.gmail.com>

If you read through the rest of the messages, you'll see that it seems that
we can have parentheses in an LL(1) grammar.

Best,

Neil

On Mon, Feb 16, 2015 at 9:14 PM, Stephen J. Turnbull <stephen at xemacs.org>
wrote:

> Neil Girdhar writes:
>
>  > I agree with that idea in general, but I do think that a with statement
>  > having parentheses would not be confusing for users to parse even
> though it
>  > would not be LL(1).
>
> In this case, purity beat practicality, because practicality beats
> purity on a higher level.
>
> On the lower level, sticking to a pure LL(1) grammar rules out some
> practical possibilities that some folks think are nice.  On the higher
> level, in theory we'd like Python to provide concise expressions for
> all constructs when human readers can handle the concise expression
> fluently, but in practice the LL(1) rule is simple to follow (at least
> for Python maintainers who actually hack the parser!)
>
> Unfortunately (?!) it also prevents a lot of bike-shedding fun.<wink />
>
>

From barry at python.org  Tue Feb 17 03:36:17 2015
From: barry at python.org (Barry Warsaw)
Date: Mon, 16 Feb 2015 21:36:17 -0500
Subject: [Python-ideas] Allow parentheses to be used with "with" block
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
Message-ID: <20150216213617.3efc8609@anarchist.wooz.org>

On Feb 15, 2015, at 01:52 PM, Neil Girdhar wrote:

>with (a as b,
>         c as d,
>         e as f):
>    suite

I brought this up a few years ago and IIRC, Guido rejected it.  I find that
it's much less of an issue now with Python's contextlib.ExitStack.
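For readers who haven't met it, a short sketch of the ExitStack pattern, which covers the variable-number-of-context-managers case that parenthesized with-statements aim at (the scratch files here are created just for the example):

```python
import os
import tempfile
from contextlib import ExitStack

# Create two scratch files so the example is self-contained.
paths = []
for text in ("first", "second"):
    fd, path = tempfile.mkstemp()
    os.write(fd, text.encode())
    os.close(fd)
    paths.append(path)

# Enter any number of context managers without nesting or new syntax;
# ExitStack unwinds them in reverse order when the block exits.
with ExitStack() as stack:
    files = [stack.enter_context(open(p)) for p in paths]
    contents = [f.read() for f in files]

print(contents)         # ['first', 'second']
print(files[0].closed)  # True
```

Unlike a parenthesized with, this also handles the case where the number of managers is only known at runtime.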

Cheers,
-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: OpenPGP digital signature
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150216/aaf2e942/attachment.sig>

From steve at pearwood.info  Tue Feb 17 03:37:29 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Tue, 17 Feb 2015 13:37:29 +1100
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <CAA68w_mJgKi3U+w4d90M-ow=gBjqcUoJOdOrhyu8NCZaCOr0yQ@mail.gmail.com>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <CAK9R32Sx+sqiYw+=Sho+fF1-heif6wrGP38Y7WVXUAwJBNjOgQ@mail.gmail.com>
 <8B01E171-2251-4226-A9B7-4E6A0C3C65CF@yahoo.com>
 <CAA68w_mJgKi3U+w4d90M-ow=gBjqcUoJOdOrhyu8NCZaCOr0yQ@mail.gmail.com>
Message-ID: <20150217023729.GQ2498@ando.pearwood.info>

On Mon, Feb 16, 2015 at 05:18:36PM -0500, Neil Girdhar wrote:
> I think the most consistent way to do this if we would do it is to support
> parentheses around the "with statement".  We now know that though it's not
> "trivial", it's not too hard to implement it in the way that Andrew
> suggests.  I'd consider doing it if there were an approved PEP and a
> guaranteed fast review.

Neil, I suggest you go and volunteer on the bug tracker. I posted a link 
to the open issue there. If you ask there, it won't be forgotten, and 
the core devs are more likely to take notice.


-- 
Steve

From steve at pearwood.info  Tue Feb 17 03:41:09 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Tue, 17 Feb 2015 13:41:09 +1100
Subject: [Python-ideas] Treat underscores specially in argument lists
In-Reply-To: <CAO41-mMH3FNGTC2jBCJB3NKpCD9OXVPn-jeZPbtZ=S734w+cJg@mail.gmail.com>
References: <CAO41-mMH3FNGTC2jBCJB3NKpCD9OXVPn-jeZPbtZ=S734w+cJg@mail.gmail.com>
Message-ID: <20150217024108.GR2498@ando.pearwood.info>

On Mon, Feb 16, 2015 at 07:23:30PM -0600, Ryan Gonzalez wrote:
> Often, underscores are used in function argument lists:
> 
> def f(_): pass
> 
> However, sometimes you want to ignore more than one argument, in which case
> this doesn't work:

Why are you ignoring *any* arguments?


-- 
Steve

From rymg19 at gmail.com  Tue Feb 17 03:45:44 2015
From: rymg19 at gmail.com (Ryan Gonzalez)
Date: Mon, 16 Feb 2015 20:45:44 -0600
Subject: [Python-ideas] Treat underscores specially in argument lists
In-Reply-To: <20150217024108.GR2498@ando.pearwood.info>
References: <CAO41-mMH3FNGTC2jBCJB3NKpCD9OXVPn-jeZPbtZ=S734w+cJg@mail.gmail.com>
 <20150217024108.GR2498@ando.pearwood.info>
Message-ID: <CAO41-mN-_LNtsmmWKOF5DrEJ7DZoTuY2tyjHYObJ3N3CmJLMFQ@mail.gmail.com>

It's usually in callbacks where you only care about a few arguments.

On Mon, Feb 16, 2015 at 8:41 PM, Steven D'Aprano <steve at pearwood.info>
wrote:

> On Mon, Feb 16, 2015 at 07:23:30PM -0600, Ryan Gonzalez wrote:
> > Often, underscores are used in function argument lists:
> >
> > def f(_): pass
> >
> > However, sometimes you want to ignore more than one argument, in which
> case
> > this doesn't work:
>
> Why are you ignoring *any* arguments?
>
>
> --
> Steve
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>



-- 
Ryan
If anybody ever asks me why I prefer C++ to C, my answer will be simple:
"It's becauseslejfp23(@#Q*(E*EIdc-SEGFAULT. Wait, I don't think that was
nul-terminated."
Personal reality distortion fields are immune to contradictory evidence. -
srean
Check out my website: http://kirbyfan64.github.io/

From rob.cliffe at btinternet.com  Tue Feb 17 03:56:09 2015
From: rob.cliffe at btinternet.com (Rob Cliffe)
Date: Tue, 17 Feb 2015 02:56:09 +0000
Subject: [Python-ideas] Allow "assigning" to Ellipse to discard values.
In-Reply-To: <54DCEDCD.7080408@stoneleaf.us>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <54DB695F.2080502@sotecware.net>
 <CAG8k2+4o53qVbOzo4XmdchonFBTEfcWamByWfZ9bao0=XiHauw@mail.gmail.com>
 <54DBB65F.1080108@canterbury.ac.nz> <54DCBF85.3060905@btinternet.com>
 <54DCEDCD.7080408@stoneleaf.us>
Message-ID: <54E2ADC9.9090805@btinternet.com>


On 12/02/2015 18:15, Ethan Furman wrote:
> On 02/12/2015 06:58 AM, Rob Cliffe wrote:
>> On 11/02/2015 20:06, Greg Ewing wrote:
>>> Daniel Holth wrote:
>>>> How about zero characters between commas? In ES6 destructuring
>>>> assignment you'd write something like ,,c = (1,2,3)
>>> Wouldn't play well with the syntax for 1-element tuples:
>>>
>>>     c, = stuff
>>>
>>> Is len(stuff) expected to be 1 or 2?
>> Does that matter?
> Yes, it matters.  I have code that had better raise an exception if 'stuff' is not exactly one element.
>
>> In either case the intent is to evaluate stuff and assign its first element to c.
> No, its intent is to assign the only element to c, and raise if there are more, or fewer, elements.
Hm.  With respect, that doesn't seem to me like a common use case. I 
would have thought that more often a mismatch between the number of 
elements to be unpacked and the number of target variables would be a 
bug.  (OK, perhaps it is but you want the runtime to pick up such bugs.)
>> We could have a rule that when the left hand side of an assignment is a tuple that ends in a final comma then unpacking
>> stops at the final comma, and it doesn't matter whether there are any more values to unpack.  (Perhaps multiple final
>> commas could mean "unpack until the last comma then stop, so
>>      c,,, = stuff
> a) That's ugly.
That's subjective.  (It may look like "grit on Tim's monitor", but it's 
*meaningful* grit, not grit required by overblown language syntax.)
> b) Special cases aren't special enough to break the rules.
I don't understand this comment.
>
>> would mean "raise an exception if len(stuff)<3; otherwise assign the first element of stuff to c".)
>> It could even work for infinite generators!
> Uh, you're kidding, right?
No, I wasn't kidding.  I was suggesting that only as many values as 
required by the LHS should be extracted from the generator.  So there is 
no infinite loop even though the generator itself is (potentially or 
actually) non-terminating.
>
>> But making something "work" instead of raising an exception is likely to break much less code than
>> changing/stopping something from "working".
> It would break mine, so no thanks.
See above.  Your use case doesn't strike me as typical.
>
> --
> ~Ethan~
>
>
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>
>


From breamoreboy at yahoo.co.uk  Tue Feb 17 04:05:22 2015
From: breamoreboy at yahoo.co.uk (Mark Lawrence)
Date: Tue, 17 Feb 2015 03:05:22 +0000
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <20150217023729.GQ2498@ando.pearwood.info>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <CAK9R32Sx+sqiYw+=Sho+fF1-heif6wrGP38Y7WVXUAwJBNjOgQ@mail.gmail.com>
 <8B01E171-2251-4226-A9B7-4E6A0C3C65CF@yahoo.com>
 <CAA68w_mJgKi3U+w4d90M-ow=gBjqcUoJOdOrhyu8NCZaCOr0yQ@mail.gmail.com>
 <20150217023729.GQ2498@ando.pearwood.info>
Message-ID: <mbub5r$vtg$1@ger.gmane.org>

On 17/02/2015 02:37, Steven D'Aprano wrote:
> On Mon, Feb 16, 2015 at 05:18:36PM -0500, Neil Girdhar wrote:
>> I think the most consistent way to do this if we would do it is to support
>> parentheses around the "with statement".  We now know that though it's not
>> "trivial", it's not too hard to implement it in the way that Andrew
>> suggests.  I'd consider doing it if there were an approved PEP and a
>> guaranteed fast review.
>
> Neil, I suggest you go and volunteer on the bug tracker. I posted a link
> to the open issue there. If you ask there, it won't be forgotten, and
> the core devs are more likely to take notice.
>

http://bugs.python.org/issue12782 has been closed as "won't fix". 
Presumably it can be reopened if there's enough interest.  On the other 
hand it states "Use a contextlib.ExitStack." which has already been 
referenced here.

Just my £0.02p worth.

-- 
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence


From stephen at xemacs.org  Tue Feb 17 04:50:57 2015
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Tue, 17 Feb 2015 12:50:57 +0900
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <CAPTjJmqfX4uxd3=9U1daHRd_iU1aFbMj3ZOj4gqRAGtX0EOUtQ@mail.gmail.com>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <CAK9R32Sx+sqiYw+=Sho+fF1-heif6wrGP38Y7WVXUAwJBNjOgQ@mail.gmail.com>
 <20150216133158.GN2498@ando.pearwood.info>
 <CAPTjJmqfX4uxd3=9U1daHRd_iU1aFbMj3ZOj4gqRAGtX0EOUtQ@mail.gmail.com>
Message-ID: <8761b1kxge.fsf@uwakimon.sk.tsukuba.ac.jp>

Chris Angelico writes:

 > But since there aren't many unary/binary collisions anyway
 > - the only ones I can think of are "+" and "-"

"*" and "**" (the latter may require PEP 422?)


From abarnert at yahoo.com  Tue Feb 17 05:01:01 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Mon, 16 Feb 2015 20:01:01 -0800
Subject: [Python-ideas] Adding ziplib
In-Reply-To: <CAO41-mMqBm3u+-Lu3RDBdxcW1R3FrPBCUeS6erRzPwMWCf8P2A@mail.gmail.com>
References: <CAO41-mNr1=z_JT=aKjBbOZzrYV5Uc=GG_P5No5mcgNNVMVMcOw@mail.gmail.com>
 <0676A880-8750-443A-A953-932E965EDB12@yahoo.com>
 <CAO41-mMqBm3u+-Lu3RDBdxcW1R3FrPBCUeS6erRzPwMWCf8P2A@mail.gmail.com>
Message-ID: <731C8D7E-091C-45AC-94BD-239F204B16B6@yahoo.com>

On Feb 16, 2015, at 17:00, Ryan Gonzalez <rymg19 at gmail.com> wrote:

> On Mon, Feb 16, 2015 at 5:32 PM, Andrew Barnert <abarnert at yahoo.com> wrote:
>> I love the idea (I've had to write my own custom wrapper around a subset of functionality for zip and tgz a few times, and I don't remember enjoying it...), and I especially like the idea of starting with the ABCs (which would hopefully encourage all those people writing cpio and 7z and whatever parsers to conform to the same API, even if they do so by wrapping LGPL libs like libarchive or even subprocessing out to command-line tools).
>> 
>> But why "ziplib"? Why not something generic like "archive" that doesn't sound like it's specific to ZIP files?
> 
> This occurred to me a few moments ago. How about arclib?

Sounds good to me--parallel with hashlib and, unlike archive, arclib is still an unused name on PyPI.

One last thing: it might be worth looking at the libarchive C API and the SWIG wrapper on PyPI to see if there's anything useful to steal for your API design. IIRC, it also has a nifty shim to provide the zipfile and tarfile APIs on top of libarchive, which could be handy for your original use case (making legacy tarfile-based code work with zip archives without much rewriting).

>> Also, this definitely seems like something that would be worth putting on PyPI first and then proposing for the stdlib. You're pretty much guaranteed to need a backport for 3.4 and 2.7 users anyway, right?
> 
> I'll look into that.
>  
>> Sent from a random iPhone
>> 
>> On Feb 16, 2015, at 15:10, Ryan Gonzalez <rymg19 at gmail.com> wrote:
>> 
>>> Python has support for several compression methods, which is great...sort of.
>>> 
>>> Recently, I had to port an application that used zips to use tar+gzip. It *sucked*. Why? Inconsistency.
>>> 
>>> For instance, zipfile uses write to add files, tar uses add. Then, tar has addfile (which confusingly does not add a file), and zip has writestr, for which a tar alternative has been proposed on the bug tracker but hasn't had any activity for a while.
>>> 
>>> Then, zipfile and tarfile's extract methods have slightly different arguments. ZipFile has namelist, and tarfile has getnames. It's all a mess.
>>> 
>>> The idea is to reorganize Python's zip support just like the sha and md5 modules were put into the hashlib module. Name? ziplib.
>>> 
>>> It would have a few ABCs:
>>> 
>>> - an ArchiveFile class that ZipFile and TarFile would derive from.
>>> - a BasicArchiveFile class that ArchiveFile extends. bz2, lzma, and (maybe) gzip would be derived from this one.
>>> - ArchiveFileInfo, which ZipInfo and TarInfo would derive from. ArchiveFile and ArchiveFileInfo go hand-in-hand.
>>> 
>>> Now, the APIs would be more consistent. Of course, some have methods that others don't have, but the usage would be easier. zlib would be completely separate, since it has its own API and all.
>>> 
>>> I'm not quite sure how gzip fits into this, since it has a very minimal API.
>>> 
>>> Thoughts?
>>> 
>>> -- 
>>> Ryan
>>> If anybody ever asks me why I prefer C++ to C, my answer will be simple: "It's becauseslejfp23(@#Q*(E*EIdc-SEGFAULT. Wait, I don't think that was nul-terminated."
>>> Personal reality distortion fields are immune to contradictory evidence. - srean
>>> Check out my website: http://kirbyfan64.github.io/
>>> _______________________________________________
>>> Python-ideas mailing list
>>> Python-ideas at python.org
>>> https://mail.python.org/mailman/listinfo/python-ideas
>>> Code of Conduct: http://python.org/psf/codeofconduct/
> 
> 
> 
> -- 
> Ryan
> If anybody ever asks me why I prefer C++ to C, my answer will be simple: "It's becauseslejfp23(@#Q*(E*EIdc-SEGFAULT. Wait, I don't think that was nul-terminated."
> Personal reality distortion fields are immune to contradictory evidence. - srean
> Check out my website: http://kirbyfan64.github.io/

From rosuav at gmail.com  Tue Feb 17 05:03:55 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Tue, 17 Feb 2015 15:03:55 +1100
Subject: [Python-ideas] Allow parentheses to be used with "with" block
In-Reply-To: <8761b1kxge.fsf@uwakimon.sk.tsukuba.ac.jp>
References: <a6174f75-7505-41e9-90de-b51fb90e6b61@googlegroups.com>
 <CAK9R32Sx+sqiYw+=Sho+fF1-heif6wrGP38Y7WVXUAwJBNjOgQ@mail.gmail.com>
 <20150216133158.GN2498@ando.pearwood.info>
 <CAPTjJmqfX4uxd3=9U1daHRd_iU1aFbMj3ZOj4gqRAGtX0EOUtQ@mail.gmail.com>
 <8761b1kxge.fsf@uwakimon.sk.tsukuba.ac.jp>
Message-ID: <CAPTjJmouBGM6oyADkJAw2f8AoD+YLPHgB9vLz3nfuMMTZ7vaXA@mail.gmail.com>

On Tue, Feb 17, 2015 at 2:50 PM, Stephen J. Turnbull <stephen at xemacs.org> wrote:
> Chris Angelico writes:
>
>  > But since there aren't many unary/binary collisions anyway
>  > - the only ones I can think of are "+" and "-"
>
> "*" and "**" (the latter may require PEP 422?)

Ah, I don't really think of those as operators, but I guess they are.
(Fortunately, they are again prefix operators, so it's unambiguous.)
Outside of PEP 448, both of these, in their unary forms, must be
inside parentheses, right? So it's impossible to "trail" them in this
way?

In theory, it'd be possible to break after a unary plus or minus:

value = 5 + -
    3

ChrisA

From me at the-compiler.org  Tue Feb 17 05:20:48 2015
From: me at the-compiler.org (Florian Bruhin)
Date: Tue, 17 Feb 2015 05:20:48 +0100
Subject: [Python-ideas] Adding ziplib
In-Reply-To: <CAO41-mMqBm3u+-Lu3RDBdxcW1R3FrPBCUeS6erRzPwMWCf8P2A@mail.gmail.com>
References: <CAO41-mNr1=z_JT=aKjBbOZzrYV5Uc=GG_P5No5mcgNNVMVMcOw@mail.gmail.com>
 <0676A880-8750-443A-A953-932E965EDB12@yahoo.com>
 <CAO41-mMqBm3u+-Lu3RDBdxcW1R3FrPBCUeS6erRzPwMWCf8P2A@mail.gmail.com>
Message-ID: <20150217042048.GZ24753@tonks>

* Ryan Gonzalez <rymg19 at gmail.com> [2015-02-16 19:00:33 -0600]:
> On Mon, Feb 16, 2015 at 5:32 PM, Andrew Barnert <abarnert at yahoo.com> wrote:
> 
> > I love the idea (I've had to write my own custom wrapper around a subset
> > of functionality for zip and tgz a few times, and I don't remember enjoying
> > it...), and I especially like the idea of starting with the ABCs (which
> > would hopefully encourage all those people writing cpio and 7z and whatever
> > parsers to conform to the same API, even if they do so by wrapping LGPL
> > libs like libarchive or even subprocessing out to command-line tools).
> >
> > But why "ziplib"? Why not something generic like "archive" that doesn't
> > sound like it's specific to ZIP files?
> >
> >
> This occurred to me a few moments ago. How about arclib?

To me, that sounds like a library to handle ARC files:

http://en.wikipedia.org/wiki/ARC_(file_format)

Florian

-- 
http://www.the-compiler.org | me at the-compiler.org (Mail/XMPP)
   GPG: 916E B0C8 FD55 A072 | http://the-compiler.org/pubkey.asc
         I love long mails! | http://email.is-not-s.ms/

From abarnert at yahoo.com  Tue Feb 17 05:24:52 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Mon, 16 Feb 2015 20:24:52 -0800
Subject: [Python-ideas] Treat underscores specially in argument lists
In-Reply-To: <CAO41-mN-_LNtsmmWKOF5DrEJ7DZoTuY2tyjHYObJ3N3CmJLMFQ@mail.gmail.com>
References: <CAO41-mMH3FNGTC2jBCJB3NKpCD9OXVPn-jeZPbtZ=S734w+cJg@mail.gmail.com>
 <20150217024108.GR2498@ando.pearwood.info>
 <CAO41-mN-_LNtsmmWKOF5DrEJ7DZoTuY2tyjHYObJ3N3CmJLMFQ@mail.gmail.com>
Message-ID: <7F0DB462-3A15-4DF8-AA95-A0D1525F57D0@yahoo.com>

On Feb 16, 2015, at 18:45, Ryan Gonzalez <rymg19 at gmail.com> wrote:

> It's usually in callbacks where you only care about a few arguments.

A perfect example is using Tk variable tracing in tkinter (for validation or post-change triggering on widgets). The signature of the trace callback is callback(name, index, mode). You almost never need the index, you rarely need the mode, so you write:

    def callback(self, name, _, mode):

In fact, you usually don't need the mode either (in fact, you often don't even need the name), but you can't write this:

   def callback(self, name, _, _):

So instead you usually write something like:

   def callback(self, name, *_):

Personally, I find *_ ugly, so I use *dummy instead. But whatever.

So anyway, what if you needed the index, but not the name or mode? You can't use *_ to get around that; you have to come up with two dummy names. That's a bit annoying.
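One workaround today is to accept *args and pick positions out of it; the callback shape below mirrors the Tk trace signature mentioned above, but the code itself is purely illustrative:

```python
def trace_callback(*args):
    # Tk passes (name, index, mode); keep only the index (position 1)
    # without inventing dummy names for the other two.
    index = args[1]
    return index

print(trace_callback("varname", "2", "w"))  # '2'
```

The cost is that the real signature disappears from the def line, which is exactly the readability problem being discussed.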

The question is, how often do you really need to ignore multiple parameters, but not all of them (or all of them from this point on)? I can't remember any callback API that I've used where that came up, but I can imagine someone else has one:

> On Mon, Feb 16, 2015 at 8:41 PM, Steven D'Aprano <steve at pearwood.info> wrote:
>> On Mon, Feb 16, 2015 at 07:23:30PM -0600, Ryan Gonzalez wrote:
>> > Often, underscores are used in function argument lists:
>> >
>> > def f(_): pass
>> >
>> > However, sometimes you want to ignore more than one argument, in which case
>> > this doesn't work:
>> 
>> Why are you ignoring *any* arguments?
>> 
>> 
>> --
>> Steve
>> _______________________________________________
>> Python-ideas mailing list
>> Python-ideas at python.org
>> https://mail.python.org/mailman/listinfo/python-ideas
>> Code of Conduct: http://python.org/psf/codeofconduct/
> 
> 
> 
> -- 
> Ryan
> If anybody ever asks me why I prefer C++ to C, my answer will be simple: "It's becauseslejfp23(@#Q*(E*EIdc-SEGFAULT. Wait, I don't think that was nul-terminated."
> Personal reality distortion fields are immune to contradictory evidence. - srean
> Check out my website: http://kirbyfan64.github.io/
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150216/06d20323/attachment.html>

From chris.barker at noaa.gov  Tue Feb 17 06:01:41 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Mon, 16 Feb 2015 21:01:41 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <CAO41-mNrXsp0xp1W=v=RbPA6Kbmc1VrqDN3qe9XGkq8yVBEmTw@mail.gmail.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <20150206000438.1adeb478@fsol>
 <CALGmxEJXJQTGdpjttg2LNmjJVTrbuMW0bNiAC9-MFX65LJAORQ@mail.gmail.com>
 <20150206132852.2974b66c@fsol>
 <CACac1F8xBxEpSXU4EtqohJSXYKMbH_48gZo0=1XycwN8jD7ttw@mail.gmail.com>
 <CADiSq7du-M+tCTvaU0PdRCRN01x=2JMFS7Bw3Q13a4x0aSaY-w@mail.gmail.com>
 <CAEbHw4bJOz1w82=Nt3ZU=3o4h-fx9FKzE+dfS4ng=N2-rNLoKQ@mail.gmail.com>
 <CALGmxE+E0O5y4J-NqEQomt-THEVWoPcnJSXrRXJrn-sN6XQKRA@mail.gmail.com>
 <CAMpsgwbCeHW8eHA+EidNDVB9tE7EKFJESBEocy2jK2okwaR6pA@mail.gmail.com>
 <CAO41-mNrXsp0xp1W=v=RbPA6Kbmc1VrqDN3qe9XGkq8yVBEmTw@mail.gmail.com>
Message-ID: <CALGmxE+tKjJ4b2aS8ZZasR8q4-4SmWJ-cCByKxG_jySNSo1waA@mail.gmail.com>

On Mon, Feb 16, 2015 at 2:57 PM, Ryan Gonzalez <rymg19 at gmail.com> wrote:

> Doesn't Python already have this with unittest.TestCase.assertAlmostEqual
> <https://docs.python.org/3.4/library/unittest.html#unittest.TestCase.assertAlmostEqual>
> ?
>

Please read the PEP -- assertAlmostEqual is an absolute tolerance test,
not a relative tolerance test.
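To illustrate the distinction, here is a simplified sketch of the PEP's
relative test (not the final implementation) next to the absolute check
assertAlmostEqual performs:

```python
def isclose(a, b, rel_tol=1e-9, abs_tol=0.0):
    # Simplified sketch of the PEP 485 proposal: the allowed difference
    # scales with the magnitude of the inputs (relative tolerance).
    return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)

# assertAlmostEqual checks round(a - b, 7) == 0: a fixed *absolute*
# tolerance, so large numbers that agree to 9 significant digits fail it.
print(isclose(1e9, 1e9 + 1))           # True: relative difference ~1e-9
print(round(1e9 - (1e9 + 1), 7) == 0)  # False: the absolute check fails
```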

-Chris



>
> On Sat, Feb 14, 2015 at 7:26 AM, Victor Stinner <victor.stinner at gmail.com>
> wrote:
>
>> Hi,
>>
>> Do you propose a builtin function? I would prefer to put it in the math
>> module.
>>
>> You may add a new method to unittest.TestCase? It would show all
>> parameters on error. .assertTrue(isclose(...)) doesn't help on error.
>>
>> Victor
>>
>> _______________________________________________
>> Python-ideas mailing list
>> Python-ideas at python.org
>> https://mail.python.org/mailman/listinfo/python-ideas
>> Code of Conduct: http://python.org/psf/codeofconduct/
>>
>
>
>
> --
> Ryan
> If anybody ever asks me why I prefer C++ to C, my answer will be simple:
> "It's becauseslejfp23(@#Q*(E*EIdc-SEGFAULT. Wait, I don't think that was
> nul-terminated."
> Personal reality distortion fields are immune to contradictory evidence. -
> srean
> Check out my website: http://kirbyfan64.github.io/
>



-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150216/eb0bf017/attachment.html>

From chris.barker at noaa.gov  Tue Feb 17 06:11:11 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Mon, 16 Feb 2015 21:11:11 -0800
Subject: [Python-ideas] PEP 485: A Function for testing approximate
	equality
In-Reply-To: <f41ecfd2-743e-40e7-bdef-043ff37ac19e@googlegroups.com>
References: <CALGmxE+5yutpEreb3XJdt-ZSoT4ve54VQAb24-xVimkzDmtVFw@mail.gmail.com>
 <f41ecfd2-743e-40e7-bdef-043ff37ac19e@googlegroups.com>
Message-ID: <CALGmxE+Ng4kvmYh9MPuvf6amP0D9VBeBWsCCMMeZv8Mrvaqpbg@mail.gmail.com>

On Sun, Feb 15, 2015 at 2:07 PM, Neil Girdhar <mistersheik at gmail.com> wrote:

> I'm +1 on the idea, +1 on the name, but -1 on symmetry.
>
> I'm glad that you included "other approaches" (what people actually do in
> the field).  However, your argument that "is it that hard to just type it
> in" -- the whole point of this method is to give a self-commenting line that
> describes what the reader means.
>

yes. sure.


>  Doesn't it make sense to say:  is *my value* close to *some true value*?
> If you wanted a symmetric test in english, you might say "are these values
> close to each other"?
>

exactly -- it also makes sense to ask "are these values close to each
other". The problem is that you can't have one function that does both
(without flags or optional parameters). After a lengthy discussion, I
decided that the latter was a bit more useful and less surprising.

And do read carefully -- if your tolerance is small -- it doesn't matter at
all.
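A sketch of the two candidate behaviors (the function names are made up
for illustration):

```python
def isclose_symmetric(a, b, rel_tol):
    # "are these values close to each other" -- argument order is irrelevant
    return abs(a - b) <= rel_tol * max(abs(a), abs(b))

def isclose_asymmetric(actual, expected, rel_tol):
    # "is my value close to some true value" -- tolerance scales with
    # the expected value only, so argument order matters
    return abs(actual - expected) <= rel_tol * abs(expected)

# With a small tolerance the two tests agree:
print(isclose_symmetric(1.0, 1.0 + 1e-10, 1e-9))   # True
print(isclose_asymmetric(1.0 + 1e-10, 1.0, 1e-9))  # True

# Only with a large tolerance do they diverge:
print(isclose_symmetric(2.0, 1.0, 0.5))   # True:  1.0 <= 0.5 * 2.0
print(isclose_asymmetric(2.0, 1.0, 0.5))  # False: 1.0 >  0.5 * 1.0
```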

-Chris




-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150216/fa3ec256/attachment-0001.html>

From tjreedy at udel.edu  Tue Feb 17 06:19:17 2015
From: tjreedy at udel.edu (Terry Reedy)
Date: Tue, 17 Feb 2015 00:19:17 -0500
Subject: [Python-ideas] Gettext syntax (was Re: Allow "assigning" to ...)
In-Reply-To: <20150213112610.0ddafe47@anarchist.wooz.org>
References: <SNT405-EAS412F35C9752F901C7315C0EBE240@phx.gbl>
 <CALGmxEJWt+MkyePSgvKnxGvi7f-6rLaB7dFbuUFPb4qb8N8Ajw@mail.gmail.com>
 <20150211164615.1fe33cab@fsol>
 <CADiSq7cTrFzhy30zjn0Wz6wacqSQH_1iyw==UJomCL6b8xr9_Q@mail.gmail.com>
 <mbh6u0$fr7$1@ger.gmane.org> <20150212110929.1cc16d6c@anarchist.wooz.org>
 <mbjkks$cec$1@ger.gmane.org> <20150213112610.0ddafe47@anarchist.wooz.org>
Message-ID: <mbuj0n$gmv$1@ger.gmane.org>

On 2/13/2015 11:26 AM, Barry Warsaw wrote:
> On Feb 12, 2015, at 08:39 PM, Terry Reedy wrote:

>> I18n of existing code would be easier if Tools/i18n had a stringmark.py
>> script that would either mark all strings with _() (or alternative) or
>> interactively let one say yes or no to each.
>
> I'm not sure what you're asking for; do you want something you could point at
> a Python file and it would rewrite it after interactively asking for
> translatability or not?

Yes.

>  Hmm, maybe that would be cool.  I look forward to the
> contribution. <wink>

If I end up needing to do much marking, I will consider doing so.

-- 
Terry Jan Reedy


From luciano at ramalho.org  Tue Feb 17 07:47:32 2015
From: luciano at ramalho.org (Luciano Ramalho)
Date: Tue, 17 Feb 2015 04:47:32 -0200
Subject: [Python-ideas] A send() built-in function to drive coroutines
In-Reply-To: <CAP7+vJ+m=ENutvvNxUdTSF4z_CGmep4-JRVuQ6a0p7_EAnXaxA@mail.gmail.com>
References: <CALxg4FUe2-KvzDoh2G=u2ZDiUoLrWFPg8+AFVFA8LNcE+guFXQ@mail.gmail.com>
 <CAP7+vJ+m=ENutvvNxUdTSF4z_CGmep4-JRVuQ6a0p7_EAnXaxA@mail.gmail.com>
Message-ID: <CALxg4FXqeowb0SE-nr2bRw0kd_nGGXDoLEpCZrnMP3f+LKFzZg@mail.gmail.com>

On Mon, Feb 16, 2015 at 8:30 PM, Guido van Rossum <guido at python.org> wrote:
> This is brought up from time to time. I believe it is based on a
> misunderstanding of coroutines.

Thanks, Guido, if you could elaborate that would be great.

Meanwhile I'll try to guess what you may be referring to, please
correct me if I am wrong: the misunderstanding has to do with the fact
that between the priming next() and the first send(), the client code
should be doing something interesting. If that is not the case, then
the client code and the coroutine are not cooperating fully. Is that
it?

Best,

Luciano



>
> On Mon, Feb 16, 2015 at 4:53 AM, Luciano Ramalho <luciano at ramalho.org>
> wrote:
>>
>> Coroutines are useless until primed by a next(my_coro) call. There are
>> decorators to solve this by returning a primed coroutine. Such
>> decorators add another problem: now the user may be unsure whether a
>> specific coroutine from a third-party API should be primed or not
>> prior to use.
>>
>> How about having a built-in function named send() with the signature
>> send(coroutine, value) that does the right thing: sends a value to the
>> coroutine, priming it if necessary. So instead of writing this:
>>
>> >>> next(my_coro)
>> >>> my_coro.send(a_value)
>>
>> You could always write this:
>>
>> >>> send(my_coro, a_value)
>>
>> At a high level, the behavior of send() would be like this:
>>
>> def send(coroutine, value):
>>     if inspect.getgeneratorstate(coroutine) == 'GEN_CREATED':
>>         next(coroutine)
>>     return coroutine.send(value)
>>
>> Consider the above pseudo-code. Proper handling of other generator
>> states should be added.
>>
>> This would make coroutines easier to use, and would make their basic
>> usage more consistent with the usage of plain generators using a
>> function syntax -- eg. next(my_gen) -- instead of method call syntax.
>>
>> What do you think? If this has been discussed before, please send me a
>> link (I searched but came up empty).
>>
>> Best,
>>
>> Luciano
>>
>>
>>
>> --
>> Luciano Ramalho
>> Twitter: @ramalhoorg
>>
>> Professor em: http://python.pro.br
>> Twitter: @pythonprobr
>> _______________________________________________
>> Python-ideas mailing list
>> Python-ideas at python.org
>> https://mail.python.org/mailman/listinfo/python-ideas
>> Code of Conduct: http://python.org/psf/codeofconduct/
>
>
>
>
> --
> --Guido van Rossum (python.org/~guido)



-- 
Luciano Ramalho
Twitter: @ramalhoorg

Professor em: http://python.pro.br
Twitter: @pythonprobr
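A runnable sketch of the proposed send() helper follows; the handling of
states other than GEN_CREATED is an assumption about the intended
behavior, not part of the proposal as written:

```python
import inspect

def send(coroutine, value):
    """Send value to coroutine, priming it first if it hasn't started.

    Sketch of the proposed builtin; getgeneratorstate() reports
    'GEN_CREATED' for a coroutine that has never been advanced.
    """
    if inspect.getgeneratorstate(coroutine) == 'GEN_CREATED':
        next(coroutine)  # prime: advance to the first yield
    return coroutine.send(value)

def averager():
    # classic coroutine example: yields the running average
    total, count, avg = 0.0, 0, None
    while True:
        term = yield avg
        total += term
        count += 1
        avg = total / count

coro = averager()
print(send(coro, 10))  # primed automatically, then sent 10 -> 10.0
print(send(coro, 20))  # -> 15.0
```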

From stephen at xemacs.org  Tue Feb 17 08:58:55 2015
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Tue, 17 Feb 2015 16:58:55 +0900
Subject: [Python-ideas] Treat underscores specially in argument lists
In-Reply-To: <7F0DB462-3A15-4DF8-AA95-A0D1525F57D0@yahoo.com>
References: <CAO41-mMH3FNGTC2jBCJB3NKpCD9OXVPn-jeZPbtZ=S734w+cJg@mail.gmail.com>
 <20150217024108.GR2498@ando.pearwood.info>
 <CAO41-mN-_LNtsmmWKOF5DrEJ7DZoTuY2tyjHYObJ3N3CmJLMFQ@mail.gmail.com>
 <7F0DB462-3A15-4DF8-AA95-A0D1525F57D0@yahoo.com>
Message-ID: <87vbj1j7eo.fsf@uwakimon.sk.tsukuba.ac.jp>

Andrew Barnert writes:

 > So anyway, what if you needed the index, but not the name or mode? 
 > You can't use *_ to get around that; you have to come up with two
 > dummy names. That's a bit annoying.

If there's one such function, I'd say "suck it up".  If there are two
or more, hit it with a two-by-four and it will submit:

def twobyfour(func):
    def wrapper(real1, dummy1, real2, dummy2):
        return func(real1, real2)
    return wrapper

 > >> Why are you ignoring *any* arguments?

I wouldn't express the idea that way.  Obviously we're ignoring
arguments because there's a caller that supplies them.

Rather, I'd say "in my experience ignoring arguments means a badly
designed API that should be refactored."  That's true in callbacks as
well as in any other function.  Granted, sometimes you can't refactor,
but I think more often than not ignoring arguments is a code smell and
you can.
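The two-by-four can be generalized into a small decorator factory
(a hypothetical helper, not a stdlib proposal):

```python
import functools

def ignore_args(*keep):
    """Decorator factory: pass through only the positional args at
    the given indexes, discarding the rest."""
    def deco(func):
        @functools.wraps(func)
        def wrapper(*args):
            return func(*(args[i] for i in keep))
        return wrapper
    return deco

@ignore_args(0, 2)       # keep name and mode, drop index
def callback(name, mode):
    return (name, mode)

print(callback("myvar", 0, "w"))  # -> ('myvar', 'w')
```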


From jeanpierreda at gmail.com  Tue Feb 17 09:04:13 2015
From: jeanpierreda at gmail.com (Devin Jeanpierre)
Date: Tue, 17 Feb 2015 00:04:13 -0800
Subject: [Python-ideas] Treat underscores specially in argument lists
In-Reply-To: <87vbj1j7eo.fsf@uwakimon.sk.tsukuba.ac.jp>
References: <CAO41-mMH3FNGTC2jBCJB3NKpCD9OXVPn-jeZPbtZ=S734w+cJg@mail.gmail.com>
 <20150217024108.GR2498@ando.pearwood.info>
 <CAO41-mN-_LNtsmmWKOF5DrEJ7DZoTuY2tyjHYObJ3N3CmJLMFQ@mail.gmail.com>
 <7F0DB462-3A15-4DF8-AA95-A0D1525F57D0@yahoo.com>
 <87vbj1j7eo.fsf@uwakimon.sk.tsukuba.ac.jp>
Message-ID: <CABicbJ+x=RxUWVmpFWtSgznj5+s2ysZUOW3VRHAF_aRVojh4Qw@mail.gmail.com>

Our experiences differ, then. I've never seen an unused argument that
wasn't fairly justifiable and OK with me.

OTOH, I *have* seen code break because it renamed a parameter to
"unused_foo" instead of foo, and when the method was called with
foo=2... kablooie.

-1 on encouraging the antipattern of renaming your public interfaces
to indicate implementation details.

-- Devin

On Mon, Feb 16, 2015 at 11:58 PM, Stephen J. Turnbull
<stephen at xemacs.org> wrote:
> Andrew Barnert writes:
>
>  > So anyway, what if you needed the index, but not the name or mode?
>  > You can't use *_ to get around that; you have to come up with two
>  > dummy names. That's a bit annoying.
>
> If there's one such function, I'd say "suck it up".  If there are two
> or more, hit it with a two-by-four and it will submit:
>
> def twobyfour (func):
>     def wrapper(real1, dummy1, real2, dummy2):
>         return func(real1, real2)
>     return wrapper
>
>  > >> Why are you ignoring *any* arguments?
>
> I wouldn't express the idea that way.  Obviously we're ignoring
> arguments because there's a caller that supplies them.
>
> Rather, I'd say "in my experience ignoring arguments means a badly
> designed API that should be refactored."  That's true in callbacks as
> well as in any other function.  Granted, sometimes you can't refactor,
> but I think more often than not ignoring arguments is a code smell and
> you can.
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/

From abarnert at yahoo.com  Tue Feb 17 10:38:35 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Tue, 17 Feb 2015 01:38:35 -0800
Subject: [Python-ideas] Treat underscores specially in argument lists
In-Reply-To: <87vbj1j7eo.fsf@uwakimon.sk.tsukuba.ac.jp>
References: <CAO41-mMH3FNGTC2jBCJB3NKpCD9OXVPn-jeZPbtZ=S734w+cJg@mail.gmail.com>
 <20150217024108.GR2498@ando.pearwood.info>
 <CAO41-mN-_LNtsmmWKOF5DrEJ7DZoTuY2tyjHYObJ3N3CmJLMFQ@mail.gmail.com>
 <7F0DB462-3A15-4DF8-AA95-A0D1525F57D0@yahoo.com>
 <87vbj1j7eo.fsf@uwakimon.sk.tsukuba.ac.jp>
Message-ID: <03953895-3B24-47D2-94ED-25E23846DDCE@yahoo.com>

On Feb 16, 2015, at 23:58, "Stephen J. Turnbull" <stephen at xemacs.org> wrote:

> Andrew Barnert writes:
> 
>> So anyway, what if you needed the index, but not the name or mode? 
>> You can't use *_ to get around that; you have to come up with two
>> dummy names. That's a bit annoying.
> 
> If there's one such function, I'd say "suck it up".  If there are two
> or more, hit it with a two-by-four and it will submit:
> 
> def twobyfour (func):
>    def wrapper(real1, dummy1, real2, dummy2):
>        return func(real1, real2)
>    return wrapper
> 
>>>> Why are you ignoring *any* arguments?
> 
> I wouldn't express the idea that way.  Obviously we're ignoring
> arguments because there's a caller that supplies them.
> 
> Rather, I'd say "in my experience ignoring arguments means a badly
> designed API that should be refactored."  That's true in callbacks as
> well as in any other function.  Granted, sometimes you can't refactor,
> but I think more often than not ignoring arguments is a code smell and
> you can.

Sure, the tkinter var tracing callback interface is not very Pythonic--but that's because it's a thin wrapper around a Tcl interface. And keeping it thin means that when tkinter doesn't document everything, you can go read the Tcl/Tk docs, so I don't think you'd want to change that.

(In fact, years ago, I actually did write a higher-level validation framework for Tkinter apps to deal with the haphazard variety of bindable events, validate functions, and tracing functions that different widgets provide in a more consistent and Pythonic way. I found it to be a bit easier to read, but a whole lot harder to debug, so I stopped using it.)

And the same goes for C callback functions, ObjC blocks, JS DOM callbacks, PyCOM event handlers, etc. Or even large legacy Python frameworks. You pick the best framework for the job, and if it has parts of its interface that aren't to your liking, sometimes you have to deal with that.

From lkb.teichmann at gmail.com  Tue Feb 17 11:10:00 2015
From: lkb.teichmann at gmail.com (Martin Teichmann)
Date: Tue, 17 Feb 2015 11:10:00 +0100
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
In-Reply-To: <CAK9R32RovaEawxzFf03Kq1f+WgC4NOvhWhxjLOwjyaSt90Nr5A@mail.gmail.com>
References: <CAK9R32TNjkkJpxA6LC5GqU0digZv9TQZ=wuveA+CvjaxqBD50w@mail.gmail.com>
 <CA+=+wqAz6C5_QCLe2Cfc-uFi-5hWP7B-M9Q2yqi2K75UHDstcQ@mail.gmail.com>
 <CAK9R32RovaEawxzFf03Kq1f+WgC4NOvhWhxjLOwjyaSt90Nr5A@mail.gmail.com>
Message-ID: <CAK9R32QpPHXRT2AiDaE=7KKeTtFYvc4oXcZWnusCykW2dGndgA@mail.gmail.com>

Hi Petr, Hi Andrew, Hi list,

(responding to my own post, as I cannot respond to Petr's and Andrew's
at the same time)

> - is it desirable to have a programmatic way to combine metaclasses?

there seems to be general agreement that there should be no automatic
combination of metaclasses, while many think that explicit combination
should be possible.

The only way to do that currently is by subclassing existing metaclasses.
This is typically done by the user of the metaclass, not its author.
It would be a nice thing if an author could do that. Petr argues s/he
already can, by subclassing. Unfortunately, this is not so easy as this
means that every user of such a class then has to use all inherited
metaclasses.

As an example: imagine you have a metaclass that mixes well with
PyQt's metaclass. Then you could simply inherit from it. But this
would mean that every user of your class also has to install PyQt,
no matter if the code has anything to do with it.

Now Thomas argues that this should simply not be done at all.
I understand his argument well that multiple inheritance across
project boundaries is probably a bad idea in most cases.

But I have an example where it does make a lot of sense: ABCs.
It would just be nice if one could implement an ABC which is a
QObject. That's what ABCs are for. Sure, you can do this by
registering the class, but this means you lose a lot, like the
predefined methods of an ABC. But maybe we can just
special-case ABCs.

> - if yes, how should that be done?

So I proposed two ways how this could be implemented:
One is to change the current metaclass combination code to check
whether a class is a subclass not by PyType_IsSubtype, but by
PyObject_IsSubclass, meaning that the __subclasshook__ is
called. This has the advantage of being only a minimal change
to the Python core; I'm not sure how many people out there
knew at all how the subclass detection was done.

It is also very systematic: it is very analogous to ABCs, just the
other way around. An AbstractMetaClass would simply know
which classes exist and how they can be combined: either by
inheriting from AbstractMetaClass, or by registering a foreign
class.

This way has the drawback of being rather complex on the python
side. But actually, I prefer complexity on the Python- over
complexity on the C side.

This proposal is at
https://github.com/tecki/cpython/commits/metaclass-issubclass

My other proposal was to have a way to explicitly combine
(meta)classes, by having a method in the type object (I originally
called that __add__, people didn't like that so I renamed it to
merge).

This idea has somewhat more code on the C side, but is simpler
on the python side. It might also be easier to grasp by newbies.
Otherwise it is about as powerful as the other idea.

This proposal is at
https://github.com/tecki/cpython/commits/metaclass-merge

Once we have one of my ideas implemented, I think we can
implement PEP 422 in the standard library. Petr claims that this
is just doing simple things with complex code. I disagree, I think
that the current metaclass code in the Python interpreter is already
incredibly complex, and just throwing more code at it is actually
a very complex solution. Sure, on the python side all looks pretty,
but only because we swept all complexity under the C carpet.
But I won't fight for that point. What I am fighting for is to modify
PEP 422 in a way that it can be backported to older versions of
python and be put on PyPI. This means that __init_class__ should
become __init_subclass__, meaning that only subclasses are
initialized. (__init_class__ doesn't work properly, as super() won't
work. This problem already exists in the zope version).
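The subclass-only initialization described above can be sketched with a
plain metaclass today; the hook name init_subclass is deliberately
non-magic here to keep the sketch self-contained:

```python
class InitSubclassMeta(type):
    def __init__(cls, name, bases, namespace):
        super().__init__(name, bases, namespace)
        if bases:  # only subclasses run the hook, not the base class itself
            cls.init_subclass()

class Base(metaclass=InitSubclassMeta):
    registry = []

    @classmethod
    def init_subclass(cls):
        # runs once per subclass definition, with cls bound to the subclass
        Base.registry.append(cls.__name__)

class Child(Base):
    pass

print(Base.registry)  # -> ['Child']  (Base itself was skipped)
```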

There is also a completely different way we could solve all those
problems: abandon metaclasses for good, and replace them
eventually with a PEP 422 like approach. This would mean that
on the long run the following should be added:

- a __new_subclass__ as a replacement for the metaclass __new__.
This shows that it is necessary to operate on subclasses; otherwise
one will have a hard time creating the class itself...

- __subclasscheck__ and __instancecheck__ must be called on the
class, not the metaclass

- sometimes metaclasses are used to add methods to the class only,
that cannot be reached by an instance. A new @classonlymethod
decorator should do that.

- I'm sure I forgot something.

A pure python version of this proposal (it would go into C, for sure)
can be found here:
https://github.com/tecki/metaclasses

Note that this is much more complicated as it would need to be if
only the original PEP422 ideas were implemented.

This was a long post.

Enjoy.

Greetings

Martin

From steve at pearwood.info  Tue Feb 17 13:10:47 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Tue, 17 Feb 2015 23:10:47 +1100
Subject: [Python-ideas] Treat underscores specially in argument lists
In-Reply-To: <7F0DB462-3A15-4DF8-AA95-A0D1525F57D0@yahoo.com>
References: <CAO41-mMH3FNGTC2jBCJB3NKpCD9OXVPn-jeZPbtZ=S734w+cJg@mail.gmail.com>
 <20150217024108.GR2498@ando.pearwood.info>
 <CAO41-mN-_LNtsmmWKOF5DrEJ7DZoTuY2tyjHYObJ3N3CmJLMFQ@mail.gmail.com>
 <7F0DB462-3A15-4DF8-AA95-A0D1525F57D0@yahoo.com>
Message-ID: <20150217121040.GS2498@ando.pearwood.info>

On Mon, Feb 16, 2015 at 08:24:52PM -0800, Andrew Barnert wrote:
> On Feb 16, 2015, at 18:45, Ryan Gonzalez <rymg19 at gmail.com> wrote:
> 
> > It's usually in callbacks where you only care about a few arguments.
> 
> A perfect example is using Tk variable tracing in tkinter (for 
> validation or post-change triggering on widgets). The signature of the 
> trace callback is callback(name, index, mode). You almost never need 
> the index, you rarely need the mode, so you write:
> 
>     def callback(self, name, _, mode):

> In fact, you usually don't need the mode either (in fact, you often 
> don't even need the name), but you can't write this:
> 
>    def callback(self, name, _, _):
> 
> So instead you usually write something like:
> 
>    def callback(self, name, *_):

Do I? I'm pretty sure I don't. I'm too worried about future maintainers, 
who will probably be violent psychopaths who know where I live.

If the callback has signature name, index, mode, I write name, index, 
mode, and make it clear that the callback function is expected to 
receive exactly three arguments, and what they are.

That way, I don't have to care whether the caller passes those arguments 
by position or name. I don't have to change the signature of my callback 
if I decide in the future that I actually do want to use the index and 
the mode. If I look at the arguments in a debugger, I'll see sensible 
names for all three parameters. If I use some sort of extended traceback 
(e.g. the cgitb module), I'll see exactly what the parameters were. And 
if something goes wrong, I won't have to worry about future maintainers 
tracking me back to my home.



-- 
Steve

From ncoghlan at gmail.com  Tue Feb 17 13:38:16 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 17 Feb 2015 22:38:16 +1000
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
In-Reply-To: <CAK9R32RovaEawxzFf03Kq1f+WgC4NOvhWhxjLOwjyaSt90Nr5A@mail.gmail.com>
References: <CAK9R32TNjkkJpxA6LC5GqU0digZv9TQZ=wuveA+CvjaxqBD50w@mail.gmail.com>
 <CA+=+wqAz6C5_QCLe2Cfc-uFi-5hWP7B-M9Q2yqi2K75UHDstcQ@mail.gmail.com>
 <CAK9R32RovaEawxzFf03Kq1f+WgC4NOvhWhxjLOwjyaSt90Nr5A@mail.gmail.com>
Message-ID: <CADiSq7d0APs4c7y1pUJhdPQtmLssCdGaAfLVTe7NaZJX4960Zw@mail.gmail.com>

On 17 February 2015 at 02:19, Martin Teichmann <lkb.teichmann at gmail.com> wrote:
> Hi list,
>
>> Again, Python already has a metaclass that everyone is supposed to
>> use, namely `type`.
>
> well, there is a problem out here: people are indeed using other
> metaclasses. Even the standard library, like ABC or enums.
>
> And there is a problem here in python: once you do multiple inheritance,
> and two classes have different metaclasses, there is no way of
> programmatically determining an appropriate metaclass.

Outside the simple cases (which PEP 422 could theoretically handle),
metaclasses tend to be contravariant.

EnumMeta instances deliberately don't support some things a normal
type instance (i.e. class object) will support (like inheriting from
them). The same is true for ABCMeta instances (e.g. you often can't
instantiate them). And yet:

>>> from abc import ABCMeta
>>> issubclass(ABCMeta, type)
True
>>> from enum import EnumMeta
>>> issubclass(EnumMeta, type)
True

You're asking us to assume covariant metaclasses in a subset of cases,
while retaining the assumption of contravariance otherwise. That isn't
going to happen - we expect type designers to declare metaclass
covariance by creating an explicit subclass and using it where
appropriate.

PEP 422 bypasses the default assumption of metaclass contravariance by
operating directly on type instances, without touching any other part
of the metaclass machinery.

Regards,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Tue Feb 17 13:47:03 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 17 Feb 2015 22:47:03 +1000
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
In-Reply-To: <D7D9F7F3-0569-476D-86AD-D19DAF423C8E@yahoo.com>
References: <CAK9R32TNjkkJpxA6LC5GqU0digZv9TQZ=wuveA+CvjaxqBD50w@mail.gmail.com>
 <CA+=+wqAz6C5_QCLe2Cfc-uFi-5hWP7B-M9Q2yqi2K75UHDstcQ@mail.gmail.com>
 <CAK9R32RovaEawxzFf03Kq1f+WgC4NOvhWhxjLOwjyaSt90Nr5A@mail.gmail.com>
 <D7D9F7F3-0569-476D-86AD-D19DAF423C8E@yahoo.com>
Message-ID: <CADiSq7eYfA86oVOrKni_fyPrbwegCD6A4buVYg8KpHQFBiQr1w@mail.gmail.com>

On 17 February 2015 at 08:27, Andrew Barnert
<abarnert at yahoo.com.dmarc.invalid> wrote:
> On Feb 16, 2015, at 8:19, Martin Teichmann <lkb.teichmann at gmail.com> wrote:
>
>> Hi list,
>>
>>> Again, Python already has a metaclass that everyone is supposed to
>>> use, namely `type`.
>>
>> well, there is a problem out here: people are indeed using other
>> metaclasses. Even the standard library, like ABC or enums.
>
> But almost all such metaclasses are instances (and direct subclasses, but that's not as important here) of type; I think he should have said "a type that everyone can usually use as a metaclass, and can at least pretty much always use as a metametaclass when that isn't appropriate".
>
> Anyway, for the problem you're trying to solve: there _is_ a real problem that there are a handful of popular metaclasses (in the stdlib, in PyQt, maybe elsewhere) that people other than the implementors often need to combine with other metaclasses, and that's hard, and unless PEP 422 makes all those metaclasses unnecessary, you still need a solution, right?

There *is* a way to combine instances of multiple metaclasses without
conflict: using composition ("has-a") rather than inheritance
("is-a"). A metaclass conflict can reasonably be described as a class
definition experiencing an unresolvable identity crisis by way of
trying to be too many different things at the same time :)

And if it's really 100% absolutely essential, then a custom subclass
of the relevant metaclasses can do the job. But the only "simple"
metaclass is "type", and that's really only considered simple because
it's the one we use by default, which means it's the baseline against
which the behaviour of all other metaclasses gets compared.
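The conflict, and the explicit combining subclass, look like this in
today's Python:

```python
class MetaA(type): pass
class MetaB(type): pass

class A(metaclass=MetaA): pass
class B(metaclass=MetaB): pass

try:
    class Broken(A, B):  # no metaclass is a subclass of both MetaA and MetaB
        pass
except TypeError as exc:
    print(exc)  # "metaclass conflict: ..."

# The explicit declaration of metaclass covariance:
class MetaAB(MetaA, MetaB):
    pass

class Works(A, B, metaclass=MetaAB):
    pass

print(type(Works) is MetaAB)  # -> True
```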

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From g.rodola at gmail.com  Tue Feb 17 13:57:20 2015
From: g.rodola at gmail.com (Giampaolo Rodola')
Date: Tue, 17 Feb 2015 13:57:20 +0100
Subject: [Python-ideas] Treat underscores specially in argument lists
In-Reply-To: <7F0DB462-3A15-4DF8-AA95-A0D1525F57D0@yahoo.com>
References: <CAO41-mMH3FNGTC2jBCJB3NKpCD9OXVPn-jeZPbtZ=S734w+cJg@mail.gmail.com>
 <20150217024108.GR2498@ando.pearwood.info>
 <CAO41-mN-_LNtsmmWKOF5DrEJ7DZoTuY2tyjHYObJ3N3CmJLMFQ@mail.gmail.com>
 <7F0DB462-3A15-4DF8-AA95-A0D1525F57D0@yahoo.com>
Message-ID: <CAFYqXL_S78cNP-U3_x8y+ymcQTMFqo-DPVWF984Bc5pFtrLMkA@mail.gmail.com>

On Tue, Feb 17, 2015 at 5:24 AM, Andrew Barnert <
abarnert at yahoo.com.dmarc.invalid> wrote:

> On Feb 16, 2015, at 18:45, Ryan Gonzalez <rymg19 at gmail.com> wrote:
>
> It's usually in callbacks where you only care about a few arguments.
>
>
> A perfect example is using Tk variable tracing in tkinter (for validation
> or post-change triggering on widgets). The signature of the trace callback
> is callback(name, index, mode). You almost never need the index, you rarely
> need the mode, so you write:
>
>     def callback(self, name, _, mode):
>
> In fact, you usually don't need the mode either (in fact, you often don't
> even need the name), but you can't write this:
>
>    def callback(self, name, _, _):
>
> So instead you usually write something like:
>
>    def callback(self, name, *_):
>
> Personally, I find *_ ugly, so I use *dummy instead. But whatever.
>
> So anyway, what if you needed the index, but not the name or mode? You
> can't use *_ to get around that; you have to come up with two dummy names.
> That's a bit annoying.
>
> The question is, how often do you really need to ignore multiple
> parameters, but not all of them (or all of them from this point on)? I
> can't remember any callback API that I've used where that came up
>

Me neither. I remember doing this:

a, _, _, _, b, c = foo

...but never wanting to do a similar thing applied to a function signature.
A function strikes me as something "more important" than "a, _, _, _, b, c
= foo" appearing in the middle of the code, something which mandates
explicitness even when some args are ignored; in fact, I don't remember
ever seeing "def callback(self, name, _)".
-1

-- 
Giampaolo - http://grodola.blogspot.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150217/05d96678/attachment.html>

From lkb.teichmann at gmail.com  Tue Feb 17 14:06:56 2015
From: lkb.teichmann at gmail.com (Martin Teichmann)
Date: Tue, 17 Feb 2015 14:06:56 +0100
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
In-Reply-To: <CADiSq7d0APs4c7y1pUJhdPQtmLssCdGaAfLVTe7NaZJX4960Zw@mail.gmail.com>
References: <CAK9R32TNjkkJpxA6LC5GqU0digZv9TQZ=wuveA+CvjaxqBD50w@mail.gmail.com>
 <CA+=+wqAz6C5_QCLe2Cfc-uFi-5hWP7B-M9Q2yqi2K75UHDstcQ@mail.gmail.com>
 <CAK9R32RovaEawxzFf03Kq1f+WgC4NOvhWhxjLOwjyaSt90Nr5A@mail.gmail.com>
 <CADiSq7d0APs4c7y1pUJhdPQtmLssCdGaAfLVTe7NaZJX4960Zw@mail.gmail.com>
Message-ID: <CAK9R32SaL7+RLJwKbbWBinteMe68v=zL-b=js1dv97zm8-3i4A@mail.gmail.com>

> You're asking us to assume covariant metaclasses in a subset of cases,
> while retaining the assumption of contravariance otherwise. That isn't
> going to happen - we expect type designers to declare metaclass
> covariance by creating an explicit subclass and using it where
> appropriate..

How do you solve my example with ABCs? There, if a
QObject (from PyQt) tries to implement an ABC by inheritance,
one first has to create a new metaclass inheriting from
ABCMeta and whatever the PyQt metaclass is.

Sure, one could register the class with the ABC, but then you
lose the option of inheriting stuff from the ABC. One could
also write an adapter, i.e. use composition. But to me that
defeats the point of ABCs: that you can make
anything fit appropriately. In general, that smells to me like
a lot of boilerplate code.
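The conflict being described, and the explicit combining metaclass that resolves it, can be sketched as follows (`QtLikeMeta` is a stand-in for a third-party metaclass such as PyQt's, not the real thing):

```python
import abc

class QtLikeMeta(type):
    """Stand-in for a third-party metaclass such as PyQt's."""

class QtLikeObject(metaclass=QtLikeMeta):
    pass

class WindowABC(abc.ABC):
    @abc.abstractmethod
    def show(self):
        ...

# Inheriting from both directly fails with a metaclass conflict:
try:
    class Broken(WindowABC, QtLikeObject):
        def show(self):
            return "shown"
except TypeError as exc:
    conflict = exc

# The explicit combining metaclass resolves it:
class QtABCMeta(abc.ABCMeta, QtLikeMeta):
    pass

class Window(WindowABC, QtLikeObject, metaclass=QtABCMeta):
    def show(self):
        return "shown"
```

This is the boilerplate the message complains about: one extra, mostly empty class per metaclass combination.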

> PEP 422 bypasses the default assumption of metaclass contravariance by
> operating directly on type instances, without touching any other part
> of the metaclass machinery.

So there is a problem with the metaclass machinery, and we
solve it by making one particular special case work? When is the
next special case going to show up? Will it also be included
in type?

Greetings

Martin

From klahnakoski at mozilla.com  Tue Feb 17 14:21:25 2015
From: klahnakoski at mozilla.com (Kyle Lahnakoski)
Date: Tue, 17 Feb 2015 08:21:25 -0500
Subject: [Python-ideas] Fwd: Re: PEP 485: A Function for testing approximate
 equality
In-Reply-To: <54E33E5A.7030907@mozilla.com>
References: <54E33E5A.7030907@mozilla.com>
Message-ID: <54E34055.1080608@mozilla.com>



Chris Barker,

If the `isclose()` function is to be part of the standard library, I would
ask that `rel_tol` and `abs_tol` both accept -log10-scaled values:

Instead of
>    isclose(a, b, rel_tol=1e-9, abs_tol=0.0)

we would have
>    isclose(a, b, rel_tol=9, abs_tol=-INF)

`rel_tol` can then be understood as a number of significant digits, and
`abs_tol` as a number of fixed decimal digits.
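A minimal sketch of the proposed spelling (the function name and the max-based relative test are illustrative, not PEP 485's final definition):

```python
def isclose_digits(a, b, rel_digits=9):
    """Relative tolerance given as -log10(rel_tol), i.e. roughly the
    number of significant digits that must agree."""
    rel_tol = 10.0 ** -rel_digits          # rel_digits=9 -> rel_tol=1e-9
    return abs(a - b) <= rel_tol * max(abs(a), abs(b))
```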


Thanks for noticing the ubiquitous need for such a function.




>
> On Thursday, February 5, 2015 at 12:14:25 PM UTC-5, Chris Barker -
> NOAA Federal wrote:
>
>     Hi folks,
>
>     Time for Take 2 (or is that take 200?)
>
>



From encukou at gmail.com  Tue Feb 17 14:24:46 2015
From: encukou at gmail.com (Petr Viktorin)
Date: Tue, 17 Feb 2015 14:24:46 +0100
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
In-Reply-To: <CAK9R32QpPHXRT2AiDaE=7KKeTtFYvc4oXcZWnusCykW2dGndgA@mail.gmail.com>
References: <CAK9R32TNjkkJpxA6LC5GqU0digZv9TQZ=wuveA+CvjaxqBD50w@mail.gmail.com>
 <CA+=+wqAz6C5_QCLe2Cfc-uFi-5hWP7B-M9Q2yqi2K75UHDstcQ@mail.gmail.com>
 <CAK9R32RovaEawxzFf03Kq1f+WgC4NOvhWhxjLOwjyaSt90Nr5A@mail.gmail.com>
 <CAK9R32QpPHXRT2AiDaE=7KKeTtFYvc4oXcZWnusCykW2dGndgA@mail.gmail.com>
Message-ID: <CA+=+wqAVQmnUNyTE3=Kcm3FCwW_QGJp-m66w_xX8pr3NFjw11w@mail.gmail.com>

On Tue, Feb 17, 2015 at 11:10 AM, Martin Teichmann
<lkb.teichmann at gmail.com> wrote:
> Hi Petr, Hi Andrew, Hi list,
>
> (responding to my own post, as I cannot respond to Petr's and Andrew's
> at the same time)
>
>> - is it desirable to have a programmatic way to combine metaclasses?
>
> there seems to be general agreement that there should be no automatic
> combination of metaclasses, while many think that explicit combination
> should be possible.
>
> The only way to do that currently is by subclassing existing metaclasses.
> This is typically done by the user of the metaclass, not its author.
> It would be a nice thing if an author could do that. Petr argues s/he
> already can, by subclassing. Unfortunately, this is not so easy, as it
> means that every user of such a class then has to use all the inherited
> metaclasses.
>
> As an example: imagine you have a metaclass that mixes well with
> PyQt's metaclass. Then you could simply inherit from it. But this
> would mean that every user of you class also has to install PyQt,
> no matter if the code has anything to do with it.

But if you did have a merging mechanism, would it solve this problem?
To work properly, the merge mechanism in the new metaclass would have
to check that it's dealing with the PyQt metaclass, wouldn't it? How
would that check work if the PyQt metaclass is not available?

> Now Thomas argues that this should simply not be done at all.
> I understand his argument well that multiple inheritance across
> project boundaries is probably a bad idea in most cases.
>
> But I have an example where it does make a lot of sense: ABCs.
> It would just be nice if one could implement an ABC which is a
> QObject. That's what ABCs are for. Sure, you can do this by
> registering the class, but this means you lose a lot, like the
> predefined methods of an ABC. But maybe we can just
> special-case ABCs.

I don't quite get the example. Why would you want an ABC that is a QObject?

[...]
> Once we have one of my ideas implemented, I think we can
> implement PEP 422 in the standard library. Petr claims that this
> is just doing simple things with complex code. I disagree, I think
> that the current metaclass code in the python interpreter is already
> incredibly complex, and just throwing more code at it is actually
> a very complex solution. Sure, on the python side all looks pretty,
> but only because we swept all complexity under the C carpet.
> But I won't fight for that point. What I am fighting for is to modify
> PEP 422 in a way that it can be backported to older versions of
> python and be put on PyPI. This means that __init_class__ should
> become __init_subclass__, meaning that only subclasses are
> initialized. (__init_class__ doesn't work properly, as super() won't
> work. This problem already exists in the zope version).

I've warmed up to this somewhat, once you mentioned the "registration
metaclass", where all subclasses of Registrar *except Registrar
itself* are collected somehow. Since the class name is not bound when
__init_class__ is called, it would not be straightforward to filter
out the base class.
I'm not sure how common this is, however; use cases I can come up
with would work either way.

One option is that __init_class__ being called only on subclasses could
be a documented limitation of just the PyPI backport.
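The registration pattern in question, written with the `__init_subclass__` spelling, fires only for subclasses by construction (note: this spelling requires Python 3.6+, where PEP 487 later added it; `Registrar` is illustrative):

```python
class Registrar:
    registry = []

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Called for every subclass, but never for Registrar itself,
        # so the base class needs no special-case filtering.
        Registrar.registry.append(cls)

class Plugin(Registrar):
    pass
```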

> There is also a completely different way we could solve all those
> problems: abandon metaclasses for good, and replace them
> eventually with a PEP 422 like approach. This would mean that
> on the long run the following should be added:
>
> - a __new_subclass__ as a replacement for the metaclass __new__.
> This shows that it is necessary to operate on subclasses, otherwise
> one will have a hard time creating the class itself...

Well, super() cooperative inheritance won't help if you really need
different types of objects, rather than differently initialized
objects. This pretty much needs the full power of metaclasses.

> - __subclasscheck__ and __instancecheck__ must be called on the
> class, not the metaclass
>
> - sometimes metaclasses are used to add methods to the class only,
> that cannot be reached by an instance. A new @classonlymethod
> decorator should do that.

Such a decorator isn't terribly hard to write today.

> - I'm sure I forgot something.
>
> A pure python version of this proposal (it would go into C, for sure)
> can be found here:
> https://github.com/tecki/metaclasses
>
> Note that this is much more complicated than it would need to be if
> only the original PEP 422 ideas were implemented.
>
> This was a long post.
>
> Enjoy.
>
> Greetings

From lkb.teichmann at gmail.com  Tue Feb 17 14:20:37 2015
From: lkb.teichmann at gmail.com (Martin Teichmann)
Date: Tue, 17 Feb 2015 14:20:37 +0100
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
In-Reply-To: <CADiSq7d0APs4c7y1pUJhdPQtmLssCdGaAfLVTe7NaZJX4960Zw@mail.gmail.com>
References: <CAK9R32TNjkkJpxA6LC5GqU0digZv9TQZ=wuveA+CvjaxqBD50w@mail.gmail.com>
 <CA+=+wqAz6C5_QCLe2Cfc-uFi-5hWP7B-M9Q2yqi2K75UHDstcQ@mail.gmail.com>
 <CAK9R32RovaEawxzFf03Kq1f+WgC4NOvhWhxjLOwjyaSt90Nr5A@mail.gmail.com>
 <CADiSq7d0APs4c7y1pUJhdPQtmLssCdGaAfLVTe7NaZJX4960Zw@mail.gmail.com>
Message-ID: <CAK9R32Ts=8YhWpyKVhJGrnD_2US0S6B+nZAEokRtz6s8ysnQcQ@mail.gmail.com>

Hi again,

OK, so if my ideas don't get through, would it at least be possible
to make PEP 422 have an __init_subclass__ that initializes only the
subclasses of the class? That would have the great advantage of
being backportable as a metaclass all the way down to Python 2.7.
I guess this would help PEP 422 actually get used a lot.

Greetings

Martin

From marky1991 at gmail.com  Tue Feb 17 14:37:48 2015
From: marky1991 at gmail.com (Mark Young)
Date: Tue, 17 Feb 2015 08:37:48 -0500
Subject: [Python-ideas] Fwd: Re: PEP 485: A Function for testing
 approximate equality
In-Reply-To: <54E34055.1080608@mozilla.com>
References: <54E33E5A.7030907@mozilla.com> <54E34055.1080608@mozilla.com>
Message-ID: <CAG3cHaYXKrhqJbK_2zvCvZ3w7bJLfGBjtknmoKQ1is8Jtg3F8g@mail.gmail.com>

That's trading a teeny tiny bit of convenience for one particular use case
for the flexibility to choose any number as the tolerance. If that change
were made, I would have no means of specifying any sort of tolerance other
than powers of 10. I can easily see myself saying "This number needs to be
1 +- .5"

From klahnakoski at mozilla.com  Tue Feb 17 16:18:16 2015
From: klahnakoski at mozilla.com (Kyle Lahnakoski)
Date: Tue, 17 Feb 2015 10:18:16 -0500
Subject: [Python-ideas] Fwd: Re: PEP 485: A Function for testing
 approximate equality
In-Reply-To: <CAG3cHaYXKrhqJbK_2zvCvZ3w7bJLfGBjtknmoKQ1is8Jtg3F8g@mail.gmail.com>
References: <54E33E5A.7030907@mozilla.com> <54E34055.1080608@mozilla.com>
 <CAG3cHaYXKrhqJbK_2zvCvZ3w7bJLfGBjtknmoKQ1is8Jtg3F8g@mail.gmail.com>
Message-ID: <54E35BB8.1020503@mozilla.com>


Mark,

Could you elaborate on your use case?  I have not stumbled upon the need
for such a high tolerance.  I had imagined that the isclose()
algorithm's use cases were comparing floating-point numbers, which I
took to mean equal up to some number of significant digits (3 or
more).  Given such a small epsilon, logarithmic granularity is good
enough.  I believe the majority of use cases for isclose() will involve
such small tolerances.

For larger tolerances, like the one you suggest, the check is probably
best phrased as `0.5 <= x < 1.5`, or `abs(x - 1) <= 0.5`.  Furthermore,
the version of `isclose()` I advocate need not have integer parameters:
`isclose(x, 1, abs_tol=-log10(0.5))` is legitimate, albeit ugly.

Thank you for your time discussing this.


On 2015-02-17 8:37 AM, Mark Young wrote:
> That's trading a teeny tiny bit of convenience for one particular use
> case for the flexibility to choose any number as the tolerance. If
> that change were made, I would have no means of specifying any sort of
> tolerance other than powers of 10. I can easily see myself saying
> "This number needs to be 1 +- .5"


From lkb.teichmann at gmail.com  Tue Feb 17 16:19:04 2015
From: lkb.teichmann at gmail.com (Martin Teichmann)
Date: Tue, 17 Feb 2015 16:19:04 +0100
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
In-Reply-To: <CA+=+wqAVQmnUNyTE3=Kcm3FCwW_QGJp-m66w_xX8pr3NFjw11w@mail.gmail.com>
References: <CAK9R32TNjkkJpxA6LC5GqU0digZv9TQZ=wuveA+CvjaxqBD50w@mail.gmail.com>
 <CA+=+wqAz6C5_QCLe2Cfc-uFi-5hWP7B-M9Q2yqi2K75UHDstcQ@mail.gmail.com>
 <CAK9R32RovaEawxzFf03Kq1f+WgC4NOvhWhxjLOwjyaSt90Nr5A@mail.gmail.com>
 <CAK9R32QpPHXRT2AiDaE=7KKeTtFYvc4oXcZWnusCykW2dGndgA@mail.gmail.com>
 <CA+=+wqAVQmnUNyTE3=Kcm3FCwW_QGJp-m66w_xX8pr3NFjw11w@mail.gmail.com>
Message-ID: <CAK9R32SwD-n2-jNChGB9EnjWP6mvMd0WvxtH4vYLhc=5uawRkg@mail.gmail.com>

> But if you did have a merging mechanism, would it solve this problem?
> To work properly, the merge mechanism in the new metaclass would have
> to check that it's dealing with the PyQt metaclass, wouldn't it? How
> would that check work if the PyQt metaclass is not available?

Well, both would inherit from the same base class; let's call it
SimpleMixable, or AbstractMetaclass. This would then mix the two classes.
(In Nick's words: "You're asking us to assume covariant metaclasses in
a subset of cases, while retaining the assumption of contravariance
otherwise.") If PyQt doesn't want to inherit from AbstractMetaclass,
it can later be registered by

    AbstractMetaclass.register(PyQt4.QtMetaclass)

> I don't quite get the example. Why would you want an ABC that is a QObject?

well, imagine you have an ABC called WindowABC that defines
abstract methods that apply to a window, then you could write a class

    class Window(WindowABC, QWindow):
         # add some code here

which creates a concrete class implementing the WindowABC.
The documentation states about this: "An ABC can be subclassed directly,
and then acts as a mix-in class."

>> - a __new_subclass__ as a replacement for the metaclass __new__.
>> This shows that it is necessary to operate on subclasses, otherwise
>> one will have a hard time to create the class itself...
>
> Well, super() cooperative inheritance won't help if you really need
> different types of objects, rather than differently initialized
> objects. This pretty much needs the full power of metaclasses.

It doesn't. In the test code that I posted to github there is the example:

    class A(Object):
        @staticmethod
        def __new_subclass__(name, bases, dict):
            return 3
    class B(A):
        pass
    self.assertEqual(B, 3)

it already works.

> Such a decorator isn't terribly hard to write today.

Done.

From marky1991 at gmail.com  Tue Feb 17 19:18:07 2015
From: marky1991 at gmail.com (Mark Young)
Date: Tue, 17 Feb 2015 13:18:07 -0500
Subject: [Python-ideas] Fwd: Re: PEP 485: A Function for testing
 approximate equality
In-Reply-To: <54E35BB8.1020503@mozilla.com>
References: <54E33E5A.7030907@mozilla.com> <54E34055.1080608@mozilla.com>
 <CAG3cHaYXKrhqJbK_2zvCvZ3w7bJLfGBjtknmoKQ1is8Jtg3F8g@mail.gmail.com>
 <54E35BB8.1020503@mozilla.com>
Message-ID: <CAG3cHaZBPvpdvMUCr59YGwuxj-cNC04WETxhTwA_O_hvdk-0Sw@mail.gmail.com>

I don't have a real-world use case other than that sounds like something I
would do with a function called is_close. I don't think we get much of
anything (except saving a few characters when you want to define your
tolerance with respect to significant digits) by forcing the error
tolerances to be a power of ten. (And passing tolerance=math.log(X) is
awful API design.)

From guido at python.org  Tue Feb 17 19:27:29 2015
From: guido at python.org (Guido van Rossum)
Date: Tue, 17 Feb 2015 10:27:29 -0800
Subject: [Python-ideas] Fwd: Re: PEP 485: A Function for testing
 approximate equality
In-Reply-To: <CAG3cHaZBPvpdvMUCr59YGwuxj-cNC04WETxhTwA_O_hvdk-0Sw@mail.gmail.com>
References: <54E33E5A.7030907@mozilla.com> <54E34055.1080608@mozilla.com>
 <CAG3cHaYXKrhqJbK_2zvCvZ3w7bJLfGBjtknmoKQ1is8Jtg3F8g@mail.gmail.com>
 <54E35BB8.1020503@mozilla.com>
 <CAG3cHaZBPvpdvMUCr59YGwuxj-cNC04WETxhTwA_O_hvdk-0Sw@mail.gmail.com>
Message-ID: <CAP7+vJKrdznMTH7qL2WwJrj4BTagERJFo4HzPT9yWbT2qOPYkw@mail.gmail.com>

The period for bikeshedding is over.
On Feb 17, 2015 8:19 AM, "Mark Young" <marky1991 at gmail.com> wrote:

> I don't have a real-world use case other than that sounds like something I
> would do with a function called is_close. I don't think we get much of
> anything (except saving a few characters when you want to define your
> tolerance with respect to significant digits) by forcing the error
> tolerances to be a power of ten. (And passing tolerance=math.log(X) is
> awful api design)
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>

From rymg19 at gmail.com  Tue Feb 17 22:04:16 2015
From: rymg19 at gmail.com (Ryan Gonzalez)
Date: Tue, 17 Feb 2015 15:04:16 -0600
Subject: [Python-ideas] Adding ziplib
In-Reply-To: <20150217042048.GZ24753@tonks>
References: <CAO41-mNr1=z_JT=aKjBbOZzrYV5Uc=GG_P5No5mcgNNVMVMcOw@mail.gmail.com>
 <0676A880-8750-443A-A953-932E965EDB12@yahoo.com>
 <CAO41-mMqBm3u+-Lu3RDBdxcW1R3FrPBCUeS6erRzPwMWCf8P2A@mail.gmail.com>
 <20150217042048.GZ24753@tonks>
Message-ID: <CAO41-mNsxX3woGW4PHrBHjCkp2dO4E-6n-0Kyjt410Y_NZ=K1A@mail.gmail.com>

On Mon, Feb 16, 2015 at 10:20 PM, Florian Bruhin <me at the-compiler.org>
wrote:

> * Ryan Gonzalez <rymg19 at gmail.com> [2015-02-16 19:00:33 -0600]:
> > On Mon, Feb 16, 2015 at 5:32 PM, Andrew Barnert <abarnert at yahoo.com>
> wrote:
> >
> > > I love the idea (I've had to write my own custom wrapper around a
> subset
> > > of functionality for zip and tgz a few times, and I don't remember
> enjoying
> > > it...), and I especially like the idea of starting with the ABCs (which
> > > would hopefully encourage all those people writing cpio and 7z and
> whatever
> > > parsers to conform to the same API, even if they do so by wrapping LGPL
> > > libs like libarchive or even subprocessing out to command-line tools).
> > >
> > > But why "ziplib"? Why not something generic like "archive" that doesn't
> > > sound like it's specific to ZIP files?
> > >
> > >
> > This occurred to me a few moments ago. How about arclib?
>
> To me, that sounds like a library to handle ARC files:
>
> http://en.wikipedia.org/wiki/ARC_(file_format)
>
>
Darn...

Does anybody even use that format anymore?


> Florian
>
> --
> http://www.the-compiler.org | me at the-compiler.org (Mail/XMPP)
>    GPG: 916E B0C8 FD55 A072 | http://the-compiler.org/pubkey.asc
>          I love long mails! | http://email.is-not-s.ms/
>



-- 
Ryan
If anybody ever asks me why I prefer C++ to C, my answer will be simple:
"It's becauseslejfp23(@#Q*(E*EIdc-SEGFAULT. Wait, I don't think that was
nul-terminated."
Personal reality distortion fields are immune to contradictory evidence. -
srean
Check out my website: http://kirbyfan64.github.io/

From greg at krypto.org  Tue Feb 17 22:17:10 2015
From: greg at krypto.org (Gregory P. Smith)
Date: Tue, 17 Feb 2015 21:17:10 +0000
Subject: [Python-ideas] Adding ziplib
References: <CAO41-mNr1=z_JT=aKjBbOZzrYV5Uc=GG_P5No5mcgNNVMVMcOw@mail.gmail.com>
 <0676A880-8750-443A-A953-932E965EDB12@yahoo.com>
 <CAO41-mMqBm3u+-Lu3RDBdxcW1R3FrPBCUeS6erRzPwMWCf8P2A@mail.gmail.com>
 <20150217042048.GZ24753@tonks>
 <CAO41-mNsxX3woGW4PHrBHjCkp2dO4E-6n-0Kyjt410Y_NZ=K1A@mail.gmail.com>
Message-ID: <CAGE7PNKoonfetfHXdP1xiAk8LgQ_SRY=Mo6HnD0vWqd5F8LbkA@mail.gmail.com>

If bikeshedding about the name is all we're doing at this point I wouldn't
worry about the ancient BBS era .arc file format that nobody will write a
Python library to read. :)  I'd propose "archivers" as a module name,
though arclib is also nice.  If you really want to pay homage to a good,
little-used archive format, call it arjlib. ;)

"The" problem with zipfile, and tarfile and similar today is that they are
all under maintained and have a variety of things inherent to the
underlying archive formats that are not common to all of them.  ie: zip
files have an end of archive central directory index.  tar files do not.
 rar files can be seen as a seemingly (sadly?) common successor to zip
files.  No doubt there are others (7z? cpio?).  zip files compress
individual files, tar files don't support compression as it is applied to
the whole archive after the fact.  The amount and type of directory
information available within each of these varies. And that doesn't even
touch multi file multi part archive support that some support for very
horrible hacky reasons.

Coming up with a common API for these, with the key features needed by
all, is interesting, doubly so for someone pedantic enough to get an
implementation of each correct, but it should be done as a third-party
library and should ideally not use the stdlib zip or tar support at all.
Do such libraries exist for other languages? Any C++ that could be
reused? Is there such a thing as high-quality code for this kind of task?
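A sketch of what the ABC-first approach mentioned earlier in the thread might look like (all names here are hypothetical; nothing like this exists in the stdlib):

```python
import abc
import io

class ArchiveReader(abc.ABC):
    """Hypothetical common read interface over zip/tar/7z/... backends."""

    @abc.abstractmethod
    def namelist(self):
        """Return the member names in the archive."""

    @abc.abstractmethod
    def open(self, name):
        """Return a binary file object for one member."""

class DictArchive(ArchiveReader):
    """Toy in-memory backend, standing in for a real zipfile/tarfile wrapper."""

    def __init__(self, members):
        self._members = dict(members)

    def namelist(self):
        return list(self._members)

    def open(self, name):
        return io.BytesIO(self._members[name])
```

Real backends would wrap zipfile, tarfile, or libarchive behind the same interface.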

-gps

On Tue Feb 17 2015 at 1:05:04 PM Ryan Gonzalez <rymg19 at gmail.com> wrote:

> On Mon, Feb 16, 2015 at 10:20 PM, Florian Bruhin <me at the-compiler.org>
> wrote:
>
>> * Ryan Gonzalez <rymg19 at gmail.com> [2015-02-16 19:00:33 -0600]:
>> > On Mon, Feb 16, 2015 at 5:32 PM, Andrew Barnert <abarnert at yahoo.com>
>> wrote:
>> >
>> > > I love the idea (I've had to write my own custom wrapper around a
>> subset
>> > > of functionality for zip and tgz a few times, and I don't remember
>> enjoying
>> > > it...), and I especially like the idea of starting with the ABCs
>> (which
>> > > would hopefully encourage all those people writing cpio and 7z and
>> whatever
>> > > parsers to conform to the same API, even if they do so by wrapping
>> LGPL
>> > > libs like libarchive or even subprocessing out to command-line tools).
>> > >
>> > > But why "ziplib"? Why not something generic like "archive" that
>> doesn't
>> > > sound like it's specific to ZIP files?
>> > >
>> > >
>> > This occurred to me a few moments ago. How about arclib?
>>
>> To me, that sounds like a library to handle ARC files:
>>
>> http://en.wikipedia.org/wiki/ARC_(file_format)
>>
>>
> Darn...
>
> Does anybody even use that format anymore?
>
>
>> Florian
>>
>> --
>> http://www.the-compiler.org | me at the-compiler.org (Mail/XMPP)
>>    GPG: 916E B0C8 FD55 A072 | http://the-compiler.org/pubkey.asc
>>          I love long mails! | http://email.is-not-s.ms/
>>
>
> --
> Ryan
> If anybody ever asks me why I prefer C++ to C, my answer will be simple:
> "It's becauseslejfp23(@#Q*(E*EIdc-SEGFAULT. Wait, I don't think that was
> nul-terminated."
> Personal reality distortion fields are immune to contradictory evidence. -
> srean
> Check out my website: http://kirbyfan64.github.io/

From p.f.moore at gmail.com  Tue Feb 17 22:54:37 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Tue, 17 Feb 2015 21:54:37 +0000
Subject: [Python-ideas] Adding ziplib
In-Reply-To: <CAGE7PNKoonfetfHXdP1xiAk8LgQ_SRY=Mo6HnD0vWqd5F8LbkA@mail.gmail.com>
References: <CAO41-mNr1=z_JT=aKjBbOZzrYV5Uc=GG_P5No5mcgNNVMVMcOw@mail.gmail.com>
 <0676A880-8750-443A-A953-932E965EDB12@yahoo.com>
 <CAO41-mMqBm3u+-Lu3RDBdxcW1R3FrPBCUeS6erRzPwMWCf8P2A@mail.gmail.com>
 <20150217042048.GZ24753@tonks>
 <CAO41-mNsxX3woGW4PHrBHjCkp2dO4E-6n-0Kyjt410Y_NZ=K1A@mail.gmail.com>
 <CAGE7PNKoonfetfHXdP1xiAk8LgQ_SRY=Mo6HnD0vWqd5F8LbkA@mail.gmail.com>
Message-ID: <CACac1F89U3NMpPPmCK7HDx8fUhiH9cgVEFTO=vOpA9fB3uSSpA@mail.gmail.com>

On 17 February 2015 at 21:17, Gregory P. Smith <greg at krypto.org> wrote:
> coming up with common API for these with the key features needed by all is
> interesting, doubly so for someone pedantic enough to get an implementation
> of each correct, but it should be done as a third party library and should
> ideally not use the stdlib zip or tar support at all. Do such libraries
> exist for other languages? Any C++ that could be reused? Is there such a
> thing as high quality code for this kind of task?

In the C world, I think libarchive is the best known "handle all the
formats" library. I've only used the command-line bsdtar application
that's built on it, though, so I don't know what the library API is
like.

Paul

From ncoghlan at gmail.com  Tue Feb 17 23:15:03 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 18 Feb 2015 08:15:03 +1000
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
In-Reply-To: <CAK9R32SaL7+RLJwKbbWBinteMe68v=zL-b=js1dv97zm8-3i4A@mail.gmail.com>
References: <CAK9R32TNjkkJpxA6LC5GqU0digZv9TQZ=wuveA+CvjaxqBD50w@mail.gmail.com>
 <CA+=+wqAz6C5_QCLe2Cfc-uFi-5hWP7B-M9Q2yqi2K75UHDstcQ@mail.gmail.com>
 <CAK9R32RovaEawxzFf03Kq1f+WgC4NOvhWhxjLOwjyaSt90Nr5A@mail.gmail.com>
 <CADiSq7d0APs4c7y1pUJhdPQtmLssCdGaAfLVTe7NaZJX4960Zw@mail.gmail.com>
 <CAK9R32SaL7+RLJwKbbWBinteMe68v=zL-b=js1dv97zm8-3i4A@mail.gmail.com>
Message-ID: <CADiSq7ca1NdotvDk0OMc2LQDn64WAUoRfz64H=c1gpPsnvsvdg@mail.gmail.com>

On 17 Feb 2015 23:13, "Martin Teichmann" <lkb.teichmann at gmail.com> wrote:
>
> > You're asking us to assume covariant metaclasses in a subset of cases,
> > while retaining the assumption of contravariance otherwise. That isn't
> > going to happen - we expect type designers to declare metaclass
> > covariance by creating an explicit subclass and using it where
> > appropriate..
>
> How do you solve my example with ABCs? That means if a
> QObject (from PyQt) tries to implement an ABC by inheritance,
> then one first has to create a new metaclass inheriting from
> ABCMeta and whatever the PyQt metaclass is.
>
> Sure, one could register the class to the ABC, but then you
> lose the option of inheriting stuff from the ABC. One could
> also write an adaptor - so using composition. But to me that
> defeats the point of the concept of ABCs: that you can make
> anything fit appropriately. In general, that smells to me like
> a lot of boilerplate code.

If you're doing something incredibly hard and incredibly rare, the code you
have to write to tell the interpreter how to handle it is not really
boilerplate - the "boilerplate" appellation is generally reserved for
verbose solutions to *common* problems. This is not a common problem -
using a custom metaclass at all is rare (as it should be), and combining
them even more so.

If you specifically want to write QtABCs, then the appropriate solution is
to either write your own combining metaclass, or else to go to whichever Qt
wrapper project you're using and propose adding it there.

That's a readable and relatively simple approach that will make sense to
anyone that understands Python metaclasses, and will also work in all
existing versions of Python that supports ABCs.

It is *not* a good enough reason to try to teach the interpreter how to
automatically combine metaclasses. Even if you could get it to work
reliably (which seems unlikely), it would be impossible to get it to a
point where a *human* (even one who already understood metaclasses in
general) could look at the code and easily tell you what was going on
behind the scenes.

>
> > PEP 422 bypasses the default assumption of metaclass contravariance by
> > operating directly on type instances, without touching any other part
> > of the metaclass machinery.
>
> So there is a problem with the metaclass machinery, and we
> solve that by making one certain special case work? When is the
> next special case going to show up?

Probably never. We've had the metaclass machinery in place for over a
decade, and the "run code at subclass creation time" pattern is the only
particularly common metaclass idiom I am aware of where we would benefit
from a simpler alternative that people can use without first needing to
understand how metaclasses work.

(I'm aware that "common" is a relative term when we're discussing
metaclasses, but establishing or otherwise ensuring class invariants in
subclasses really is one of their main current use cases)

> Will it also be included
> into type?

Unfortunately, you're making the same mistake the numeric folks did before
finally asking specifically for a matrix multiplication operator: trying to
teach the interpreter to solve a hard general problem (in their case,
defining custom infix operators), rather than doing pattern extraction on
known use cases to create an easier to use  piece of syntactic sugar. When
the scientific & data analysis community realised that the general infix
operator case didn't *need* solving, and that all they really wanted was a
matrix multiplication operator, that's what they asked for and that's what
they got for Python 3.5+.

Changes to the language syntax and semantics should make it clear to
*readers* that a particular idiom is in use, rather than helping to hide it
from them.

PEP 422 achieves that - as soon as you see "__init_class__" you know that
inheriting from that base class will just run code at subclass
initialisation time, rather than potentially making arbitrary changes to
the subclass behaviour the way a custom metaclass can.

By contrast, implicitly combining metaclasses at runtime does nothing to
make the code easier to read - it only makes it (arguably) easier to write
by letting the developer omit the currently explicit combining metaclass.
That means that not only does the interpreter have to be taught how to do
implicit metaclass combination but future maintainers have to learn to do
it in their heads, making the code *harder* to read, rather than easier.

Regards,
Nick.

>
> Greetings
>
> Martin

From anthony at xtfx.me  Tue Feb 17 23:22:29 2015
From: anthony at xtfx.me (C Anthony Risinger)
Date: Tue, 17 Feb 2015 16:22:29 -0600
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <mbrbet$4es$1@ger.gmane.org>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <253D2851-CEE1-46F2-8F9B-D0D86C8BDCC4@stufft.io>
 <f5693415-0e68-4212-8d03-f07e2a53d65a@googlegroups.com>
 <mbrbet$4es$1@ger.gmane.org>
Message-ID: <CAGAVQTGxGWpHfp-p22=1+MiukbHsoRxjBNVfr08JbzabKSfV4A@mail.gmail.com>

On Sun, Feb 15, 2015 at 5:51 PM, Ron Adam <ron3200 at gmail.com> wrote:
>
>
> On 02/15/2015 05:14 PM, Neil Girdhar wrote:
>
>> I'm also +1, I agree with Donald that "|" makes more sense to me than "+"
>> if only for consistency with sets.  In mathematics a mapping is a set of
>> pairs (preimage, postimage), and we are taking the union of these sets.
>>
>
> An option might be to allow a union "|" if the (key, value) pairs
> whose keys match in each are the same.  Otherwise, raise an exception.


I've only loosely followed this thread, so I apologize if examples were
already linked or things already said.

I want to reinforce that set-like operations/parallels are a million times
better than + ... esp. considering dict.viewkeys is pretty much a set.
Please, please don't use + !!!

Now, a link to an implementation we have used for several years with
success, and the teams here seem to think it makes sense:

https://github.com/xtfxme/xacto/blob/master/xacto/__init__.py#L140

(don't worry about the class decorator/etc.; you can just as easily take
that class and subclass dict to get a type supporting all set operations)

The premise taken here is that dicts ARE sets that simply happen to have
values associated with them... hence all operations are against keys ONLY.
The fact that values tag along is irrelevant. I toyed with supporting
values too but it makes things ambiguous (is that a key/val pair or a tuple
key?)

So, given:

>>> d1 = fancy_dict(a=1, b=2, c=3)
>>> d2 = fancy_dict(c=4, d=5, e=6)

The implementation allows you to do such things as:

# union (first key winning here... IIRC this is how sets actually work)
>>> d1 | d2
{'a': 1, 'b': 2, 'c': 3, 'd': 5, 'e': 6}

# intersection (again, first key wins)
>>> d1 & d2
{'c': 3}

# difference (use ANY iterable as a filter!)
>>> d1 - ('a', 'b')
{'c': 3}

# symmetric difference
>>> d1 ^ d2
{'a': 1, 'b': 2, 'd': 5, 'e': 6}

All of the operations support arbitrary iterables as the RHS! This is NICE.
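The behaviour above can be reproduced with a small dict subclass. This is a hedged sketch, not the linked xacto implementation; reflected operators (__ror__ etc.) for iterable LHS support are omitted for brevity:

```python
# Sketch: a dict whose binary operators act on KEYS only; values tag along.
class fancy_dict(dict):

    @staticmethod
    def _as_dict(other):
        # accept any iterable of keys; bare keys get a None value
        return other if isinstance(other, dict) else {k: None for k in other}

    def __or__(self, other):        # union -- first key (self) wins
        return fancy_dict({**self._as_dict(other), **self})

    def __and__(self, other):       # intersection -- first key wins
        other = self._as_dict(other)
        return fancy_dict({k: v for k, v in self.items() if k in other})

    def __sub__(self, other):       # difference -- any iterable as filter
        drop = set(other)
        return fancy_dict({k: v for k, v in self.items() if k not in drop})

    def __xor__(self, other):       # symmetric difference
        other = self._as_dict(other)
        merged = {**other, **self}
        keep = self.keys() ^ other.keys()
        return fancy_dict({k: v for k, v in merged.items() if k in keep})


d1 = fancy_dict(a=1, b=2, c=3)
d2 = fancy_dict(c=4, d=5, e=6)
assert d1 | d2 == {'a': 1, 'b': 2, 'c': 3, 'd': 5, 'e': 6}
assert d1 & d2 == {'c': 3}
assert d1 - ('a', 'b') == {'c': 3}
assert d1 ^ d2 == {'a': 1, 'b': 2, 'd': 5, 'e': 6}
```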

ASIDE: `d1 | d2` is NOT the same as .update() here (though maybe it should
be)... I believe this stems from the fact that (IIRC :) a normal set() WILL
NOT replace an entry it already has with a new one, if they hash() the
same. IOW, a set will prefer the old object to the new one, and since the
values are tied to the keys, the old value persists.

Something to think about.

-- 

C Anthony

From abarnert at yahoo.com  Tue Feb 17 23:22:54 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Tue, 17 Feb 2015 14:22:54 -0800
Subject: [Python-ideas] Adding ziplib
In-Reply-To: <CAGE7PNKoonfetfHXdP1xiAk8LgQ_SRY=Mo6HnD0vWqd5F8LbkA@mail.gmail.com>
References: <CAO41-mNr1=z_JT=aKjBbOZzrYV5Uc=GG_P5No5mcgNNVMVMcOw@mail.gmail.com>
 <0676A880-8750-443A-A953-932E965EDB12@yahoo.com>
 <CAO41-mMqBm3u+-Lu3RDBdxcW1R3FrPBCUeS6erRzPwMWCf8P2A@mail.gmail.com>
 <20150217042048.GZ24753@tonks>
 <CAO41-mNsxX3woGW4PHrBHjCkp2dO4E-6n-0Kyjt410Y_NZ=K1A@mail.gmail.com>
 <CAGE7PNKoonfetfHXdP1xiAk8LgQ_SRY=Mo6HnD0vWqd5F8LbkA@mail.gmail.com>
Message-ID: <221599BA-C999-48F8-A975-C65FB0F3A08D@yahoo.com>

On Feb 17, 2015, at 13:17, "Gregory P. Smith" <greg at krypto.org> wrote:

> If bikeshedding about the name is all we're doing at this point I wouldn't worry about the ancient BBS era .arc file format that nobody will write a Python library to read. :)  I'd propose "archivers" as a module name, though arclib is also nice.  If you want to really pay homage to a good little-used archive format, call it arjlib. ;)
> 
> "The" problem with zipfile, and tarfile and similar today is that they are all under maintained and have a variety of things inherent to the underlying archive formats that are not common to all of them.  ie: zip files have an end of archive central directory index.  tar files do not.  rar files can be seen as a seemingly (sadly?) common successor to zip files.  No doubt there are others (7z? cpio?).  zip files compress individual files, tar files don't support compression as it is applied to the whole archive after the fact.  The amount and type of directory information available within each of these varies. And that doesn't even touch multi file multi part archive support that some support for very horrible hacky reasons.
> 
> coming up with a common API for these with the key features needed by all is interesting, doubly so for someone pedantic enough to get an implementation of each correct, but it should be done as a third-party library and should ideally not use the stdlib zip or tar support at all. Do such libraries exist for other languages? Any C++ that could be reused? Is there such a thing as high-quality code for this kind of task?

I already mentioned BSD libarchive. (It's in C, not C++, but why would you want C++ for this?) It does pretty much everything you're asking for, and more (more formats, like ISO disk images; format auto-detection by name or magic; etc.).

And python-libarchive seems like a pretty up-to-date and well-maintained wrapper. Plus, as I mentioned, it has compatibility wrappers to offer a subset of its functionality with the zipfile and tarfile APIs, making it easy to modify legacy code--e.g., the original problem that started this thread, modifying an app to use zipfile instead of tarfile, would probably only require a couple lines of code (import the tarfile compat library, change the filenames).

But the question isn't whether someone can build an ultimate archive library; something that's self-contained, and easy to build and distribute, but only handles the most important types and the basic level of functionality the current stdlib provides (but in a more consistent way) would still be very useful.
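The stdlib modules already allow a thin common layer over the two most important formats. A hedged sketch (a toy "common API", nothing like a full libarchive wrapper):

```python
import os
import tarfile
import tempfile
import zipfile


def list_archive(path):
    """List member names regardless of archive format (zip or tar)."""
    if zipfile.is_zipfile(path):
        with zipfile.ZipFile(path) as zf:
            return zf.namelist()
    if tarfile.is_tarfile(path):
        with tarfile.open(path) as tf:
            return tf.getnames()
    raise ValueError('unrecognized archive format: %r' % path)


# quick demo: one zip and one tar holding the same file
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, 'a.txt')
with open(src, 'w') as f:
    f.write('hello')
zpath = os.path.join(tmp, 'demo.zip')
with zipfile.ZipFile(zpath, 'w') as zf:
    zf.write(src, arcname='a.txt')
tpath = os.path.join(tmp, 'demo.tar')
with tarfile.open(tpath, 'w') as tf:
    tf.add(src, arcname='a.txt')
assert list_archive(zpath) == list_archive(tpath) == ['a.txt']
```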

From abarnert at yahoo.com  Tue Feb 17 23:30:46 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Tue, 17 Feb 2015 14:30:46 -0800
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
In-Reply-To: <CADiSq7ca1NdotvDk0OMc2LQDn64WAUoRfz64H=c1gpPsnvsvdg@mail.gmail.com>
References: <CAK9R32TNjkkJpxA6LC5GqU0digZv9TQZ=wuveA+CvjaxqBD50w@mail.gmail.com>
 <CA+=+wqAz6C5_QCLe2Cfc-uFi-5hWP7B-M9Q2yqi2K75UHDstcQ@mail.gmail.com>
 <CAK9R32RovaEawxzFf03Kq1f+WgC4NOvhWhxjLOwjyaSt90Nr5A@mail.gmail.com>
 <CADiSq7d0APs4c7y1pUJhdPQtmLssCdGaAfLVTe7NaZJX4960Zw@mail.gmail.com>
 <CAK9R32SaL7+RLJwKbbWBinteMe68v=zL-b=js1dv97zm8-3i4A@mail.gmail.com>
 <CADiSq7ca1NdotvDk0OMc2LQDn64WAUoRfz64H=c1gpPsnvsvdg@mail.gmail.com>
Message-ID: <961518CF-9039-4695-BBAC-9F828EE41E7C@yahoo.com>

On Feb 17, 2015, at 14:15, Nick Coghlan <ncoghlan at gmail.com> wrote:
> On 17 Feb 2015 23:13, "Martin Teichmann" <lkb.teichmann at gmail.com> wrote:
> >
> > > You're asking us to assume covariant metaclasses in a subset of cases,
> > > while retaining the assumption of contravariance otherwise. That isn't
> > > going to happen - we expect type designers to declare metaclass
> > > covariance by creating an explicit subclass and using it where
> > > appropriate..
> >
> > How do you solve my example with ABCs? That means if a
> > QObject (from PyQt) tries to implement an ABC by inheritance,
> > then one first has to create a new metaclass inheriting from
> > ABCMeta and whatever the PyQt metaclass is.
> >
> > Sure, one could register the class to the ABC, but then you
> > lose the option of inheriting stuff from the ABC. One could
> > also write an adaptor - so using composition. But to me that
> > defeats the point of the concept of ABCs: that you can make
> > anything fit appropriately. In general, that smells to me like
> > a lot of boilerplate code.
> 
> If you're doing something incredibly hard and incredibly rare, the code you have to write to tell the interpreter how to handle it is not really boilerplate - the "boilerplate" appellation is generally reserved for verbose solutions to *common* problems. This is not a common problem - using a custom metaclass at all is rare (as it should be), and combining them even more so.
> 
> If you specifically want to write QtABCs, then the appropriate solution is to either write your own combining metaclass, or else to go to whichever Qt wrapper project you're using and propose adding it there.
> 
> That's a readable and relatively simple approach that will make sense to anyone who understands Python metaclasses, and will also work in all existing versions of Python that support ABCs.
> 
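
The explicit combining metaclass Nick describes is a one-time, one-line declaration. A sketch, with a hypothetical QtMeta standing in for the real PyQt wrapper metaclass:

```python
from abc import ABCMeta, abstractmethod


class QtMeta(type):
    """Hypothetical stand-in for a GUI toolkit wrapper's metaclass."""


class QtABCMeta(QtMeta, ABCMeta):
    """The explicit combining metaclass, declared once and reused."""


class QtABC(metaclass=QtABCMeta):
    @abstractmethod
    def run(self):
        ...


class Worker(QtABC):
    def run(self):
        return 'ok'


assert Worker().run() == 'ok'
try:
    QtABC()  # still abstract -- the ABCMeta machinery is fully active
except TypeError:
    pass
else:
    raise AssertionError('expected TypeError')
```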
> It is *not* a good enough reason to try to teach the interpreter how to automatically combine metaclasses. Even if you could get it to work reliably (which seems unlikely), it would be impossible to get it to a point where a *human* (even one who already understood metaclasses in general) could look at the code and easily tell you what was going on behind the scenes.
> 
> >
> > > PEP 422 bypasses the default assumption of metaclass contravariance by
> > > operating directly on type instances, without touching any other part
> > > of the metaclass machinery.
> >
> > So there is a problem with the metaclass machinery, and we
> > solve that by making one certain special case work? When is the
> > next special case going to show up?
> 
> Probably never. We've had the metaclass machinery in place for over a decade, and the "run code at subclass creation time" pattern is the only particularly common metaclass idiom I am aware of where we would benefit from a simpler alternative that people can use without first needing to understand how metaclasses work.
> 
Well, this is the second such pattern discovered in 14 years. (Remember that Py3k added __prepare__ to make it possible to simplify and clean up the machinery. And now there's this.) So, maybe in another 7 years or so, someone will come up with another such pattern.

But I think that's fine. If Python 3.11 can benefit from another small change to type that nobody's thought of yet, I think there's a good chance you'd still want to make that change even if it were possible to work around it with a more powerful metaclass system. (After all, you want PEP 422 even though it's already possible to work around that need with the existing metaclass system.)

So I think you're right that the argument "We don't want to keep modifying type" doesn't really work here.
> (I'm aware that "common" is a relative term when we're discussing metaclasses, but establishing or otherwise ensuring class invariants in subclasses really is one of their main current use cases)
> 
> > Will it also be included
> > into type?
> 
> Unfortunately, you're making the same mistake the numeric folks did before finally asking specifically for a matrix multiplication operator: trying to teach the interpreter to solve a hard general problem (in their case, defining custom infix operators), rather than doing pattern extraction on known use cases to create an easier-to-use piece of syntactic sugar. When the scientific & data analysis community realised that the general infix operator case didn't *need* solving, and that all they really wanted was a matrix multiplication operator, that's what they asked for and that's what they got for Python 3.5+.
> 
> Changes to the language syntax and semantics should make it clear to *readers* that a particular idiom is in use, rather than helping to hide it from them.
> 
> PEP 422 achieves that - as soon as you see "__init_class__" you know that inheriting from that base class will just run code at subclass initialisation time, rather than potentially making arbitrary changes to the subclass behaviour the way a custom metaclass can.
> 
> By contrast, implicitly combining metaclasses at runtime does nothing to make the code easier to read - it only makes it (arguably) easier to write by letting the developer omit the currently explicit combining metaclass. That means that not only does the interpreter have to be taught how to do implicit metaclass combination but future maintainers have to learn to do it in their heads, making the code *harder* to read, rather than easier.
> 
> Regards,
> Nick.
> 
> >
> > Greetings
> >
> > Martin
> > _______________________________________________
> > Python-ideas mailing list
> > Python-ideas at python.org
> > https://mail.python.org/mailman/listinfo/python-ideas
> > Code of Conduct: http://python.org/psf/codeofconduct/
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/

From ncoghlan at gmail.com  Tue Feb 17 23:33:27 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 18 Feb 2015 08:33:27 +1000
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
In-Reply-To: <CAK9R32Ts=8YhWpyKVhJGrnD_2US0S6B+nZAEokRtz6s8ysnQcQ@mail.gmail.com>
References: <CAK9R32TNjkkJpxA6LC5GqU0digZv9TQZ=wuveA+CvjaxqBD50w@mail.gmail.com>
 <CA+=+wqAz6C5_QCLe2Cfc-uFi-5hWP7B-M9Q2yqi2K75UHDstcQ@mail.gmail.com>
 <CAK9R32RovaEawxzFf03Kq1f+WgC4NOvhWhxjLOwjyaSt90Nr5A@mail.gmail.com>
 <CADiSq7d0APs4c7y1pUJhdPQtmLssCdGaAfLVTe7NaZJX4960Zw@mail.gmail.com>
 <CAK9R32Ts=8YhWpyKVhJGrnD_2US0S6B+nZAEokRtz6s8ysnQcQ@mail.gmail.com>
Message-ID: <CADiSq7dQAEfGiUY8wznd+p5KxBn6Fm3MXFTjDAddFvk55iUaGQ@mail.gmail.com>

On 17 Feb 2015 23:29, "Martin Teichmann" <lkb.teichmann at gmail.com> wrote:
>
> Hi again,
>
> OK, so if my ideas don't get through, would it at least be possible
> to make PEP 422 have an __init_subclass__ only initializing the
> subclasses of the class? That would have the great advantage of
> being back-portable as a metaclass all the way down to python 2.7.
> I guess this would help PEP 422 being actually used a lot.

Zope already implements __classinit__ via a metaclass, so I'm not clear on
how this addition would help with a backport of the __init_class__ spelling
to Python 2.

PEP 422 largely proposes standardisation of an existing custom metaclass
capability (using a different method name to avoid conflicting) for
improved ease of use, discoverability and compatibility, rather than making
any fundamental changes to the type system design.

I agree a clean idiom for detecting whether you're in the base class or not
might be useful, but even without specific support in the PEP you'll at
least be able to do that either by inspecting the MRO, or else by assuming
that the class name not being bound yet means you're still in the process
of creating it.
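
For example, a metaclass can skip the class whose namespace defines the hook and only run it for genuine subclasses. A hedged sketch (using the PEP 422 spelling; not Zope's or the PEP's actual code):

```python
class InitSubclassesMeta(type):
    def __init__(cls, name, bases, namespace):
        super().__init__(name, bases, namespace)
        # skip the class that defines the hook; run it for subclasses only
        if '__init_class__' not in namespace:
            hook = getattr(cls, '__init_class__', None)
            if hook is not None:
                hook(cls)


class Base(metaclass=InitSubclassesMeta):
    initialized = []

    def __init_class__(cls):
        Base.initialized.append(cls.__name__)


class Child(Base):
    pass


assert Base.initialized == ['Child']  # Base itself was skipped
```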

Regards,
Nick.

From anthony at xtfx.me  Tue Feb 17 23:38:14 2015
From: anthony at xtfx.me (C Anthony Risinger)
Date: Tue, 17 Feb 2015 16:38:14 -0600
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CAGAVQTGxGWpHfp-p22=1+MiukbHsoRxjBNVfr08JbzabKSfV4A@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <253D2851-CEE1-46F2-8F9B-D0D86C8BDCC4@stufft.io>
 <f5693415-0e68-4212-8d03-f07e2a53d65a@googlegroups.com>
 <mbrbet$4es$1@ger.gmane.org>
 <CAGAVQTGxGWpHfp-p22=1+MiukbHsoRxjBNVfr08JbzabKSfV4A@mail.gmail.com>
Message-ID: <CAGAVQTEPfYUmVw1L8EMSE8BXEyydcxJp790yFjhasgpgx5rUBw@mail.gmail.com>

On Tue, Feb 17, 2015 at 4:22 PM, C Anthony Risinger <anthony at xtfx.me> wrote:
>
>
> All of the operations support arbitrary iterables as the RHS! This is NICE.
>
> ASIDE: `d1 | d2` is NOT the same as .update() here (though maybe it should
> be)... I believe this stems from the fact that (IIRC :) a normal set() WILL
> NOT replace an entry it already has with a new one, if they hash() the
> same. IOW, a set will prefer the old object to the new one, and since the
> values are tied to the keys, the old value persists.
>
> Something to think about.
>

Forgot to mention the impl supports arbitrary iterables as the LHS as well,
with values simply becoming None. Whether RHS or LHS, a type(fancy_dict) is
always returned.

I really really want to reiterate the fact that values are IGNORED... if
you just treat a dict like a set with values hanging off the key, the
implementation stays perfectly consistent with sets, and does the same
thing no matter what it's operating with.  Just pretend like the values
aren't there and what you get in the end makes a lot of sense.

-- 

C Anthony

From anthony at xtfx.me  Wed Feb 18 04:30:30 2015
From: anthony at xtfx.me (C Anthony Risinger)
Date: Tue, 17 Feb 2015 21:30:30 -0600
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CAGAVQTEPfYUmVw1L8EMSE8BXEyydcxJp790yFjhasgpgx5rUBw@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <253D2851-CEE1-46F2-8F9B-D0D86C8BDCC4@stufft.io>
 <f5693415-0e68-4212-8d03-f07e2a53d65a@googlegroups.com>
 <mbrbet$4es$1@ger.gmane.org>
 <CAGAVQTGxGWpHfp-p22=1+MiukbHsoRxjBNVfr08JbzabKSfV4A@mail.gmail.com>
 <CAGAVQTEPfYUmVw1L8EMSE8BXEyydcxJp790yFjhasgpgx5rUBw@mail.gmail.com>
Message-ID: <CAGAVQTGEvVL43VG5zEhuRK+1YzcmLovTN4n64tRS-DYa35VxuA@mail.gmail.com>

On Tue, Feb 17, 2015 at 4:38 PM, C Anthony Risinger <anthony at xtfx.me> wrote:

> On Tue, Feb 17, 2015 at 4:22 PM, C Anthony Risinger <anthony at xtfx.me>
> wrote:
>>
>>
>> All of the operations support arbitrary iterables as the RHS! This is
>> NICE.
>>
>> ASIDE: `d1 | d2` is NOT the same as .update() here (though maybe it
>> should be)... I believe this stems from the fact that (IIRC :) a normal
>> set() WILL NOT replace an entry it already has with a new one, if they
>> hash() the same. IOW, a set will prefer the old object to the new one, and
>> since the values are tied to the keys, the old value persists.
>>
>> Something to think about.
>>
>
> Forgot to mention the impl supports arbitrary iterables as the LHS as
> well, with values simply becoming None. Whether RHS or LHS, a
> type(fancy_dict) is always returned.
>
> I really really want to reiterate the fact that values are IGNORED... if
> you just treat a dict like a set with values hanging off the key, the
> implementation stays perfectly consistent with sets, and does the same
> thing no matter what it's operating with.  Just pretend like the values
> aren't there and what you get in the end makes a lot of sense.
>

Trying not to go OT for this thread (or list), run the following on python2
or python3 (http://pastie.org/9958218):



    import itertools


    class thing(str):

        n = itertools.count()

        def __new__(cls, *args, **kwds):
            self = super(thing, cls).__new__(cls, *args, **kwds)
            self.n = next(self.n)
            print('    LIFE! %s' % self.n)
            return self

        def __del__(self):
            print('    WHYY? %s' % self.n)


    print('---------------( via constructor )')
    # bug? literal does something different than constructor?
    set_of_things = set([thing('hi'), thing('hi'), thing('hi')])

    # reset so it's easy to see the difference
    thing.n = itertools.count()

    print('---------------( via literal )')
    # use a different identifier else GC on overwrite
    unused_set_of_things = {thing('hi'), thing('hi'), thing('hi')}

    print('---------------( adding another )')
    set_of_things.add(thing('hi'))

    print('---------------( done )')



you will see something like this:



---------------( via constructor )
    LIFE! 0
    LIFE! 1
    LIFE! 2
    WHYY? 2
    WHYY? 1
---------------( via literal )
    LIFE! 0
    LIFE! 1
    LIFE! 2
    WHYY? 1
    WHYY? 0
---------------( adding another )
    LIFE! 3
    WHYY? 3
---------------( done )
    WHYY? 0
    WHYY? 2



as shown, adding to a set discards the *new* object, not the old (and as an
aside, seems to be a little buggy during construction, else the final 2
would have the same id!)... should this happen?

I'm not versed enough in the math behind it to know if it's expected or
not, but as it stands, to remain compatible with sets, `d1 | d2` should
behave like it does in my code (prefer the first, not the last).  I kinda
like this, because it makes dict.__or__ a *companion* to .update(), not a
replacement (since update prefers the last).

-- 

C Anthony

From stephen at xemacs.org  Wed Feb 18 05:08:13 2015
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Wed, 18 Feb 2015 13:08:13 +0900
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CAGAVQTGEvVL43VG5zEhuRK+1YzcmLovTN4n64tRS-DYa35VxuA@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <253D2851-CEE1-46F2-8F9B-D0D86C8BDCC4@stufft.io>
 <f5693415-0e68-4212-8d03-f07e2a53d65a@googlegroups.com>
 <mbrbet$4es$1@ger.gmane.org>
 <CAGAVQTGxGWpHfp-p22=1+MiukbHsoRxjBNVfr08JbzabKSfV4A@mail.gmail.com>
 <CAGAVQTEPfYUmVw1L8EMSE8BXEyydcxJp790yFjhasgpgx5rUBw@mail.gmail.com>
 <CAGAVQTGEvVL43VG5zEhuRK+1YzcmLovTN4n64tRS-DYa35VxuA@mail.gmail.com>
Message-ID: <87lhjvhnf6.fsf@uwakimon.sk.tsukuba.ac.jp>

C Anthony Risinger writes:

 > I'm not versed enough in the math behind it to know if it's expected or
 > not, but as it stands, to remain compatible with sets, `d1 | d2` should
 > behave like it does in my code (prefer the first, not the last).  I kinda
 > like this, because it makes dict.__or__ a *companion* to .update(), not a
 > replacement (since update prefers the last).

But this is exactly the opposite of what the people who advocate use
of an operator want.  As far as I can see, all of them want update
semantics, because that's the more common use case where the current
idioms feel burdensome.
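
The two semantics differ only on duplicate keys, and with dict unpacking (Python 3.5+) both are easy to spell:

```python
d1 = {'a': 1, 'c': 3}
d2 = {'c': 4, 'd': 5}

first_wins = {**d2, **d1}   # set-like union: d1's value for 'c' survives
last_wins = {**d1, **d2}    # update() semantics: d2's value for 'c' wins

assert first_wins == {'a': 1, 'c': 3, 'd': 5}
assert last_wins == {'a': 1, 'c': 4, 'd': 5}
```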


From abarnert at yahoo.com  Wed Feb 18 05:38:39 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Tue, 17 Feb 2015 20:38:39 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CAGAVQTGEvVL43VG5zEhuRK+1YzcmLovTN4n64tRS-DYa35VxuA@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <253D2851-CEE1-46F2-8F9B-D0D86C8BDCC4@stufft.io>
 <f5693415-0e68-4212-8d03-f07e2a53d65a@googlegroups.com>
 <mbrbet$4es$1@ger.gmane.org>
 <CAGAVQTGxGWpHfp-p22=1+MiukbHsoRxjBNVfr08JbzabKSfV4A@mail.gmail.com>
 <CAGAVQTEPfYUmVw1L8EMSE8BXEyydcxJp790yFjhasgpgx5rUBw@mail.gmail.com>
 <CAGAVQTGEvVL43VG5zEhuRK+1YzcmLovTN4n64tRS-DYa35VxuA@mail.gmail.com>
Message-ID: <B3E27F2D-D89E-428E-BE29-A544D247D583@yahoo.com>

On Feb 17, 2015, at 19:30, C Anthony Risinger <anthony at xtfx.me> wrote:

> On Tue, Feb 17, 2015 at 4:38 PM, C Anthony Risinger <anthony at xtfx.me> wrote:
>> On Tue, Feb 17, 2015 at 4:22 PM, C Anthony Risinger <anthony at xtfx.me> wrote:
>>> 
>>> 
>>> All of the operations support arbitrary iterables as the RHS! This is NICE.
>>> 
>>> ASIDE: `d1 | d2` is NOT the same as .update() here (though maybe it should be)... I believe this stems from the fact that (IIRC :) a normal set() WILL NOT replace an entry it already has with a new one, if they hash() the same. IOW, a set will prefer the old object to the new one, and since the values are tied to the keys, the old value persists.

I'm pretty sure that's just an implementation artifact of CPython, not something the language requires. If an implementation wanted to build sets directly on dicts and handle s.add(x) as s._dict[x] = None, or build them on some other language's set type that has a replace method instead of an add-if-new method, that would be perfectly valid.

And the whole point of sets is to deal with objects that are interchangeable if equal. If you write {1, 2} U {2, 3}, it's meaningless to ask which 2 ends up in the result; 2 is 2. The fact that Python lets you tack on other information to distinguish two 2's means that an implementation has to make a choice there, but it doesn't proscribe any choice.
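
The "old object survives" behaviour is easy to observe in CPython with elements that are equal but distinguishable (again, an implementation detail rather than a language guarantee):

```python
# add() of an equal element is a no-op, so the object already present wins
s = {1}
s.add(1.0)                  # 1.0 == 1 and hash(1.0) == hash(1)
elem = next(iter(s))
assert len(s) == 1
assert elem == 1 and type(elem) is int  # the original int 1 was kept
```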

>>> 
>>> Something to think about.
>> 
>> Forgot to mention the impl supports arbitrary iterables as the LHS as well, with values simply becoming None. Whether RHS or LHS, a type(fancy_dict) is always returned.
>> 
>> I really really want to reiterate the fact that values are IGNORED... if you just treat a dict like a set with values hanging off the key, the implementation stays perfectly consistent with sets, and does the same thing no matter what it's operating with.  Just pretend like the values aren't there and what you get in the end makes a lot of sense.
> 
> 
> Trying not to go OT for this thread (or list), run the following on python2 or python3 (http://pastie.org/9958218):
> 
> 
> 
>     import itertools
>     
>     
>     class thing(str):
>     
>         n = itertools.count()
>     
>         def __new__(cls, *args, **kwds):
>             self = super(thing, cls).__new__(cls, *args, **kwds)
>             self.n = next(self.n)
>             print('    LIFE! %s' % self.n)
>             return self
>     
>         def __del__(self):
>             print('    WHYY? %s' % self.n)
>     
>     
>     print('---------------( via constructor )')
>     # bug? literal does something different than constructor?

Why should they do the same thing? One is taking an iterable, and it has to process the elements in iteration order; the other is taking whatever the compiler found convenient to store, in whatever order it chose.

If you want to see how CPython in particular compiles the literal, the dis module will show you: it pushes each element on the stack from left to right, then it does a BUILD_SET with the count. You can find the code for BUILD_SET in the main loop in ceval.c, but basically it just pops each element and adds it; since they're in reverse order on the stack, the rightmost one gets added first. (If you're wondering how BUILD_LIST avoids building lists backward, it counts down from the size, inserting each element at the count's index. So, it builds the list backward out of backward elements.)
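
Inspecting the bytecode is a one-liner (names rather than constants are used below so no constant-folding gets in the way; the exact opcode set varies across CPython versions):

```python
import dis

# a set display over names compiles to LOAD_NAME x3 plus a BUILD_SET
code = compile("{a, b, c}", "<example>", "eval")
ops = [ins.opname for ins in dis.get_instructions(code)]
assert "BUILD_SET" in ops
dis.dis(code)  # print the full listing
```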

>     set_of_things = set([thing('hi'), thing('hi'), thing('hi')])
>     
>     # reset so it's easy to see the difference
>     thing.n = itertools.count()
>     
>     print('---------------( via literal )')
>     # use a different identifier else GC on overwrite
>     unused_set_of_things = {thing('hi'), thing('hi'), thing('hi')}
>     
>     print('---------------( adding another )')
>     set_of_things.add(thing('hi'))
>     
>     print('---------------( done )')
> 
> 
> 
> you will see something like this:
> 
> 
> 
> ---------------( via constructor )
>     LIFE! 0
>     LIFE! 1
>     LIFE! 2
>     WHYY? 2
>     WHYY? 1
> ---------------( via literal )
>     LIFE! 0
>     LIFE! 1
>     LIFE! 2
>     WHYY? 1
>     WHYY? 0
> ---------------( adding another )
>     LIFE! 3
>     WHYY? 3
> ---------------( done )
>     WHYY? 0
>     WHYY? 2
> 
> 
> 
> as shown, adding to a set discards the *new* object, not the old (and as an aside, seems to be a little buggy during construction, else the final 2 would have the same id!)... should this happen?
> 
> I'm not versed enough in the math behind it to know if it's expected or not, but as it stands, to remain compatible with sets, `d1 | d2` should behave like it does in my code (prefer the first, not the last).  I kinda like this, because it makes dict.__or__ a *companion* to .update(), not a replacement (since update prefers the last).
> 
> -- 
> 
> C Anthony
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/

From chris.barker at noaa.gov  Wed Feb 18 05:48:33 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Tue, 17 Feb 2015 20:48:33 -0800
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CAGAVQTGxGWpHfp-p22=1+MiukbHsoRxjBNVfr08JbzabKSfV4A@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <253D2851-CEE1-46F2-8F9B-D0D86C8BDCC4@stufft.io>
 <f5693415-0e68-4212-8d03-f07e2a53d65a@googlegroups.com>
 <mbrbet$4es$1@ger.gmane.org>
 <CAGAVQTGxGWpHfp-p22=1+MiukbHsoRxjBNVfr08JbzabKSfV4A@mail.gmail.com>
Message-ID: <CALGmxE+SxCiBUm4rUCAOa_vhGO8EfqEVpOkmXM+BBYk--2DDXw@mail.gmail.com>

On Tue, Feb 17, 2015 at 2:22 PM, C Anthony Risinger <anthony at xtfx.me> wrote:

> The premise taken here is that dicts ARE sets that simply happen to have
> values associated with them... hence all operations are against keys ONLY.
> The fact that values tag along is irrelevant.
>

but dicts DO have values, and that's their entire reason for existence --
if the values were irrelevant, you'd use a set....

And because values are important -- there is no such thing as "works the
same as a set". For instance:

# union (first key winning here... IIRC this is how sets actually work)
> >>> d1 | d2
> {'a': 1, 'b': 2, 'c': 3, 'd': 5, 'e': 6}
>

The fact that sets may, under the hood, keep the first key is an
implementation detail -- by definition the first and second duplicate keys
are the same. So there is no guidance here whatsoever as to what to do
when unioning dicts.

Oh, except .update() provides a precedent that has proven to be useful.
Keeping the second value sure feels more natural to me.

Oh, and I'd still prefer +; I don't think most users think of merging two
dicts together as a boolean logical operation....

All of the operations support arbitrary iterables as the RHS! This is NICE.
>

not so sure about that -- again, dicts have values, that's why we use them.
Maybe defaultdict could work this way, though.

-Chris


-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150217/3d8eed58/attachment.html>

From anthony at xtfx.me  Wed Feb 18 05:50:27 2015
From: anthony at xtfx.me (C Anthony Risinger)
Date: Tue, 17 Feb 2015 22:50:27 -0600
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <87lhjvhnf6.fsf@uwakimon.sk.tsukuba.ac.jp>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <253D2851-CEE1-46F2-8F9B-D0D86C8BDCC4@stufft.io>
 <f5693415-0e68-4212-8d03-f07e2a53d65a@googlegroups.com>
 <mbrbet$4es$1@ger.gmane.org>
 <CAGAVQTGxGWpHfp-p22=1+MiukbHsoRxjBNVfr08JbzabKSfV4A@mail.gmail.com>
 <CAGAVQTEPfYUmVw1L8EMSE8BXEyydcxJp790yFjhasgpgx5rUBw@mail.gmail.com>
 <CAGAVQTGEvVL43VG5zEhuRK+1YzcmLovTN4n64tRS-DYa35VxuA@mail.gmail.com>
 <87lhjvhnf6.fsf@uwakimon.sk.tsukuba.ac.jp>
Message-ID: <CAGAVQTGoQko5XW8Sw26EhEv2T7eaRdXwiK+b6vBPd5ki0U=19Q@mail.gmail.com>

On Tue, Feb 17, 2015 at 10:08 PM, Stephen J. Turnbull <stephen at xemacs.org>
wrote:

> C Anthony Risinger writes:
>
>  > I'm not versed enough in the math behind it to know if it's expected or
>  > not, but as it stands, to remain compatible with sets, `d1 | d2` should
>  > behave like it does in my code (prefer the first, not the last).  I
> kinda
>  > like this, because it makes dict.__or__ a *companion* to .update(), not
> a
>  > replacement (since update prefers the last).
>
> But this is exactly the opposite of what the people who advocate use
> of an operator want.  As far as I can see, all of them want update
> semantics, because that's the more common use case where the current
> idioms feel burdensome.
>

True... maybe that really is a good case for the + then, as something like
.update().

Personally, I think making dict be more set-like is way more
interesting/useful, because of the *filtering* capabilities:

# drop keys
d1 -= (keys_ignored, ...)

# apply [inverted] mask
d1 &= (keys_required, ...)
d1 ^= (keys_forbidden, ...)

__or__ would still work like dict.viewkeys.__or__, and behaves like a bulk
.setdefault() which is another neat property:

# same as looping d2 calling d1.setdefault(...)
d1 |= d2

Using + for the .update(...) case seems nice too :)
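
[Editorial sketch, not part of the original message: one concrete reading
of the set-like filtering semantics proposed above, as a dict subclass.
The class name SetDict and the exact operator meanings -- especially ^=
read as "drop forbidden keys" -- are hypothetical interpretations of the
email's examples, not an accepted design.]

```python
class SetDict(dict):
    """dict with set-like key filtering, as sketched in the proposal."""

    def __isub__(self, keys):            # d1 -= keys: drop the given keys
        for k in keys:
            self.pop(k, None)
        return self

    def __iand__(self, keys):            # d1 &= keys: keep only these keys
        keep = set(keys)
        for k in list(self):
            if k not in keep:
                del self[k]
        return self

    def __ixor__(self, keys):            # d1 ^= keys: drop forbidden keys
        for k in set(keys) & self.keys():  # (the email's reading, not true
            del self[k]                    #  symmetric difference)
        return self

    def __ior__(self, other):            # d1 |= d2: bulk setdefault --
        for k, v in other.items():       # existing (first) values win
            self.setdefault(k, v)
        return self
```

Note the contrast with .update(): here |= keeps the first value, which is
exactly the "companion, not replacement" behaviour described upthread.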

-- 

C Anthony
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150217/4ed93b1e/attachment.html>

From mistersheik at gmail.com  Wed Feb 18 09:39:28 2015
From: mistersheik at gmail.com (Neil Girdhar)
Date: Wed, 18 Feb 2015 03:39:28 -0500
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <87lhjvhnf6.fsf@uwakimon.sk.tsukuba.ac.jp>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <253D2851-CEE1-46F2-8F9B-D0D86C8BDCC4@stufft.io>
 <f5693415-0e68-4212-8d03-f07e2a53d65a@googlegroups.com>
 <mbrbet$4es$1@ger.gmane.org>
 <CAGAVQTGxGWpHfp-p22=1+MiukbHsoRxjBNVfr08JbzabKSfV4A@mail.gmail.com>
 <CAGAVQTEPfYUmVw1L8EMSE8BXEyydcxJp790yFjhasgpgx5rUBw@mail.gmail.com>
 <CAGAVQTGEvVL43VG5zEhuRK+1YzcmLovTN4n64tRS-DYa35VxuA@mail.gmail.com>
 <87lhjvhnf6.fsf@uwakimon.sk.tsukuba.ac.jp>
Message-ID: <CAA68w_mNoR0EzLtKeHn1QE6XpW0NE7YU5P4aBTV8c=ntRQRrUQ@mail.gmail.com>

On Tue, Feb 17, 2015 at 11:08 PM, Stephen J. Turnbull <stephen at xemacs.org>
wrote:

> C Anthony Risinger writes:
>
>  > I'm not versed enough in the math behind it to know if it's expected or
>  > not, but as it stands, to remain compatible with sets, `d1 | d2` should
>  > behave like it does in my code (prefer the first, not the last).  I
> kinda
>  > like this, because it makes dict.__or__ a *companion* to .update(), not
> a
>  > replacement (since update prefers the last).
>
> But this is exactly the opposite of what the people who advocate use
> of an operator want.  As far as I can see, all of them want update
> semantics, because that's the more common use case where the current
> idioms feel burdensome.
>
>
yes, +1

also, if this goes through it should be added to collections.abc.Mapping
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150218/47ce0dbe/attachment.html>

From lkb.teichmann at gmail.com  Wed Feb 18 09:50:43 2015
From: lkb.teichmann at gmail.com (Martin Teichmann)
Date: Wed, 18 Feb 2015 09:50:43 +0100
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
In-Reply-To: <CADiSq7dQAEfGiUY8wznd+p5KxBn6Fm3MXFTjDAddFvk55iUaGQ@mail.gmail.com>
References: <CAK9R32TNjkkJpxA6LC5GqU0digZv9TQZ=wuveA+CvjaxqBD50w@mail.gmail.com>
 <CA+=+wqAz6C5_QCLe2Cfc-uFi-5hWP7B-M9Q2yqi2K75UHDstcQ@mail.gmail.com>
 <CAK9R32RovaEawxzFf03Kq1f+WgC4NOvhWhxjLOwjyaSt90Nr5A@mail.gmail.com>
 <CADiSq7d0APs4c7y1pUJhdPQtmLssCdGaAfLVTe7NaZJX4960Zw@mail.gmail.com>
 <CAK9R32Ts=8YhWpyKVhJGrnD_2US0S6B+nZAEokRtz6s8ysnQcQ@mail.gmail.com>
 <CADiSq7dQAEfGiUY8wznd+p5KxBn6Fm3MXFTjDAddFvk55iUaGQ@mail.gmail.com>
Message-ID: <CAK9R32R-PGho__+pfK34nhiUh4x-qMk7dYSYAjB2vskS=izmMQ@mail.gmail.com>

> Zope already implements __classinit__ via a metaclass, so I'm not clear on
> how this addition would help with a backport of the __init_class__ spelling
> to Python 2.

Well, there is just a small issue in the zope implementation. They don't
support super(). So you cannot write:

    from ExtensionClass import Base
    class Spam(Base):
        def __class_init__(self):
            super().__class_init__(self)  # this line raises RuntimeError
            # do something here

The precise error is: RuntimeError: super(): empty __class__ cell

My pure Python implementation with __init_subclass__ supports
super() without problems.

The problem is that at the point they call __class_init__ (at the end
of ExtensionClass.__new__) the __class__ in the methods is not
initialized yet. Interestingly, it also doesn't help to move that call
into __init__, which is still a mystery to me; I was under the
assumption that by the time __init__ runs the class should be fully
initialized. Interestingly, type manages to set the __class__ of the
methods AFTER __init__ has finished. How it manages to do so
I don't know, but probably that's special-cased in C.

> I agree a clean idiom for detecting whether you're in the base class or not
> might be useful, but even without specific support in the PEP you'll at
> least be able to do that either by inspecting the MRO, or else by assuming
> that the class name not being bound yet means you're still in the process of
> creating it.

That sounds very complicated to me. Could you please give me an
example of how I am supposed to do that?

Greetings

Martin
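
[Editorial sketch, not part of the original thread: one concrete reading
of the idioms Nick describes. With an __init_subclass__-style hook --
which landed later as PEP 487, in Python 3.6, postdating this thread --
the base-class case distinguishes itself, because the hook only fires for
subclasses. The class names below are illustrative.]

```python
class Base:
    registered = []

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # This hook fires for subclasses only, never for Base itself,
        # so no explicit "am I the base class?" check is needed here.
        Base.registered.append(cls.__name__)

class Sub(Base):
    pass

assert Base.registered == ['Sub']

# Nick's other idiom: while a class body is still executing, the
# class's own name is not yet bound in the enclosing namespace, so
# code running during class creation can test "'Base' in globals()"
# to tell whether Base itself is still being created.
```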

From ron3200 at gmail.com  Wed Feb 18 16:32:11 2015
From: ron3200 at gmail.com (Ron Adam)
Date: Wed, 18 Feb 2015 10:32:11 -0500
Subject: [Python-ideas] Adding "+" and "+=" operators to dict
In-Reply-To: <CAGAVQTGoQko5XW8Sw26EhEv2T7eaRdXwiK+b6vBPd5ki0U=19Q@mail.gmail.com>
References: <CA+GyjMmGw-Y2FeANG8XHifcbfY8-_0ggpv6zLD5OP=5DmFbP4A@mail.gmail.com>
 <3A2104E9-9397-464C-BDAA-723D047956B1@stufft.io>
 <54DBF60F.3000600@stoneleaf.us>
 <CANc-5Uwwus73QF-nW-nD1TTTQQ7xdAr=TF4pjLK-iXrsP+r8iw@mail.gmail.com>
 <CAPTjJmonmHqnFBVtaSsgSP-GW=GmYhhR2HGbp+c6xYBxF6p3tA@mail.gmail.com>
 <54DC698A.2090909@egenix.com> <20150212085939.GO24753@tonks>
 <CA+GyjM=5=fgotsHzcRc5R71GLALE70=UHpvA-K+anOhQuaBsaw@mail.gmail.com>
 <20150212104310.GA2498@ando.pearwood.info>
 <253D2851-CEE1-46F2-8F9B-D0D86C8BDCC4@stufft.io>
 <f5693415-0e68-4212-8d03-f07e2a53d65a@googlegroups.com>
 <mbrbet$4es$1@ger.gmane.org>
 <CAGAVQTGxGWpHfp-p22=1+MiukbHsoRxjBNVfr08JbzabKSfV4A@mail.gmail.com>
 <CAGAVQTEPfYUmVw1L8EMSE8BXEyydcxJp790yFjhasgpgx5rUBw@mail.gmail.com>
 <CAGAVQTGEvVL43VG5zEhuRK+1YzcmLovTN4n64tRS-DYa35VxuA@mail.gmail.com>
 <87lhjvhnf6.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CAGAVQTGoQko5XW8Sw26EhEv2T7eaRdXwiK+b6vBPd5ki0U=19Q@mail.gmail.com>
Message-ID: <mc2b9s$6o5$1@ger.gmane.org>



On 02/17/2015 11:50 PM, C Anthony Risinger wrote:
> On Tue, Feb 17, 2015 at 10:08 PM, Stephen J. Turnbull
> <stephen at xemacs.org
> <mailto:stephen at xemacs.org>> wrote:
>
>     C Anthony Risinger writes:
>
>       > I'm not versed enough in the math behind it to know if it's expected or
>       > not, but as it stands, to remain compatible with sets, `d1 | d2` should
>       > behave like it does in my code (prefer the first, not the last).  I
>     kinda
>       > like this, because it makes dict.__or__ a *companion* to .update(),
>     not a
>       > replacement (since update prefers the last).
>
>     But this is exactly the opposite of what the people who advocate use
>     of an operator want.  As far as I can see, all of them want update
>     semantics, because that's the more common use case where the current
>     idioms feel burdensome.
>
>
> True... maybe that really is a good case for the + then, as something like
> .update().
>
> Personally, I think making dict be more set-like is way more
> interesting/useful, because of the *filtering* capabilities:

Maybe it would work better as a multi-dict where you can have more than one 
value for a key.   But I think it also is specialised enough that it may be 
better off on pypi.

Cheers,
    Ron

From bryan0731 at gmail.com  Wed Feb 18 16:59:00 2015
From: bryan0731 at gmail.com (Bryan Duarte)
Date: Wed, 18 Feb 2015 08:59:00 -0700
Subject: [Python-ideas] Accessible tools
Message-ID: <D24E9030-1292-4EFB-8D0C-D410137A2956@gmail.com>

Hello group,

I posted a few days ago and did not get a response. Maybe this is not the correct forum for this question and if that is the case I apologize.

I am a blind software engineering student who is conducting research and would like to use Python as the development language. I am in search of a tool or IDE for Python that is accessible with a screen reader. I have tried a number of tools, but none were very accessible, or accessible at all. I use a Mac mostly but can use Windows or Linux if need be. Any help would be great, and again, if this is the wrong list for this question, I apologize.

From me at the-compiler.org  Wed Feb 18 17:08:18 2015
From: me at the-compiler.org (Florian Bruhin)
Date: Wed, 18 Feb 2015 17:08:18 +0100
Subject: [Python-ideas] Accessible tools
In-Reply-To: <D24E9030-1292-4EFB-8D0C-D410137A2956@gmail.com>
References: <D24E9030-1292-4EFB-8D0C-D410137A2956@gmail.com>
Message-ID: <20150218160817.GL24753@tonks>

Hi,

* Bryan Duarte <bryan0731 at gmail.com> [2015-02-18 08:59:00 -0700]:
> I posted a few days ago and did not get a response. Maybe this is
> not the correct forum for this question and if that is the case I
> apologize.

python-ideas is mainly for new ideas and proposals for the Python
language itself. I think the python-list list (with generic python
topics) would probably be the better place.

> I am a blind software engineering student who is conducting research
> and would like to use Python for the developing language. I am in
> search of a tool or IDE for Python that is accessible with a screen
> reader. I have tried a number of tools but none were very accessible
> or accessible at all. I use a Mac mostly but can use Windows or
> Linux if need be. Any help would be great and again if this is the
> wrong list for this question I apologize.

I don't know much about accessibility, but I've heard about people who
find terminal applications a lot more accessible. Maybe something like
vim or emacs suits you more? Keep in mind you don't really need an IDE
to develop with python - any plain text editor works.

For emacs (I think), I've even heard about a tool (Emacspeak?) which
manages indentation using pitched sounds or voice. I guess that might
be useful for Python too.

In the stackoverflow question at [1] some people also recommend Visual
Studio, for which there are Python extensions.

[1] http://stackoverflow.com/questions/118984/how-can-you-program-if-youre-blind

Florian

-- 
http://www.the-compiler.org | me at the-compiler.org (Mail/XMPP)
   GPG: 916E B0C8 FD55 A072 | http://the-compiler.org/pubkey.asc
         I love long mails! | http://email.is-not-s.ms/
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150218/fe829f29/attachment.sig>

From rosuav at gmail.com  Wed Feb 18 17:11:49 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Thu, 19 Feb 2015 03:11:49 +1100
Subject: [Python-ideas] Accessible tools
In-Reply-To: <D24E9030-1292-4EFB-8D0C-D410137A2956@gmail.com>
References: <D24E9030-1292-4EFB-8D0C-D410137A2956@gmail.com>
Message-ID: <CAPTjJmpPj7HKjic+=mL0EbFm16N=59fQwPE3AxEJ6dndGE-0-w@mail.gmail.com>

On Thu, Feb 19, 2015 at 2:59 AM, Bryan Duarte <bryan0731 at gmail.com> wrote:
> Hello group,
>
> I posted a few days ago and did not get a response. Maybe this is not the correct forum for this question and if that is the case I apologize.
>
> I am a blind software engineering student who is conducting research and would like to use Python for the developing language. I am in search of a tool or IDE for Python that is accessible with a screen reader. I have tried a number of tools but none were very accessible or accessible at all. I use a Mac mostly but can use Windows or Linux if need be. Any help would be great and again if this is the wrong list for this question I apologize.
>

You'll probably get better responses on the main list for the
discussion of the use of Python, as opposed to this list, where we
discuss ideas for where Python itself should be heading in the future.
The main discussion list is python-list at python.org and
comp.lang.python (the newsgroup and mailing list gateway to each
other); if you don't get a response there, you may want to consider
other sites such as Stack Overflow.

All the best!

ChrisA

From Nikolaus at rath.org  Wed Feb 18 17:48:35 2015
From: Nikolaus at rath.org (Nikolaus Rath)
Date: Wed, 18 Feb 2015 08:48:35 -0800
Subject: [Python-ideas] Accessible tools
In-Reply-To: <D24E9030-1292-4EFB-8D0C-D410137A2956@gmail.com> (Bryan Duarte's
 message of "Wed, 18 Feb 2015 08:59:00 -0700")
References: <D24E9030-1292-4EFB-8D0C-D410137A2956@gmail.com>
Message-ID: <87wq3fi2sc.fsf@thinkpad.rath.org>

On Feb 18 2015, Bryan Duarte <bryan0731-Re5JQEeQqe8AvxtiuMwx3w at public.gmane.org> wrote:
> Hello group,
>
> I posted a few days ago and did not get a response. Maybe this is not
> the correct forum for this question and if that is the case I
> apologize.

python-list would probably be a better choice,
cf. https://www.python.org/community/lists/.

> I am a blind software engineering student who is conducting research
> and would like to use Python for the developing language. I am in
> search of a tool or IDE for Python that is accessible with a screen
> reader. I have tried a number of tools but none were very accessible
> or accessible at all. I use a Mac mostly but can use Windows or Linux
> if need be.

Hmm. Do you really need an IDE to program in Python? Maybe you could try
Emacs with Emacs Code Browser and Emacspeak?


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

             “Time flies like an arrow, fruit flies like a Banana.”

From bryan0731 at gmail.com  Wed Feb 18 18:30:08 2015
From: bryan0731 at gmail.com (Bryan Duarte)
Date: Wed, 18 Feb 2015 10:30:08 -0700
Subject: [Python-ideas] Accessible tools
In-Reply-To: <87wq3fi2sc.fsf@thinkpad.rath.org>
References: <D24E9030-1292-4EFB-8D0C-D410137A2956@gmail.com>
 <87wq3fi2sc.fsf@thinkpad.rath.org>
Message-ID: <A26C2D65-54DD-4CB3-BA15-96F36BCB959A@gmail.com>

Hello all,

Thank you all for your feedback. I did respond but did not reply all so my message only went to Chris. 

I understand I do not need an IDE to program Python, I was only looking for an IDE for the tools it offers when exploring classes and methods. I am experimenting with different libraries and with an IDE or other tool I can quickly and easily see methods which can be called from different classes. If you are aware of how I can do this using VIM, eMacs or any other tool please let me know. I typically just use VIM or another text editor to write the majority of my code but in this case and some other cases it is more efficient to have these tools that make code development faster and easier. 

Here are the tools I have tried already but I will also be posting to the other group soon.
• pyCharm
• iPy
• iPY notebook
• Idle
• Eclipse
So far Eclipse worked the best but there were still some issues like it freezing when tool tips would pop up. I also use VIM and have my .vimrc modified to auto indent and execute some scripts like pIndent when called. Thanks again everyone for your help even though this is not the correct place for this question. 

p.s if this group handles Python tools by any chance it would be nice to have something like Idle made accessible. 
> On Feb 18, 2015, at 9:48 AM, Nikolaus Rath <Nikolaus at rath.org> wrote:
> 
> On Feb 18 2015, Bryan Duarte <bryan0731-Re5JQEeQqe8AvxtiuMwx3w at public.gmane.org> wrote:
>> Hello group,
>> 
>> I posted a few days ago and did not get a response. Maybe this is not
>> the correct forum for this question and if that is the case I
>> apologize.
> 
> python-list would probably be a better choice,
> cf. https://www.python.org/community/lists/.
> 
>> I am a blind software engineering student who is conducting research
>> and would like to use Python for the developing language. I am in
>> search of a tool or IDE for Python that is accessible with a screen
>> reader. I have tried a number of tools but none were very accessible
>> or accessible at all. I use a Mac mostly but can use Windows or Linux
>> if need be.
> 
> Hmm. Do you really need an IDE to program in Python? Maybe you could try
> Emacs with Emacs Code Browser and Emacspeak?
> 
> 
> Best,
> -Nikolaus
> 
> -- 
> GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
> Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F
> 
>             “Time flies like an arrow, fruit flies like a Banana.”
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/


From tjreedy at udel.edu  Wed Feb 18 20:42:05 2015
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed, 18 Feb 2015 14:42:05 -0500
Subject: [Python-ideas] Accessible tools
In-Reply-To: <A26C2D65-54DD-4CB3-BA15-96F36BCB959A@gmail.com>
References: <D24E9030-1292-4EFB-8D0C-D410137A2956@gmail.com>
 <87wq3fi2sc.fsf@thinkpad.rath.org>
 <A26C2D65-54DD-4CB3-BA15-96F36BCB959A@gmail.com>
Message-ID: <mc2puh$bur$1@ger.gmane.org>

On 2/18/2015 12:30 PM, Bryan Duarte wrote:

> Here are the tools I have tried already  ... Idle ...
> it would be nice to have something like Idle made accessible.

Idle uses tkinter, the interface to the tcl tk gui system.  I have read 
that tk is not designed for accessibility.  There is nothing we can do 
about that.  If our idlelib code makes matters worse, say with tooltips, 
we could try to improve it.  But I just do not know if there is anything 
we could do.

-- 
Terry Jan Reedy


From wes.turner at gmail.com  Wed Feb 18 20:56:17 2015
From: wes.turner at gmail.com (Wes Turner)
Date: Wed, 18 Feb 2015 13:56:17 -0600
Subject: [Python-ideas] Accessible tools
In-Reply-To: <mc2puh$bur$1@ger.gmane.org>
References: <D24E9030-1292-4EFB-8D0C-D410137A2956@gmail.com>
 <87wq3fi2sc.fsf@thinkpad.rath.org>
 <A26C2D65-54DD-4CB3-BA15-96F36BCB959A@gmail.com>
 <mc2puh$bur$1@ger.gmane.org>
Message-ID: <CACfEFw93hquE1=sgQKc5Pyk_jfCqjEa=w8M5nN9qjNp6cWMSHA@mail.gmail.com>

A few (possibly more accessible) alternatives to IDLE:

* IPython in a terminal
* IPython qtconsole (Qt toolkit)
* IPython Notebook may / should have WAI ARIA role tags (which are really
easy to add to most templates)
* Spyder IDE (Qt toolkit) embeds an IPython console with a Run button and
lightweight code completion: https://code.google.com/p/spyderlib/
* I have no idea whether the python-mode Vim plugin is accessible:
https://github.com/klen/python-mode

* Here's my dot vim config: https://github.com/westurner/dotvim

It could be helpful to get a Wiki page together for accessibility: tools,
features, gaps, best practices, etc.
On Feb 18, 2015 1:43 PM, "Terry Reedy" <tjreedy at udel.edu> wrote:

> On 2/18/2015 12:30 PM, Bryan Duarte wrote:
>
>  Here are the tools I have tried already  ... Idle ...
>> it would be nice to have something like Idle made accessible.
>>
>
> Idle uses tkinter, the interface to the tcl tk gui system.  I have read
> that tk is not designed for accessibility.  There is nothing we can do
> about that.  If our idlelib code makes matters worse, say with tooltips, we
> could try to improve it.  But I just do not know if there is anything we
> could do.
>
> --
> Terry Jan Reedy
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150218/def4b3fa/attachment.html>

From abarnert at yahoo.com  Wed Feb 18 23:07:13 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Wed, 18 Feb 2015 14:07:13 -0800
Subject: [Python-ideas] Accessible tools
In-Reply-To: <mc2puh$bur$1@ger.gmane.org>
References: <D24E9030-1292-4EFB-8D0C-D410137A2956@gmail.com>
 <87wq3fi2sc.fsf@thinkpad.rath.org>
 <A26C2D65-54DD-4CB3-BA15-96F36BCB959A@gmail.com> <mc2puh$bur$1@ger.gmane.org>
Message-ID: <0E5FC025-6100-419E-AA56-B92143731DDC@yahoo.com>

On Feb 18, 2015, at 11:42, Terry Reedy <tjreedy at udel.edu> wrote:

> On 2/18/2015 12:30 PM, Bryan Duarte wrote:
> 
>> Here are the tools I have tried already  ... Idle ...
>> it would be nice to have something like Idle made accessible.
> 
> Idle uses tkinter, the interface to the tcl tk gui system.  I have read that tk is not designed for accessibility.  

IIRC, accessibility was one of the key items for Tcl/Tk 8.5, but nobody ended up championing it (it didn't even get a TIP, their equivalent of a PEP), so it got bumped to the unspecified future, where it's remained. So I'm guessing it has slim to no chance of happening before the next 8.5-style Tk revamp, and considering that it was 9 years between 8.0 (the last revamp) and 8.5 and another 5 years between 8.5 and 8.6, that could be a very long way off.



From stephen at xemacs.org  Thu Feb 19 01:50:57 2015
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Thu, 19 Feb 2015 09:50:57 +0900
Subject: [Python-ideas] Accessible tools
In-Reply-To: <A26C2D65-54DD-4CB3-BA15-96F36BCB959A@gmail.com>
References: <D24E9030-1292-4EFB-8D0C-D410137A2956@gmail.com>
 <87wq3fi2sc.fsf@thinkpad.rath.org>
 <A26C2D65-54DD-4CB3-BA15-96F36BCB959A@gmail.com>
Message-ID: <87y4nug1vy.fsf@uwakimon.sk.tsukuba.ac.jp>

Bryan Duarte writes:

 > If you are aware of how I can [browse class definitions] this using
 > VIM, eMacs or any other tool please let me know.

Emacs has a subsystem called CEDET, which provides reasonably good but
not state-of-the-art implementations of many modern code analysis and
refactoring tools.  It includes a code browser (the "Emacs Code
Browser" mentioned earlier is actually the historical origin of CEDET).

The Emacspeak package (which is available separately) is an audible
output system *for Emacs*.  It doesn't just read the screen: it has
access to Emacs internals.  I don't personally have experience with it
but in theory that access could make a big improvement in the accuracy
of its output.

Emacs is a paleolithic design which has recently had GUI features
grafted on to it, but the Emacs maintainers continue to ensure that
its features work well in a terminal.  I think it would be your best
bet, because (as a non-VIM user) my guess is that VIM users mostly
prefer to use VIM as the editor in systems like Eclipse.


From abarnert at yahoo.com  Thu Feb 19 05:01:39 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Wed, 18 Feb 2015 20:01:39 -0800
Subject: [Python-ideas] Accessible tools
In-Reply-To: <87y4nug1vy.fsf@uwakimon.sk.tsukuba.ac.jp>
References: <D24E9030-1292-4EFB-8D0C-D410137A2956@gmail.com>
 <87wq3fi2sc.fsf@thinkpad.rath.org>
 <A26C2D65-54DD-4CB3-BA15-96F36BCB959A@gmail.com>
 <87y4nug1vy.fsf@uwakimon.sk.tsukuba.ac.jp>
Message-ID: <CA87C783-F545-4137-9A4E-4432C74B98ED@yahoo.com>

On Feb 18, 2015, at 16:50, "Stephen J. Turnbull" <stephen at xemacs.org> wrote:

> Emacs is a paleolithic design which has recently had GUI features
> grafted on to it,

Not exactly recently; it goes back to version 19--as in 1992-ish for Lucid and Epoch, 1994 for GNU. In fact, many people think the reason its GUI features are so strange is that they were grafted on too long ago, not too recently, meaning they're stuck with a weird paradigm that goes back to the days before the GUI wars.

> but the Emacs maintainers continue to ensure that
> its features work well in a terminal.  I think it would be your best
> bet, because (as a non-VIM user) my guess is that VIM users mostly
> prefer to use VIM as the editor in systems like Eclipse.

An awful lot of people use vim from the console. In fact, one of the reasons they claim it's better than emacs is that it's tiny enough to fit on embedded/mobile systems and starts up quickly enough to use for a quick one-line edit. Another is that its command sequences are designed to be typed without looking at the screen or moving your hands from normal touch-type position. And most of the people I know who like to use vim graphically still have the same anti-IDE mentality you'd expect from people who prefer vi, and use its built-in GUI.

The reason not to use vim is (or, rather, might be) that it doesn't have nearly as many customizations and extensions available, so there may not be any way to configure it the way the OP needs. With emacs, pretty much anything is possible; the only question is whether it's easy enough...

From opensource at ronnypfannschmidt.de  Thu Feb 19 08:16:33 2015
From: opensource at ronnypfannschmidt.de (Ronny Pfannschmidt)
Date: Thu, 19 Feb 2015 08:16:33 +0100
Subject: [Python-ideas] introspectable assertions
Message-ID: <60d14266-4fdc-462a-af61-e7edcb5469e0@ronnypfannschmidt.de>


Hi,

the idea is that if 

-> from __future__ import assertion_introspection

then 

-> assert foo.bar(baz) == abc.def(), msg
DetailedAssertionError: {msg}
 foo -> <foo...>
 baz -> <bar ...>
 foo.bar() -> <...>
 abc -> ...
 abc.def() -> ...
 foo.bar(baz) == abc.def() -> ...
 
it would help introspection and testing utilities

-- Ronny
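
[Editorial sketch, not part of the original message: a rough approximation
of the proposed introspection, evaluating every sub-expression of an
assertion for a report. pytest implements a production version of this
idea via AST rewriting; the function name explain_assertion and the
report format here are hypothetical.]

```python
import ast

def explain_assertion(expr_src, namespace):
    """Evaluate each sub-expression of an assertion and report its value."""
    tree = ast.parse(expr_src, mode="eval")
    report = []
    for node in ast.walk(tree):
        if not isinstance(node, ast.expr):
            continue  # skip operators, contexts, and the Expression wrapper
        src = ast.unparse(node)
        try:
            value = eval(compile(ast.Expression(body=node), "<sub>", "eval"),
                         dict(namespace))
        except Exception as exc:
            report.append(f"{src} -> raised {exc!r}")
        else:
            report.append(f"{src} -> {value!r}")
    return report

# On failure, a testing tool could print something like the sketch above:
for line in explain_assertion("x + y == 6", {"x": 2, "y": 3}):
    print(line)
```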

From opensource at ronnypfannschmidt.de  Thu Feb 19 08:25:34 2015
From: opensource at ronnypfannschmidt.de (Ronny Pfannschmidt)
Date: Thu, 19 Feb 2015 08:25:34 +0100
Subject: [Python-ideas] hookable assertions - to support testing utilities
	and debugging
Message-ID: <ad715911-43e2-4953-9cd9-a1de46e7b9fc@ronnypfannschmidt.de>


Hi,

the idea is to have the assert statement call a global method,
just like import does

one could then do something like

with exception.collect_assertions:
 assert myobj.foo is bar
 assert myobj.baz == 123

in order to do groups of assertions, with more debugging information on
failed assertions

it could also enable more interesting context managers like 

with testtools.fail_lazy:
 ...  # this would fail the test but not stop its execution in the middle

with testtools.informative_assert:
  ...  # this will get logged but the test run won't be considered failed
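
[Editorial sketch, not part of the original message: today's Python has
no global assert hook, so this approximates the "collect, then fail once"
idea with an explicit check() method instead of the assert statement.
The names collect_assertions and check are hypothetical.]

```python
from contextlib import contextmanager

class _Collector:
    """Records failed checks instead of raising immediately."""
    def __init__(self):
        self.failures = []

    def check(self, condition, message="assertion failed"):
        if not condition:
            self.failures.append(message)

@contextmanager
def collect_assertions():
    """Group several checks; raise one combined error at block exit."""
    collector = _Collector()
    yield collector
    if collector.failures:
        raise AssertionError("; ".join(collector.failures))
```

Usage mirrors the proposal: several checks run to completion, and a single
AssertionError listing every failure is raised when the block ends.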

From rosuav at gmail.com  Thu Feb 19 08:31:10 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Thu, 19 Feb 2015 18:31:10 +1100
Subject: [Python-ideas] introspectable assertions
In-Reply-To: <60d14266-4fdc-462a-af61-e7edcb5469e0@ronnypfannschmidt.de>
References: <60d14266-4fdc-462a-af61-e7edcb5469e0@ronnypfannschmidt.de>
Message-ID: <CAPTjJmoKV8dF2_F5U3+82XpTPsae=RAV=rN23w5ABnJL2e1F_g@mail.gmail.com>

On Thu, Feb 19, 2015 at 6:16 PM, Ronny Pfannschmidt
<opensource at ronnypfannschmidt.de> wrote:
> Hi,
>
> the idea is that if
> -> from __future__ import assertion_introspection
>
> then
> -> assert foo.bar(baz) == abc.def(), msg
> DetailedAssertionError: {msg}
> foo -> <foo...>
> baz -> <bar ...>
> foo.bar() -> <...>
> abc -> ...
> abc.def() -> ...
> foo.bar(baz) == abc.def() -> ...
>
> it would help introspection and testing utilities
>
> -- Ronny

So what you want is something which does the equivalent of REXX's
"trace i" for a failing assertion. That can actually be done by
MacroPy, which I've recently been looking into for other reasons.
Check out its "Tracing" facilities:

https://github.com/lihaoyi/macropy#tracing

There's even an example of this exact feature - a variant of assert
called require, which traces its results.

I haven't actually used that part of MacroPy, but check it out,
chances are it does what you want!

ChrisA

From stephen at xemacs.org  Thu Feb 19 09:13:53 2015
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Thu, 19 Feb 2015 17:13:53 +0900
Subject: [Python-ideas] Accessible tools
In-Reply-To: <CA87C783-F545-4137-9A4E-4432C74B98ED@yahoo.com>
References: <D24E9030-1292-4EFB-8D0C-D410137A2956@gmail.com>
 <87wq3fi2sc.fsf@thinkpad.rath.org>
 <A26C2D65-54DD-4CB3-BA15-96F36BCB959A@gmail.com>
 <87y4nug1vy.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CA87C783-F545-4137-9A4E-4432C74B98ED@yahoo.com>
Message-ID: <87vbiyfhdq.fsf@uwakimon.sk.tsukuba.ac.jp>

Andrew Barnert writes:
 > On Feb 18, 2015, at 16:50, "Stephen J. Turnbull" <stephen at xemacs.org> wrote:
 > 
 > > Emacs is a paleolithic design which has recently had GUI features
 > > grafted on to it,
 > 
 > Not exactly recently; it goes back to version 19--as in 1992-ish
 > for Lucid and Epoch, 1994 for GNU. In fact, many people think the
 > reason it's GUI features are so strange is that they were grafted

Right.  The paleolithic *design* didn't change, so Emacs is stuck with
a neolithic GUI.  XEmacs is a little bit better on both grounds, but
not much.

 > they're stuck with a weird paradigm that goes back to the days
 > before the GUI wars.

But we don't consider that we're "stuck"; we *like* text-oriented
interfaces<wink/>  (And they're probably easier for screen readers to
express effectively.)

 > An awful lot of people use vim from the console.

Of course they do, that's not the point.  The point is that because
VIM is *self-contained*, it won't have access to the level of
internals of the IDE (it's the IDE that's central here) that Emacspeak
has with CEDET and the Emacs redisplay (that latter is probably the
deciding factor).

 > Another is that its command sequences are designed to be typed
 > without looking at the screen or moving your hands from normal
 > touch-type position.

Now there's a biggie for this use-case!  Emacs does have an answer,
though: VIPER (vi keystroke bindings).

Whether all that adds up to enough potential to make Emacs use
palatable to someone who isn't already an Emacs user is the question I
guess.


From wes.turner at gmail.com  Thu Feb 19 10:20:14 2015
From: wes.turner at gmail.com (Wes Turner)
Date: Thu, 19 Feb 2015 03:20:14 -0600
Subject: [Python-ideas] Accessible tools
In-Reply-To: <CACfEFw9GvB5bV09FVyazKtO=LMo3ZEMotf8gDkSMW-uB0PWDiw@mail.gmail.com>
References: <D24E9030-1292-4EFB-8D0C-D410137A2956@gmail.com>
 <87wq3fi2sc.fsf@thinkpad.rath.org>
 <A26C2D65-54DD-4CB3-BA15-96F36BCB959A@gmail.com>
 <mc2puh$bur$1@ger.gmane.org>
 <CACfEFw93hquE1=sgQKc5Pyk_jfCqjEa=w8M5nN9qjNp6cWMSHA@mail.gmail.com>
 <CACfEFw9GvB5bV09FVyazKtO=LMo3ZEMotf8gDkSMW-uB0PWDiw@mail.gmail.com>
Message-ID: <CACfEFw8-hS9N2Y1QEhy5i35Kr4vEvuZ_NA-RRJzHkbdB-JEcCw@mail.gmail.com>

From http://doc.qt.io/qt-5/accessible.html:

> [...] Semantic information about user interface elements, such as buttons
and scroll bars, is exposed to the assistive technologies. Qt supports
Microsoft Active Accessibility (MSAA) and IAccessible2 on Windows, OS X
Accessibility on OS X, and AT-SPI via DBus on Unix/X11. The platform
specific technologies are abstracted by Qt, so that applications do not
need any platform specific changes to work with the different native APIs.
Qt tries to make adding accessibility support to your application as easy
as possible, only a few changes from your side may be required to allow
even more users to enjoy it. [...]
On Feb 18, 2015 1:56 PM, "Wes Turner" <wes.turner at gmail.com> wrote:

A few (possibly more accessible) alternatives to IDLE:

* IPython in a terminal
* IPython qtconsole (Qt toolkit)
* IPython Notebook may / should have WAI ARIA role tags (which are really
easy to add to most templates)
* Spyder IDE (Qt toolkit) embeds an IPython console with a Run button and
lightweight code completion: https://code.google.com/p/spyderlib/
* I have no idea whether the python-mode Vim plugin is accessible:
https://github.com/klen/python-mode

* Here's my dot vim config: https://github.com/westurner/dotvim

It could be helpful to get a Wiki page together for accessibility: tools,
features, gaps, best practices, etc.
On Feb 18, 2015 1:43 PM, "Terry Reedy" <tjreedy at udel.edu> wrote:

> On 2/18/2015 12:30 PM, Bryan Duarte wrote:
>
>  Here are the tools I have tried already  ... Idle ...
>> it would be nice to have something like Idle made accessible.
>>
>
> Idle uses tkinter, the interface to the tcl tk gui system.  I have read
> that tk is not designed for accessibility.  There is nothing we can do
> about that.  If our idlelib code makes matters worse, say with tooltips, we
> could try to improve it.  But I just do not know if there is anything we
> could do.
>
> --
> Terry Jan Reedy
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150219/5c74983f/attachment.html>

From steve at pearwood.info  Thu Feb 19 11:56:05 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Thu, 19 Feb 2015 21:56:05 +1100
Subject: [Python-ideas] introspectable assertions
In-Reply-To: <60d14266-4fdc-462a-af61-e7edcb5469e0@ronnypfannschmidt.de>
References: <60d14266-4fdc-462a-af61-e7edcb5469e0@ronnypfannschmidt.de>
Message-ID: <20150219105605.GC7655@ando.pearwood.info>

On Thu, Feb 19, 2015 at 08:16:33AM +0100, Ronny Pfannschmidt wrote:
> 
> Hi,
> 
> the idea is that if 
> 
> -> from __future__ import assertion_introspection
> 
> then 
> 
> -> assert foo.bar(baz) == abc.def(), msg
> DetailedAssertionError: {msg}
> foo -> <foo...>
> baz -> <bar ...>
> foo.bar() -> <...>
> abc -> ...
> abc.def() -> ...
> foo.bar(baz) == abc.def() -> ...
> 
> it would help introspection and testing utilities

I don't think we need a __future__ import, or to special case assert. 
Python already has hooks for customizing tracebacks, see the cgitb 
module. With it:

py> assert 23 == 24
AssertionError
Python 3.3.0rc3: /usr/local/bin/python3.3
Thu Feb 19 21:51:04 2015

A problem occurred in a Python script.  Here is the sequence of
function calls leading up to the error, in the order they occurred.

 /home/steve/<stdin> in <module>()

AssertionError:
    __cause__ = None
    __class__ = <class 'AssertionError'>
    __context__ = None
    __delattr__ = <method-wrapper '__delattr__' of AssertionError object>
    __dict__ = {}
    __dir__ = <built-in method __dir__ of AssertionError object>
    __doc__ = 'Assertion failed.'
    __eq__ = <method-wrapper '__eq__' of AssertionError object>
    __format__ = <built-in method __format__ of AssertionError object>
    __ge__ = <method-wrapper '__ge__' of AssertionError object>
    __getattribute__ = <method-wrapper '__getattribute__' of AssertionError object>
    __gt__ = <method-wrapper '__gt__' of AssertionError object>
    __hash__ = <method-wrapper '__hash__' of AssertionError object>
    __init__ = <method-wrapper '__init__' of AssertionError object>
    __le__ = <method-wrapper '__le__' of AssertionError object>
    __lt__ = <method-wrapper '__lt__' of AssertionError object>
    __ne__ = <method-wrapper '__ne__' of AssertionError object>
    __new__ = <built-in method __new__ of type object>
    __reduce__ = <built-in method __reduce__ of AssertionError object>
    __reduce_ex__ = <built-in method __reduce_ex__ of AssertionError object>
    __repr__ = <method-wrapper '__repr__' of AssertionError object>
    __setattr__ = <method-wrapper '__setattr__' of AssertionError object>
    __setstate__ = <built-in method __setstate__ of AssertionError object>
    __sizeof__ = <built-in method __sizeof__ of AssertionError object>
    __str__ = <method-wrapper '__str__' of AssertionError object>
    __subclasshook__ = <built-in method __subclasshook__ of type object>
    __suppress_context__ = False
    __traceback__ = <traceback object>
    args = ()
    with_traceback = <built-in method with_traceback of AssertionError object>

The above is a description of an error in a Python program.  Here is
the original traceback:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AssertionError



I don't know that I would actually use the cgitb module in production,
since it appears to display a lot of noise, but I think it is a good
place to start.


-- 
Steve

From ncoghlan at gmail.com  Thu Feb 19 11:58:29 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 19 Feb 2015 20:58:29 +1000
Subject: [Python-ideas] Adding ziplib
In-Reply-To: <CAGE7PNKoonfetfHXdP1xiAk8LgQ_SRY=Mo6HnD0vWqd5F8LbkA@mail.gmail.com>
References: <CAO41-mNr1=z_JT=aKjBbOZzrYV5Uc=GG_P5No5mcgNNVMVMcOw@mail.gmail.com>
 <0676A880-8750-443A-A953-932E965EDB12@yahoo.com>
 <CAO41-mMqBm3u+-Lu3RDBdxcW1R3FrPBCUeS6erRzPwMWCf8P2A@mail.gmail.com>
 <20150217042048.GZ24753@tonks>
 <CAO41-mNsxX3woGW4PHrBHjCkp2dO4E-6n-0Kyjt410Y_NZ=K1A@mail.gmail.com>
 <CAGE7PNKoonfetfHXdP1xiAk8LgQ_SRY=Mo6HnD0vWqd5F8LbkA@mail.gmail.com>
Message-ID: <CADiSq7dXk2RikyHDuvxdEcO-3Z0tDkoMwMnzOUeXGLJMYun=jQ@mail.gmail.com>

On 18 February 2015 at 07:17, Gregory P. Smith <greg at krypto.org> wrote:
> coming up with common API for these with the key features needed by all is
> interesting, doubly so for someone pedantic enough to get an implementation
> of each correct, but it should be done as a third party library and should
> ideally not use the stdlib zip or tar support at all. Do such libraries
> exist for other languages? Any C++ that could be reused? Is there such a
> thing as high quality code for this kind of task?

It's also worth remembering there's an existing higher level
procedural API in shutil for the cases where all you need is archive
creation and unpacking:
https://docs.python.org/3/library/shutil.html#archiving-operations

It doesn't help if you need to access or manipulate the internals of
an existing archive, though.
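For the archive-creation and unpacking cases that API covers, usage is a
one-liner each way (a small illustration; the file names and temp
directories here are invented for the example):

```python
import os
import pathlib
import shutil
import tempfile

# Set up a throwaway source directory with one file in it.
src = tempfile.mkdtemp()
pathlib.Path(src, "hello.txt").write_text("hi")

# make_archive takes a base name *without* extension plus a format,
# and returns the full path of the archive it wrote.
archive = shutil.make_archive(
    os.path.join(tempfile.mkdtemp(), "backup"), "gztar", root_dir=src)

# unpack_archive infers the format from the filename.
dest = tempfile.mkdtemp()
shutil.unpack_archive(archive, dest)
```

As noted above, though, neither call lets you inspect or modify individual
members of an existing archive.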

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From steve at pearwood.info  Thu Feb 19 12:05:06 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Thu, 19 Feb 2015 22:05:06 +1100
Subject: [Python-ideas] hookable assertions - to support testing
	utilities and debugging
In-Reply-To: <ad715911-43e2-4953-9cd9-a1de46e7b9fc@ronnypfannschmidt.de>
References: <ad715911-43e2-4953-9cd9-a1de46e7b9fc@ronnypfannschmidt.de>
Message-ID: <20150219110505.GD7655@ando.pearwood.info>

On Thu, Feb 19, 2015 at 08:25:34AM +0100, Ronny Pfannschmidt wrote:
> 
> Hi,
> 
> the idea is to have the assert statement call a global method,
> just like import does
> 
> one could then do something like
[...]

I believe that overloading assert in this fashion is an abuse of assert. 
If we want custom "raise if not this condition" calls, I think we should 
copy unittest and define functions that do what we want, not abuse the 
assert statement.

If you want to (mis)use assert for unit and regression testing, you 
should see the nose third party module and see if that does the sort of 
thing you want.


-- 
Steve

From ncoghlan at gmail.com  Thu Feb 19 12:10:05 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 19 Feb 2015 21:10:05 +1000
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
In-Reply-To: <CAK9R32R-PGho__+pfK34nhiUh4x-qMk7dYSYAjB2vskS=izmMQ@mail.gmail.com>
References: <CAK9R32TNjkkJpxA6LC5GqU0digZv9TQZ=wuveA+CvjaxqBD50w@mail.gmail.com>
 <CA+=+wqAz6C5_QCLe2Cfc-uFi-5hWP7B-M9Q2yqi2K75UHDstcQ@mail.gmail.com>
 <CAK9R32RovaEawxzFf03Kq1f+WgC4NOvhWhxjLOwjyaSt90Nr5A@mail.gmail.com>
 <CADiSq7d0APs4c7y1pUJhdPQtmLssCdGaAfLVTe7NaZJX4960Zw@mail.gmail.com>
 <CAK9R32Ts=8YhWpyKVhJGrnD_2US0S6B+nZAEokRtz6s8ysnQcQ@mail.gmail.com>
 <CADiSq7dQAEfGiUY8wznd+p5KxBn6Fm3MXFTjDAddFvk55iUaGQ@mail.gmail.com>
 <CAK9R32R-PGho__+pfK34nhiUh4x-qMk7dYSYAjB2vskS=izmMQ@mail.gmail.com>
Message-ID: <CADiSq7f3qNJgS=C6A_-6Rt1LCC=mJQWG7w1Oj-uWCrdF=suqyA@mail.gmail.com>

On 18 Feb 2015 18:51, "Martin Teichmann" <lkb.teichmann at gmail.com> wrote:
>
> > Zope already implements __classinit__ via a metaclass, so I'm not clear
on
> > how this addition would help with a backport of the __init_class__
spelling
> > to Python 2.
>
> Well, there is just a small issue in the zope implementation. They don't
> support super(). So you cannot write:
>
>     from ExtensionClass import Base
>     class Spam(Base):
>         def __class_init__(self):
>             super().__class_init__(self)  # this line raises RuntimeError
>             # do something here
>
> The precise error is: RuntimeError: super(): empty __class__ cell
>
> My pure python implementation with __init_subclass__ support
> super() without problems.
>
> The problem is that at the point they call __class_init__ (at the end
> of ExtensionClass.__new__) the __class__ in the methods is not
> initialized yet. Interestingly, it also doesn't help to move that call
> into __init__, which is still a mystery to me; I was under the
> assumption that at the point of __init__ the class should be fully
> initialized. Interestingly, type manages to set the __class__ of the
> methods AFTER __init__ has finished. How it manages to do so
> I don't know, but probably that's special-cased in C.

Yes, populating __class__ happens inside the interpreter only after the
class is fully constructed, so even though the *class* is fully initialised
at the end of __init__, a reference hasn't been inserted into the cell yet.
(The __class__ cell doesn't actually live on the class object - it's
created implicitly by the interpreter and then referenced from the
individual methods as a closure variable for each method that looks up
"__class__" or "super")

Python 2 doesn't provide the implicitly populated __class__ cell at all
though, so you have to refer to the class object by its name - zero
argument super is a Python 3 only feature.
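The closure-cell behaviour described above can be observed directly from
Python 3 code (a small demonstration; the class and method names are
chosen for the example):

```python
class C:
    def which(self):
        # Referencing __class__ makes the compiler create an implicit
        # closure cell; the interpreter only fills it in once the class
        # object has been fully constructed.
        return __class__

assert C().which() is C
# The cell is attached to the method's closure, not stored on the class:
assert C.which.__closure__[0].cell_contents is C
```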

> > I agree a clean idiom for detecting whether you're in the base class or
not
> > might be useful, but even without specific support in the PEP you'll at
> > least be able to do that either by inspecting the MRO, or else by
assuming
> > that the class name not being bound yet means you're still in the
process of
> > creating it.
>
> That sounds very complicated to me. Could you please give me an
> example how I am supposed to do that?

The second one is fairly straightforward - either check for the class name
in the module globals() or catch the resulting NameError.
Checking the MRO would be a bit more fragile, and likely not worth fiddling
with given the ability to check for the name binding of the base class
directly.

You also have the option of just setting an initialisation flag on the
class object itself (since the initial execution in the base class will
always be the first execution).
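One way the initialisation-flag idea might look, shown here with a plain
metaclass since PEP 422 isn't available (a sketch; RegisteringMeta and the
flag name are invented for illustration):

```python
class RegisteringMeta(type):
    registry = []

    def __init__(cls, name, bases, namespace):
        super().__init__(name, bases, namespace)
        if not getattr(cls, "_base_created", False):
            # First class created with this metaclass is the base itself:
            # arm the flag on it and skip the real work.
            cls._base_created = True
        else:
            # Subclasses inherit the flag, so they reach this branch.
            RegisteringMeta.registry.append(cls)

class Base(metaclass=RegisteringMeta):
    pass

class Child(Base):
    pass
```

Because the base class is always constructed first, only subclasses end up
in the registry.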

Cheers,
Nick.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150219/39e4917a/attachment.html>

From ncoghlan at gmail.com  Thu Feb 19 12:12:23 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 19 Feb 2015 21:12:23 +1000
Subject: [Python-ideas] hookable assertions - to support testing
 utilities and debugging
In-Reply-To: <20150219110505.GD7655@ando.pearwood.info>
References: <ad715911-43e2-4953-9cd9-a1de46e7b9fc@ronnypfannschmidt.de>
 <20150219110505.GD7655@ando.pearwood.info>
Message-ID: <CADiSq7ex3w9jH3YvGDv_cexKHMYUzOPoCtAxZpP85TJHjV5k_w@mail.gmail.com>

On 19 February 2015 at 21:05, Steven D'Aprano <steve at pearwood.info> wrote:
> On Thu, Feb 19, 2015 at 08:25:34AM +0100, Ronny Pfannschmidt wrote:
>>
>> Hi,
>>
>> the idea is to have the assert statement call a global method,
>> just like import does
>>
>> one could then do something like
> [...]
>
> I believe that overloading assert in this fashion is an abuse of assert.
> If we want custom "raise if not this condition" calls, I think we should
> copy unittest and define functions that do what we want, not abuse the
> assert statement.
>
> If you want to (mis)use assert for unit and regression testing, you
> should see the nose third party module and see if that does the sort of
> thing you want.

py.test is the one to look at for this:
http://pytest.org/latest/assert.html#assert-with-the-assert-statement

It (optionally) rewrites the AST for assert statements to make them
suitable for testing purposes rather than being optional runtime
integrity checks.
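A heavily simplified sketch of the rewriting idea (this is not py.test's
actual implementation, the names are invented, and it only puts the source
of the failing expression into the message rather than introspecting each
sub-expression):

```python
import ast

class AssertRewriter(ast.NodeTransformer):
    """Turn `assert expr` into an explicit check whose failure message
    carries the source text of expr."""

    def visit_Assert(self, node):
        expr_src = ast.unparse(node.test)  # requires Python 3.9+
        replacement = ast.parse(
            f"if not ({expr_src}): raise AssertionError({expr_src!r})"
        ).body[0]
        return ast.copy_location(replacement, node)

def run_with_rewritten_asserts(source):
    tree = AssertRewriter().visit(ast.parse(source))
    ast.fix_missing_locations(tree)
    namespace = {}
    exec(compile(tree, "<rewritten>", "exec"), namespace)
    return namespace
```

py.test goes much further, rewriting each assert into code that records
intermediate values, but the transformation hook is the same shape.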

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From opensource at ronnypfannschmidt.de  Thu Feb 19 12:16:31 2015
From: opensource at ronnypfannschmidt.de (Ronny Pfannschmidt)
Date: Thu, 19 Feb 2015 12:16:31 +0100
Subject: [Python-ideas] hookable assertions - to support testing
	utilities and debugging
In-Reply-To: <CADiSq7ex3w9jH3YvGDv_cexKHMYUzOPoCtAxZpP85TJHjV5k_w@mail.gmail.com>
References: <ad715911-43e2-4953-9cd9-a1de46e7b9fc@ronnypfannschmidt.de>
 <20150219110505.GD7655@ando.pearwood.info>
 <CADiSq7ex3w9jH3YvGDv_cexKHMYUzOPoCtAxZpP85TJHjV5k_w@mail.gmail.com>
Message-ID: <b178c60f-a4c7-48fc-a707-32b7d51ba0a4@ronnypfannschmidt.de>


I'm aware; my intent is to standardize the mechanism and push it into the
language.

I'm already involved in the py.test development.

On Thursday, February 19, 2015 12:12:23 PM CEST, Nick Coghlan wrote:
> On 19 February 2015 at 21:05, Steven D'Aprano <steve at pearwood.info> wrote:
>> On Thu, Feb 19, 2015 at 08:25:34AM +0100, Ronny Pfannschmidt wrote: ...
>
> py.test is the one to look at for this:
> http://pytest.org/latest/assert.html#assert-with-the-assert-statement
>
> It (optionally) rewrites the AST for assert statements to make them
> suitable for testing purposes rather than being optional runtime
> integrity checks.
>
> Cheers,
> Nick.
>

From ncoghlan at gmail.com  Thu Feb 19 12:32:26 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 19 Feb 2015 21:32:26 +1000
Subject: [Python-ideas] hookable assertions - to support testing
 utilities and debugging
In-Reply-To: <b178c60f-a4c7-48fc-a707-32b7d51ba0a4@ronnypfannschmidt.de>
References: <ad715911-43e2-4953-9cd9-a1de46e7b9fc@ronnypfannschmidt.de>
 <20150219110505.GD7655@ando.pearwood.info>
 <CADiSq7ex3w9jH3YvGDv_cexKHMYUzOPoCtAxZpP85TJHjV5k_w@mail.gmail.com>
 <b178c60f-a4c7-48fc-a707-32b7d51ba0a4@ronnypfannschmidt.de>
Message-ID: <CADiSq7fNx8x+NmBoDSNQwk4=vchbpDXFGW-BO0QD0mjpBGvNyw@mail.gmail.com>

On 19 February 2015 at 21:16, Ronny Pfannschmidt
<opensource at ronnypfannschmidt.de> wrote:
>
> Im aware, my intent is to standardize the mechanism and push it into the
> language,

Unfortunately, you're faced with a lot of C programmers firmly of the
opinion that if you care about whether or not a piece of code gets
executed, you don't put it in an assert statement. It's a giant
flashing red sign that says "This code is just checking something that
should always be true, you don't actually need to read it in order to
understand what this code is doing, but the assumption it documents
may be helpful if something is puzzling you".

So from my perspective (being one of the aforementioned C
programmers), I see this question as not so much about the assert
statement per se, but rather a more philosophical question:

* is testing code categorically different from production code?
* if yes, should assert statements be handled differently in testing code?
* if yes, how should that difference be communicated to readers and to
the interpreter?

(FWIW, I'm personally inclined to answer the first two questions with
"yes", but haven't got any ideas as far as the third one goes)

Making the behaviour of assert statements configurable attempts to
duck those questions instead of answering them, so I don't see it as a
particularly useful approach - it makes the language more complex
without providing additional clarity, rather than figuring out how to
encode a genuinely important categorical distinction between test code
and production code into the design of the language.

Regards,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From lkb.teichmann at gmail.com  Thu Feb 19 13:28:09 2015
From: lkb.teichmann at gmail.com (Martin Teichmann)
Date: Thu, 19 Feb 2015 13:28:09 +0100
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
In-Reply-To: <CADiSq7f3qNJgS=C6A_-6Rt1LCC=mJQWG7w1Oj-uWCrdF=suqyA@mail.gmail.com>
References: <CAK9R32TNjkkJpxA6LC5GqU0digZv9TQZ=wuveA+CvjaxqBD50w@mail.gmail.com>
 <CA+=+wqAz6C5_QCLe2Cfc-uFi-5hWP7B-M9Q2yqi2K75UHDstcQ@mail.gmail.com>
 <CAK9R32RovaEawxzFf03Kq1f+WgC4NOvhWhxjLOwjyaSt90Nr5A@mail.gmail.com>
 <CADiSq7d0APs4c7y1pUJhdPQtmLssCdGaAfLVTe7NaZJX4960Zw@mail.gmail.com>
 <CAK9R32Ts=8YhWpyKVhJGrnD_2US0S6B+nZAEokRtz6s8ysnQcQ@mail.gmail.com>
 <CADiSq7dQAEfGiUY8wznd+p5KxBn6Fm3MXFTjDAddFvk55iUaGQ@mail.gmail.com>
 <CAK9R32R-PGho__+pfK34nhiUh4x-qMk7dYSYAjB2vskS=izmMQ@mail.gmail.com>
 <CADiSq7f3qNJgS=C6A_-6Rt1LCC=mJQWG7w1Oj-uWCrdF=suqyA@mail.gmail.com>
Message-ID: <CAK9R32TWoXpK0N7+q=8hif3v_bqo5+nsxO5XTURqNX38itE0og@mail.gmail.com>

> Yes, populating __class__ happens inside the interpreter only after the
> class is fully constructed, so even though the *class* is fully initialised
> at the end of __init__, a reference hasn't been inserted into the cell yet.
> (The __class__ cell doesn't actually live on the class object - it's created
> implicitly by the interpreter and then referenced from the individual
> methods as a closure variable for each method that looks up "__class__" or
> "super")
>
> Python 2 doesn't provide the implicitly populated __class___ cell at all
> though, so you have to refer to the class object by its name - zero argument
> super is a Python 3 only feature.

In short: PEP 422 in the current form is not back-portable.
With my modifications it is portable all the way down to python 2.7.

PEP 422 being back-portable => people will immediately start using it.
PEP 422 not back-portable => it won't be used for several years to come.

Greetings

Martin

From solipsis at pitrou.net  Thu Feb 19 13:54:06 2015
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 19 Feb 2015 13:54:06 +0100
Subject: [Python-ideas] hookable assertions - to support testing
 utilities and debugging
References: <ad715911-43e2-4953-9cd9-a1de46e7b9fc@ronnypfannschmidt.de>
Message-ID: <20150219135406.3b965e07@fsol>

On Thu, 19 Feb 2015 08:25:34 +0100
Ronny Pfannschmidt
<opensource at ronnypfannschmidt.de> wrote:
> 
> Hi,
> 
> the idea is to have the assert statement call a global method,
> just like import does

How would that work? Does it give the AST to the global method?
Something else?

Regards

Antoine.



From donald at stufft.io  Thu Feb 19 13:55:48 2015
From: donald at stufft.io (Donald Stufft)
Date: Thu, 19 Feb 2015 07:55:48 -0500
Subject: [Python-ideas] hookable assertions - to support testing
	utilities and debugging
In-Reply-To: <CADiSq7fNx8x+NmBoDSNQwk4=vchbpDXFGW-BO0QD0mjpBGvNyw@mail.gmail.com>
References: <ad715911-43e2-4953-9cd9-a1de46e7b9fc@ronnypfannschmidt.de>
 <20150219110505.GD7655@ando.pearwood.info>
 <CADiSq7ex3w9jH3YvGDv_cexKHMYUzOPoCtAxZpP85TJHjV5k_w@mail.gmail.com>
 <b178c60f-a4c7-48fc-a707-32b7d51ba0a4@ronnypfannschmidt.de>
 <CADiSq7fNx8x+NmBoDSNQwk4=vchbpDXFGW-BO0QD0mjpBGvNyw@mail.gmail.com>
Message-ID: <7A8F6086-4703-484B-BFE8-6553665DF98C@stufft.io>


> On Feb 19, 2015, at 6:32 AM, Nick Coghlan <ncoghlan at gmail.com> wrote:
> 
> On 19 February 2015 at 21:16, Ronny Pfannschmidt
> <opensource at ronnypfannschmidt.de> wrote:
>> 
>> Im aware, my intent is to standardize the mechanism and push it into the
>> language,
> 
> Unfortunately, you're faced with a lot of C programmers firmly of the
> opinion that if you care about whether or not a piece of code gets
> executed, you don't put it it in an assert statement. It's a giant
> flashing red sign that says "This code is just checking something that
> should always be true, you don't actually need to read it in order to
> understand what this code is doing, but the assumption it documents
> may be helpful if something is puzzling you".
> 
> So from my perspective (being one of the aforementioned C
> programmers), I see this question as not so much about the assert
> statement per se, but rather a more philosophical question:
> 
> * is testing code categorically different from production code?
> * if yes, should assert statements be handled differently in testing code?
> * if yes, how should that difference be communicated to readers and to
> the interpreter?
> 
> (FWIW, I'm personally inclined to answer the first two questions with
> "yes", but haven't got any ideas as far as the third one goes)
> 
> Making the behaviour of assert statements configurable attempts to
> duck those questions instead of answering them, so I don't see at as a
> particularly useful approach - it makes the language more complex
> without providing additional clarity, rather than figuring out how to
> encode a genuinely important categorical distinction between test code
> and production code into the design of the language.

Speaking as a not C programmer, I think the answers to the first two are
absolutely yes, and the third one I dunno, maybe some sort of top level
import like from __testing__ import real_assert or something.

I absolutely hate the unittest style of assertFoo, it's a whole bunch of
methods that I can never remember the names of and I have to go look them
up. I think using the assert statement is completely logical for test code
and I believe you can see this by the fact all of the methods to do checks
in unittest are named "assert something". I think it introduces extra cognitive
burden for new users to have to use a whole bunch of additional methods to
do assertions in tests TBH. The way py.test works composes nicely with the
same kinds of expressions that you're going to need to learn to write any
meaningful Python, but the assertFoo methods are special cases that don't.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 801 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150219/6f118080/attachment.sig>

From opensource at ronnypfannschmidt.de  Thu Feb 19 14:41:21 2015
From: opensource at ronnypfannschmidt.de (Ronny Pfannschmidt)
Date: Thu, 19 Feb 2015 14:41:21 +0100
Subject: [Python-ideas] hookable assertions - to support testing
	utilities and debugging
In-Reply-To: <20150219135406.3b965e07@fsol>
References: <ad715911-43e2-4953-9cd9-a1de46e7b9fc@ronnypfannschmidt.de>
 <20150219135406.3b965e07@fsol>
Message-ID: <4bbcdf99-58bc-4a99-9658-7ff991ef337a@ronnypfannschmidt.de>



The idea is to call the method with the AssertionError instance.
So instead of directly raising the error,

Python would call __raise_assertion__(the_error).

This composes nicely with the proposal for introspectable assertions.
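Today the hook can only be approximated with a helper function in place of
the assert statement itself (a sketch of what is being proposed;
__raise_assertion__ is not real Python semantics, and all names here are
illustrative):

```python
_assertion_hooks = []

def add_assertion_hook(hook):
    _assertion_hooks.append(hook)

def __raise_assertion__(error):
    # Under the proposal, the interpreter would call this instead of
    # raising the AssertionError directly.
    for hook in _assertion_hooks:
        hook(error)
    raise error

def check(condition, message=""):
    # Stand-in for `assert condition, message`, routed through the hook.
    if not condition:
        __raise_assertion__(AssertionError(message))
```

A testing utility could then register a hook to capture or enrich the
error before it propagates.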

On Thursday, February 19, 2015 1:54:06 PM CEST, Antoine Pitrou wrote:
> On Thu, 19 Feb 2015 08:25:34 +0100
> Ronny Pfannschmidt
> <opensource at ronnypfannschmidt.de> wrote:
>> 
>> Hi,
>> 
>> the idea is to have the assert statement call a global method,
>> just like import does
>
> How would that work? Does it give the AST to the global method?
> Something else?
>
> Regards
>
> Antoine.
>
>

From skip.montanaro at gmail.com  Thu Feb 19 15:21:29 2015
From: skip.montanaro at gmail.com (Skip Montanaro)
Date: Thu, 19 Feb 2015 08:21:29 -0600
Subject: [Python-ideas] introspectable assertions
In-Reply-To: <20150219105605.GC7655@ando.pearwood.info>
References: <60d14266-4fdc-462a-af61-e7edcb5469e0@ronnypfannschmidt.de>
 <20150219105605.GC7655@ando.pearwood.info>
Message-ID: <CANc-5UwdCen7D4L0sxQe3t-sQEbJ5aT79F8EMNT6WoHMH8minQ@mail.gmail.com>

On Thu, Feb 19, 2015 at 4:56 AM, Steven D'Aprano <steve at pearwood.info> wrote:
> Python already has hooks for customizing tracebacks, see the cgitb
> module.

Thanks for that reminder. I had forgotten completely about it. In the
environment in which I work, we have an exception handler of last
resort which logs the traceback to our log file, then exits. This
would probably be a good place for me to log the return value of
cgitb.text(sys.exc_info()) instead. Sure, it will often dump a fair
amount of noise to the log file, but every once in a while it will
probably include a clue which is currently lost.

Skip

From bryan0731 at gmail.com  Thu Feb 19 16:52:10 2015
From: bryan0731 at gmail.com (Bryan Duarte)
Date: Thu, 19 Feb 2015 08:52:10 -0700
Subject: [Python-ideas] Accessible tools
In-Reply-To: <CACfEFw8-hS9N2Y1QEhy5i35Kr4vEvuZ_NA-RRJzHkbdB-JEcCw@mail.gmail.com>
References: <D24E9030-1292-4EFB-8D0C-D410137A2956@gmail.com>
 <87wq3fi2sc.fsf@thinkpad.rath.org>
 <A26C2D65-54DD-4CB3-BA15-96F36BCB959A@gmail.com> <mc2puh$bur$1@ger.gmane.org>
 <CACfEFw93hquE1=sgQKc5Pyk_jfCqjEa=w8M5nN9qjNp6cWMSHA@mail.gmail.com>
 <CACfEFw9GvB5bV09FVyazKtO=LMo3ZEMotf8gDkSMW-uB0PWDiw@mail.gmail.com>
 <CACfEFw8-hS9N2Y1QEhy5i35Kr4vEvuZ_NA-RRJzHkbdB-JEcCw@mail.gmail.com>
Message-ID: <A0402530-5359-4195-BF3E-66BE5FDA69E8@gmail.com>

Honestly, I live in VIM in my terminal. I use VIM for everything and even have integrated a VIM plugin into my Eclipse IDE. The commands are easy, and customizable. VI and VIM were built off of "ed", which was the first line editor available on the Unix OS. For this reason it has great integration with the terminal and system utilities. I can write code, compile code, call in external files, save off versions, and execute code without ever leaving my terminal window. The question is not really whether I can read, write, or access the code; it is mostly about getting the assistance of the IDE when it comes to "auto complete" and whatever the Python equivalent of "intellisense" is. I know there is the interpreter for quick references and tests, but why should I have to use two programs to do what my sighted peers do in one? This is not a direct question; it is just the problem I am exploring. Is there a tool out there that can do this for someone who develops with the assistance of a screen reader, or is it something still to be developed?

A professor and I were discussing the idea of creating a completely text based IDE for blind developers. Every block of code could be collapsed and expanded with high level information about each block being provided. Every module would be tabular in nature meaning the developer could quickly and easily scan through every block of code as if it were a line of code then hear what is contained in the block. Tools and features would obviously be audible in nature with verbosity customizable. 
> On Feb 19, 2015, at 2:20 AM, Wes Turner <wes.turner at gmail.com> wrote:
> 
> From http://doc.qt.io/qt-5/accessible.html <http://doc.qt.io/qt-5/accessible.html>:
> 
> > [...] Semantic information about user interface elements, such as buttons and scroll bars, is exposed to the assistive technologies. Qt supports Microsoft Active Accessibility (MSAA) and IAccessible2 on Windows, OS X Accessibility on OS X, and AT-SPI via DBus on Unix/X11. The platform specific technologies are abstracted by Qt, so that applications do not need any platform specific changes to work with the different native APIs. Qt tries to make adding accessibility support to your application as easy as possible, only a few changes from your side may be required to allow even more users to enjoy it. [...]
> 
> On Feb 18, 2015 1:56 PM, "Wes Turner" <wes.turner at gmail.com <mailto:wes.turner at gmail.com>> wrote:
> A few (possibly more aceessible) alternatives to IDLE:
> 
> * IPython in a terminal
> * IPython qtconsole (Qt toolkit)
> * IPython Notebook may / should have WAI ARIA role tags (which are really easy to add to most templates)
> * Spyder IDE (Qt toolkit) embeds an IPython console with a Run button and lightweight code completion: https://code.google.com/p/spyderlib/ <https://code.google.com/p/spyderlib/>
> * I have no idea whether the python-mode Vim plugin is accessible: https://github.com/klen/python-mode <https://github.com/klen/python-mode>
> * Here's my dot vim config: https://github.com/westurner/dotvim <https://github.com/westurner/dotvim>
> It could be helpful to get a Wiki page together for accessibility: tools, features, gaps, best practices, etc.
> 
> On Feb 18, 2015 1:43 PM, "Terry Reedy" <tjreedy at udel.edu <mailto:tjreedy at udel.edu>> wrote:
> On 2/18/2015 12:30 PM, Bryan Duarte wrote:
> 
> Here are the tools I have tried already  ... Idle ...
> it would be nice to have something like Idle made accessible.
> 
> Idle uses tkinter, the interface to the tcl tk gui system.  I have read that tk is not designed for accessibility.  There is nothing we can do about that.  If our idlelib code makes matters worse, say with tooltips, we could try to improve it.  But I just do not know if there is anything we could do.
> 
> -- 
> Terry Jan Reedy
> 
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org <mailto:Python-ideas at python.org>
> https://mail.python.org/mailman/listinfo/python-ideas <https://mail.python.org/mailman/listinfo/python-ideas>
> Code of Conduct: http://python.org/psf/codeofconduct/ <http://python.org/psf/codeofconduct/>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150219/06a4a0da/attachment-0001.html>

From Nikolaus at rath.org  Thu Feb 19 17:53:22 2015
From: Nikolaus at rath.org (Nikolaus Rath)
Date: Thu, 19 Feb 2015 08:53:22 -0800
Subject: [Python-ideas] Accessible tools
In-Reply-To: <A0402530-5359-4195-BF3E-66BE5FDA69E8@gmail.com> (Bryan Duarte's
 message of "Thu, 19 Feb 2015 08:52:10 -0700")
References: <D24E9030-1292-4EFB-8D0C-D410137A2956@gmail.com>
 <87wq3fi2sc.fsf@thinkpad.rath.org>
 <A26C2D65-54DD-4CB3-BA15-96F36BCB959A@gmail.com>
 <mc2puh$bur$1@ger.gmane.org>
 <CACfEFw93hquE1=sgQKc5Pyk_jfCqjEa=w8M5nN9qjNp6cWMSHA@mail.gmail.com>
 <CACfEFw9GvB5bV09FVyazKtO=LMo3ZEMotf8gDkSMW-uB0PWDiw@mail.gmail.com>
 <CACfEFw8-hS9N2Y1QEhy5i35Kr4vEvuZ_NA-RRJzHkbdB-JEcCw@mail.gmail.com>
 <A0402530-5359-4195-BF3E-66BE5FDA69E8@gmail.com>
Message-ID: <87y4ntetbx.fsf@thinkpad.rath.org>

On Feb 19 2015, Bryan Duarte <bryan0731-Re5JQEeQqe8AvxtiuMwx3w at public.gmane.org> wrote:
> Honestly I live in VIM in my terminal. I use VIM for everything and
> even have integrated a VIM plugin to my Eclipse IDE. The commands are
> easy, and customizable. VI and VIM were built off of "ed" which was
> the first line editor available on the Unix OS. For this fact it has
> great integration with the terminal and system utilities. I can write
> code, compile code, call in external files, save off versions, execute
> code, without ever leaving my terminal window. The question is not
> really whether I can read, write, or access the code; it is mostly about
> getting the assistance of the IDE when it comes to "auto complete",
> and whatever the name is for Python for "intellisense". I know there
> is the interpreter for quick references and tests but why should I
> have to use two programs to do what my sighted peers do in one? This
> is not a direct question; this is just the problem I am exploring. Is
> there a tool out there that can do this for someone who develops
> with the assistance of a screen reader, or is it something yet to be
> developed?

Emacs with jedi and auto-complete.el offers you that.

However, you need to keep in mind that auto-complete for a dynamic
language is very difficult. Suppose you have a function defined as
'def foo(bar): ...' - how is your editor supposed to know what the 'bar'
parameter is? It therefore needs to scan your entire codebase for calls
to foo(), and then backtrack to try to deduce what *bar* might be. This
sometimes works, and sometimes it doesn't. Therefore, auto completion
for Python is typically much less useful than it is for other languages
(and you might be disappointed by it).
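
A tiny illustration of the problem (hypothetical names, just to make the point concrete):

```python
def foo(bar):
    # Nothing in this definition constrains 'bar'; an editor can only
    # scan call sites and guess what attributes it will have.
    return bar.upper()

# One call site suggests 'bar' is a str...
result_a = foo("hello")

# ...but duck typing means any object with an .upper() method works,
# so call-site deduction is only ever a heuristic.
class Shouty:
    def upper(self):
        return "SHOUT"

result_b = foo(Shouty())
```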

Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

             "Time flies like an arrow, fruit flies like a Banana."

From bryan0731 at gmail.com  Thu Feb 19 18:11:59 2015
From: bryan0731 at gmail.com (Bryan Duarte)
Date: Thu, 19 Feb 2015 10:11:59 -0700
Subject: [Python-ideas] Accessible tools
In-Reply-To: <87y4ntetbx.fsf@thinkpad.rath.org>
References: <D24E9030-1292-4EFB-8D0C-D410137A2956@gmail.com>
 <87wq3fi2sc.fsf@thinkpad.rath.org>
 <A26C2D65-54DD-4CB3-BA15-96F36BCB959A@gmail.com> <mc2puh$bur$1@ger.gmane.org>
 <CACfEFw93hquE1=sgQKc5Pyk_jfCqjEa=w8M5nN9qjNp6cWMSHA@mail.gmail.com>
 <CACfEFw9GvB5bV09FVyazKtO=LMo3ZEMotf8gDkSMW-uB0PWDiw@mail.gmail.com>
 <CACfEFw8-hS9N2Y1QEhy5i35Kr4vEvuZ_NA-RRJzHkbdB-JEcCw@mail.gmail.com>
 <A0402530-5359-4195-BF3E-66BE5FDA69E8@gmail.com>
 <87y4ntetbx.fsf@thinkpad.rath.org>
Message-ID: <22A7AFA3-E915-467D-A543-2E8D4CA50E3B@gmail.com>

Thank you Nikolaus,

I will do some reading on this. I am not an Emacs user, but I could explore this option for developing with Python. I wonder if there is something similar for VIM? I have a few scripts set up in my .vimrc which allow me to use a simple key command to execute things like pIndent and to help convert from 2.7 to 3.2 Python code. Maybe some kind of script or plugin is available for VIM as well. I will look into this. Thanks again.
> On Feb 19, 2015, at 9:53 AM, Nikolaus Rath <Nikolaus at rath.org> wrote:
> 
> Nikolaus

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150219/c97d679e/attachment.html>

From g.brandl at gmx.net  Thu Feb 19 19:49:02 2015
From: g.brandl at gmx.net (Georg Brandl)
Date: Thu, 19 Feb 2015 19:49:02 +0100
Subject: [Python-ideas] hookable assertions - to support testing
	utilities and debugging
In-Reply-To: <7A8F6086-4703-484B-BFE8-6553665DF98C@stufft.io>
References: <ad715911-43e2-4953-9cd9-a1de46e7b9fc@ronnypfannschmidt.de>
 <20150219110505.GD7655@ando.pearwood.info>
 <CADiSq7ex3w9jH3YvGDv_cexKHMYUzOPoCtAxZpP85TJHjV5k_w@mail.gmail.com>
 <b178c60f-a4c7-48fc-a707-32b7d51ba0a4@ronnypfannschmidt.de>
 <CADiSq7fNx8x+NmBoDSNQwk4=vchbpDXFGW-BO0QD0mjpBGvNyw@mail.gmail.com>
 <7A8F6086-4703-484B-BFE8-6553665DF98C@stufft.io>
Message-ID: <mc5b6u$n5c$1@ger.gmane.org>

On 02/19/2015 01:55 PM, Donald Stufft wrote:

>> So from my perspective (being one of the aforementioned C
>> programmers), I see this question as not so much about the assert
>> statement per se, but rather a more philosophical question:
>> 
>> * is testing code categorically different from production code?
>> * if yes, should assert statements be handled differently in testing code?
>> * if yes, how should that difference be communicated to readers and to
>> the interpreter?
>> 
>> (FWIW, I'm personally inclined to answer the first two questions with
>> "yes", but haven't got any ideas as far as the third one goes)
>> 
>> Making the behaviour of assert statements configurable attempts to
>> duck those questions instead of answering them, so I don't see at as a
>> particularly useful approach - it makes the language more complex
>> without providing additional clarity, rather than figuring out how to
>> encode a genuinely important categorical distinction between test code
>> and production code into the design of the language.
> 
> Speaking as a not C programmer, I think the answers to the first two are
> absolutely yes, and the third one I dunno, maybe some sort of top level
> import like from __testing__ import real_assert or something.
> 
> I absolutely hate the unittest style of assertFoo, it's a whole bunch of
> methods that I can never remember the names of and I have to go look them
> up. I think using the assert statement is completely logical for test code
> and I believe you can see this by the fact all of the methods to do checks
> in unittest are named "assert something". I think it introduces extra cognitive
> burden for new users to have to use a whole bunch of additional methods to
> do assertions in tests TBH. The way py.test works composes nicely with the
> same kinds of expressions that you're going to need to learn to write any
> meaningful Python, but the assertFoo methods are special cases that don't.

+1

Georg


From antony.lee at berkeley.edu  Thu Feb 19 20:39:36 2015
From: antony.lee at berkeley.edu (Antony Lee)
Date: Thu, 19 Feb 2015 11:39:36 -0800
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
In-Reply-To: <CAK9R32TWoXpK0N7+q=8hif3v_bqo5+nsxO5XTURqNX38itE0og@mail.gmail.com>
References: <CAK9R32TNjkkJpxA6LC5GqU0digZv9TQZ=wuveA+CvjaxqBD50w@mail.gmail.com>
 <CA+=+wqAz6C5_QCLe2Cfc-uFi-5hWP7B-M9Q2yqi2K75UHDstcQ@mail.gmail.com>
 <CAK9R32RovaEawxzFf03Kq1f+WgC4NOvhWhxjLOwjyaSt90Nr5A@mail.gmail.com>
 <CADiSq7d0APs4c7y1pUJhdPQtmLssCdGaAfLVTe7NaZJX4960Zw@mail.gmail.com>
 <CAK9R32Ts=8YhWpyKVhJGrnD_2US0S6B+nZAEokRtz6s8ysnQcQ@mail.gmail.com>
 <CADiSq7dQAEfGiUY8wznd+p5KxBn6Fm3MXFTjDAddFvk55iUaGQ@mail.gmail.com>
 <CAK9R32R-PGho__+pfK34nhiUh4x-qMk7dYSYAjB2vskS=izmMQ@mail.gmail.com>
 <CADiSq7f3qNJgS=C6A_-6Rt1LCC=mJQWG7w1Oj-uWCrdF=suqyA@mail.gmail.com>
 <CAK9R32TWoXpK0N7+q=8hif3v_bqo5+nsxO5XTURqNX38itE0og@mail.gmail.com>
Message-ID: <CAGRr6BHZAheFro3sP3EPsNk1+NP8=2DOpyTAepiH5ZQGb96LwQ@mail.gmail.com>

In practice, this can be worked around using a method on the metaclass,
defined as follows (basically following Nick's idea):

    def _super_init(cls, name):
        # Note: requires `import sys` at module level.
        # Look up the class currently bound to `name` in the caller's
        # globals (falling back to cls), then invoke __init_class__ on
        # the next class in the MRO, if any class there defines it.
        super_cls = super(sys._getframe(1).f_globals.get(name, cls), cls)
        try:
            init_class = super_cls.__init_class__
        except AttributeError:
            pass
        else:
            init_class()

Now call cls._super_init() whenever you wanted to call super().__init__().

Antony

2015-02-19 4:28 GMT-08:00 Martin Teichmann <lkb.teichmann at gmail.com>:

> > Yes, populating __class__ happens inside the interpreter only after the
> > class is fully constructed, so even though the *class* is fully
> initialised
> > at the end of __init__, a reference hasn't been inserted into the cell
> yet.
> > (The __class__ cell doesn't actually live on the class object - it's
> created
> > implicitly by the interpreter and then referenced from the individual
> > methods as a closure variable for each method that looks up "__class__"
> or
> > "super")
> >
> > Python 2 doesn't provide the implicitly populated __class___ cell at all
> > though, so you have to refer to the class object by its name - zero
> argument
> > super is a Python 3 only feature.
>
> In short: PEP 422 in the current form is not back-portable.
> With my modifications it is portable all the way down to python 2.7.
>
> PEP 422 being back-portable => people will immediately start using it.
> PEP 422 not back-portable => it won't be used for several years to come.
>
> Greetings
>
> Martin
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150219/6d62a52a/attachment.html>

From barry at python.org  Thu Feb 19 22:04:54 2015
From: barry at python.org (Barry Warsaw)
Date: Thu, 19 Feb 2015 16:04:54 -0500
Subject: [Python-ideas] hookable assertions - to support testing
 utilities and debugging
References: <ad715911-43e2-4953-9cd9-a1de46e7b9fc@ronnypfannschmidt.de>
 <20150219110505.GD7655@ando.pearwood.info>
 <CADiSq7ex3w9jH3YvGDv_cexKHMYUzOPoCtAxZpP85TJHjV5k_w@mail.gmail.com>
 <b178c60f-a4c7-48fc-a707-32b7d51ba0a4@ronnypfannschmidt.de>
 <CADiSq7fNx8x+NmBoDSNQwk4=vchbpDXFGW-BO0QD0mjpBGvNyw@mail.gmail.com>
Message-ID: <20150219160454.0b98a9e4@anarchist.wooz.org>

On Feb 19, 2015, at 09:32 PM, Nick Coghlan wrote:

>So from my perspective (being one of the aforementioned C
>programmers), I see this question as not so much about the assert
>statement per se, but rather a more philosophical question:
>
>* is testing code categorically different from production code?
>* if yes, should assert statements be handled differently in testing code?
>* if yes, how should that difference be communicated to readers and to
>the interpreter?
>
>(FWIW, I'm personally inclined to answer the first two questions with
>"yes", but haven't got any ideas as far as the third one goes)

I'm of the same mind.

asserts in the code are almost like documentation.  When I write them I'm
usually thinking "this can't possibly happen, but maybe I forgot something".
That's very different from an assertion in test code which says "let's make
sure the outcome of this contrived call is what I expect it to be".

So to me, using the unittest.assert* methods makes that difference obvious,
though I'll admit that sometimes I'm in the same boat of having to dip into
the documentation to remember if it's assertMultilineEqual() or
assertMultiLineEqual().  I find most of the assert* methods fairly easy to
remember.
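
A minimal side-by-side of the two styles being compared (a sketch only; py.test's assertion rewriting is what makes the plain form produce rich failure messages):

```python
import unittest

def add(a, b):
    return a + b

# unittest style: named assert* methods on a TestCase
class TestAdd(unittest.TestCase):
    def test_add(self):
        self.assertEqual(add(2, 2), 4)

# py.test style: a plain assert statement in a plain function
def test_add_plain():
    assert add(2, 2) == 4

# Running the unittest-style test programmatically:
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
outcome = unittest.TextTestRunner(verbosity=0).run(suite)
test_add_plain()  # raises AssertionError on failure
```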

Cheers,
-Barry

-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: OpenPGP digital signature
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150219/b2be1162/attachment-0001.sig>

From ncoghlan at gmail.com  Thu Feb 19 22:35:11 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 20 Feb 2015 07:35:11 +1000
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
In-Reply-To: <CAK9R32S6eM3zfAgSp1hnZLLgJ8aFcY8DPcfWWxX6f8jfB63gsw@mail.gmail.com>
References: <CAK9R32TNjkkJpxA6LC5GqU0digZv9TQZ=wuveA+CvjaxqBD50w@mail.gmail.com>
 <CA+=+wqAz6C5_QCLe2Cfc-uFi-5hWP7B-M9Q2yqi2K75UHDstcQ@mail.gmail.com>
 <CAK9R32RovaEawxzFf03Kq1f+WgC4NOvhWhxjLOwjyaSt90Nr5A@mail.gmail.com>
 <CADiSq7d0APs4c7y1pUJhdPQtmLssCdGaAfLVTe7NaZJX4960Zw@mail.gmail.com>
 <CAK9R32Ts=8YhWpyKVhJGrnD_2US0S6B+nZAEokRtz6s8ysnQcQ@mail.gmail.com>
 <CADiSq7dQAEfGiUY8wznd+p5KxBn6Fm3MXFTjDAddFvk55iUaGQ@mail.gmail.com>
 <CAK9R32R-PGho__+pfK34nhiUh4x-qMk7dYSYAjB2vskS=izmMQ@mail.gmail.com>
 <CADiSq7f3qNJgS=C6A_-6Rt1LCC=mJQWG7w1Oj-uWCrdF=suqyA@mail.gmail.com>
 <CAK9R32S6eM3zfAgSp1hnZLLgJ8aFcY8DPcfWWxX6f8jfB63gsw@mail.gmail.com>
Message-ID: <CADiSq7fCU0xN68aFA+2FEkprx1wE6vy7uUz9o85h+nq3udp-jg@mail.gmail.com>

On 19 February 2015 at 22:22, Martin Teichmann
<martin.teichmann at gmail.com> wrote:
>> Yes, populating __class__ happens inside the interpreter only after the
>> class is fully constructed, so even though the *class* is fully initialised
>> at the end of __init__, a reference hasn't been inserted into the cell yet.
>> (The __class__ cell doesn't actually live on the class object - it's created
>> implicitly by the interpreter and then referenced from the individual
>> methods as a closure variable for each method that looks up "__class__" or
>> "super")
>>
>> Python 2 doesn't provide the implicitly populated __class___ cell at all
>> though, so you have to refer to the class object by its name - zero argument
>> super is a Python 3 only feature.
>
> In short: PEP 422 in the current form is not back-portable.
> With my modifications it is portable all the way down to python 2.7.
>
> PEP 422 being back-portable => people will immediately start using it.
> PEP 422 not back-portable => it won't be used for several years to come.

You haven't really explained why you think it's not backportable.
You've suggested a backport may require some workarounds in order to
handle cooperative multiple inheritance, but that's not even close to
being the same thing as not being able to backport it at all.

That said, if you'd like to write a competing proposal for the
community's consideration, please go ahead (details on the process are
in PEP 1).

I'm certainly open to alternative ideas in this space, and PEP 422 is
deferred specifically because I don't currently have time to explore
those ideas further myself.

Regards,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From wes.turner at gmail.com  Thu Feb 19 22:54:29 2015
From: wes.turner at gmail.com (Wes Turner)
Date: Thu, 19 Feb 2015 15:54:29 -0600
Subject: [Python-ideas] hookable assertions - to support testing
 utilities and debugging
In-Reply-To: <CADiSq7fNx8x+NmBoDSNQwk4=vchbpDXFGW-BO0QD0mjpBGvNyw@mail.gmail.com>
References: <ad715911-43e2-4953-9cd9-a1de46e7b9fc@ronnypfannschmidt.de>
 <20150219110505.GD7655@ando.pearwood.info>
 <CADiSq7ex3w9jH3YvGDv_cexKHMYUzOPoCtAxZpP85TJHjV5k_w@mail.gmail.com>
 <b178c60f-a4c7-48fc-a707-32b7d51ba0a4@ronnypfannschmidt.de>
 <CADiSq7fNx8x+NmBoDSNQwk4=vchbpDXFGW-BO0QD0mjpBGvNyw@mail.gmail.com>
Message-ID: <CACfEFw-taXBp+p=deDgzOj4HHmHTfH3FbfVy9SO1nDn-Xjd4NQ@mail.gmail.com>

On Thu, Feb 19, 2015 at 5:32 AM, Nick Coghlan <ncoghlan at gmail.com> wrote:
>
>
> * is testing code categorically different from production code?
> * if yes, should assert statements be handled differently in testing code?
>

assert statements are compiled out with -O (i.e. PYTHONOPTIMIZE=1; -OO also strips docstrings)

* https://docs.python.org/3/using/cmdline.html#cmdoption-OO

* unittest's self.assert* method calls are not compiled out
* nose-decorated test cases are not compiled out
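
The difference can be checked directly by running the same snippet with and without optimization; a small demonstration using subprocess:

```python
import subprocess
import sys

code = "assert False, 'stripped under -O'\nprint('ran to completion')"

# Without -O the assert statement fires and the process fails...
plain = subprocess.run([sys.executable, "-c", code],
                       capture_output=True, text=True)

# ...with -O the compiler drops the assert entirely, so the print runs.
optimized = subprocess.run([sys.executable, "-O", "-c", code],
                           capture_output=True, text=True)

print(plain.returncode)       # non-zero: AssertionError raised
print(optimized.returncode)   # 0: assert was compiled out
```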



> * if yes, how should that difference be communicated to readers and to
> the interpreter?
>

It's such a frequent pattern. ([declarative] "literate programming" (with
objects))

* Is there a better tool than tests for teaching?

* https://github.com/taavi/ipython_nose adds an IPython magic command
(%nose)
  that collects test functions
  and test output w/ IPYNB JSON (-> HTML, PDF, XML) output
  (e.g. for use with https://pypi.python.org/pypi/runipy)

* Justifying multiple failures and assertions per test:
  "Delayed assert / multiple failures per test"
http://pythontesting.net/strategy/delayed-assert/

* (collecting / emitting / yielding (s, p, o) tuples from nested dicts with
inferred #URI/paths)

* The JSON-LD test suite uses EARL, an RDF schema
(instead of xUnit XML)

  * http://json-ld.org/test-suite/
  * http://json-ld.org/test-suite/reports/
  * http://www.w3.org/TR/EARL10-Schema/
    "Evaluation and Report Language (EARL) 1.0 Schema" (2011)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150219/5c2d0d7e/attachment.html>

From stephen at xemacs.org  Fri Feb 20 01:53:09 2015
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Fri, 20 Feb 2015 09:53:09 +0900
Subject: [Python-ideas] Accessible tools
In-Reply-To: <A0402530-5359-4195-BF3E-66BE5FDA69E8@gmail.com>
References: <D24E9030-1292-4EFB-8D0C-D410137A2956@gmail.com>
 <87wq3fi2sc.fsf@thinkpad.rath.org>
 <A26C2D65-54DD-4CB3-BA15-96F36BCB959A@gmail.com>
 <mc2puh$bur$1@ger.gmane.org>
 <CACfEFw93hquE1=sgQKc5Pyk_jfCqjEa=w8M5nN9qjNp6cWMSHA@mail.gmail.com>
 <CACfEFw9GvB5bV09FVyazKtO=LMo3ZEMotf8gDkSMW-uB0PWDiw@mail.gmail.com>
 <CACfEFw8-hS9N2Y1QEhy5i35Kr4vEvuZ_NA-RRJzHkbdB-JEcCw@mail.gmail.com>
 <A0402530-5359-4195-BF3E-66BE5FDA69E8@gmail.com>
Message-ID: <87oaopfloq.fsf@uwakimon.sk.tsukuba.ac.jp>

Executive summary:

Emacs's underlying design is very close to what you're talking about
already, and many (most?) of the features are already in place.

Bryan Duarte writes:

 > A professor and I were discussing the idea of creating a completely
 > text based IDE for blind developers.

This "text-based IDE" has existed since the early 1980s.  Its name is
"Emacs."  T. V. Raman, an ACM Fellow, former IBM and current Google
senior researcher, is the author of Emacspeak, which is a an audible
interface to Emacs.  Emacsspeak goes back to the late 90s at the
latest, and Raman pops up on the Emacs devel lists every so often to
suggest or contribute improvements to Emacs itself that focus on the
UX for blind users (as he is, himself).

Emacs has most of the features you're talking about in some form
already.  I admit it is questionable whether the quality is up to the
standards of Eclipse or Xcode.  For example, Emacs's completion
feature is currently not based on the full AST and so doesn't come up
to "intellisense" standards.  However, Emacs developers are working on
that, and Emacs already does have enough syntactic information about
programs to provide "not-quite-intelli-sense".

Emacs has a generic text-property feature, which is used by its native
IDE, called "CEDET", to mark program block structure.  I don't know if
CEDET knows about Python for your purposes, but it is based on a
generic parser called "semantic", so it wouldn't be hard to teach it.
Emacs also has a separate Python-mode which may already have expand/
collapse functionality (I don't use it in any mode).  Emacspeak
provides a high quality audible "screen" from what I've heard from
Raman on mailing lists.  If you have a favorite editor that isn't
Emacs, such as vi, Emacs provides emulations of the keystrokes, often
quite faithful.  vi has several competing emulations, and there are
quite a few oddballs too.  (Recently somebody protested vigorously on
the Emacs lists against plans to remove the TPU emulation!!)

Emacs is not to everybody's taste.  But its underlying design is very
close to what you're talking about already, and most of the features
are already in place.  Please look at it.


From Nikolaus at rath.org  Fri Feb 20 04:11:14 2015
From: Nikolaus at rath.org (Nikolaus Rath)
Date: Thu, 19 Feb 2015 19:11:14 -0800
Subject: [Python-ideas] Accessible tools
In-Reply-To: <87oaopfloq.fsf@uwakimon.sk.tsukuba.ac.jp> (Stephen J. Turnbull's
 message of "Fri, 20 Feb 2015 09:53:09 +0900")
References: <D24E9030-1292-4EFB-8D0C-D410137A2956@gmail.com>
 <87wq3fi2sc.fsf@thinkpad.rath.org>
 <A26C2D65-54DD-4CB3-BA15-96F36BCB959A@gmail.com>
 <mc2puh$bur$1@ger.gmane.org>
 <CACfEFw93hquE1=sgQKc5Pyk_jfCqjEa=w8M5nN9qjNp6cWMSHA@mail.gmail.com>
 <CACfEFw9GvB5bV09FVyazKtO=LMo3ZEMotf8gDkSMW-uB0PWDiw@mail.gmail.com>
 <CACfEFw8-hS9N2Y1QEhy5i35Kr4vEvuZ_NA-RRJzHkbdB-JEcCw@mail.gmail.com>
 <A0402530-5359-4195-BF3E-66BE5FDA69E8@gmail.com>
 <87oaopfloq.fsf@uwakimon.sk.tsukuba.ac.jp>
Message-ID: <871tllxood.fsf@vostro.rath.org>

On Feb 19 2015, "Stephen J. Turnbull" <stephen-Sn97VrDLz2sdnm+yROfE0A at public.gmane.org> wrote:
> Emacs has most of the features you're talking about in some form
> already.  I admit it is questionable whether the quality is up to the
> standards of Eclipse or Xcode.  For example, Emacs's completion
> feature is currently not based on the full AST and so doesn't come up
> to "intellisense" standards.

Are you sure? I believe that jedi effectively uses the AST. It actually
parses the code in a subordinate Python interpreter and uses its
introspection capabilities.

> Emacs has a generic text-property feature, which is used by its native
> IDE, called "CEDET", to mark program block structure.  I don't know if
> CEDET knows about Python for your purposes,

It does.

> but it is based on a generic parser called "semantic", so it wouldn't
> be hard to teach it.

Well, not so sure about that. There's the slight hurdle of learning
ELisp first.

> Emacs also has a separate Python-mode which may already have expand/
> collapse functionality

It does.

> Emacs is not to everybody's taste.  But its underlying design is very
> close to what you're talking about already, and most of the features
> are already in place.  Please look at it.

Make sure to look at Emacs 24. All of the above can also be set up for
Emacs 23, but using 24 will save you a lot of time.

Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

             "Time flies like an arrow, fruit flies like a Banana."

From wes.turner at gmail.com  Fri Feb 20 06:02:06 2015
From: wes.turner at gmail.com (Wes Turner)
Date: Thu, 19 Feb 2015 23:02:06 -0600
Subject: [Python-ideas] Accessible tools
In-Reply-To: <871tllxood.fsf@vostro.rath.org>
References: <D24E9030-1292-4EFB-8D0C-D410137A2956@gmail.com>
 <87wq3fi2sc.fsf@thinkpad.rath.org>
 <A26C2D65-54DD-4CB3-BA15-96F36BCB959A@gmail.com>
 <mc2puh$bur$1@ger.gmane.org>
 <CACfEFw93hquE1=sgQKc5Pyk_jfCqjEa=w8M5nN9qjNp6cWMSHA@mail.gmail.com>
 <CACfEFw9GvB5bV09FVyazKtO=LMo3ZEMotf8gDkSMW-uB0PWDiw@mail.gmail.com>
 <CACfEFw8-hS9N2Y1QEhy5i35Kr4vEvuZ_NA-RRJzHkbdB-JEcCw@mail.gmail.com>
 <A0402530-5359-4195-BF3E-66BE5FDA69E8@gmail.com>
 <87oaopfloq.fsf@uwakimon.sk.tsukuba.ac.jp>
 <871tllxood.fsf@vostro.rath.org>
Message-ID: <CACfEFw9EUPgy-OvsuVV-LuaGkkThZEiCa04p6ZMFf72jxq99QQ@mail.gmail.com>

On Feb 19, 2015 9:12 PM, "Nikolaus Rath" <Nikolaus at rath.org> wrote:
>
> On Feb 19 2015, "Stephen J. Turnbull" <
stephen-Sn97VrDLz2sdnm+yROfE0A at public.gmane.org> wrote:
> > Emacs has most of the features you're talking about in some form
> > already.  I admit it is questionable whether the quality is up to the
> > standards of Eclipse or Xcode.  For example, Emacs's completion
> > feature is currently not based on the full AST and so doesn't come up
> > to "intellisense" standards.
>
> > Are you sure? I believe that jedi effectively uses the AST. It actually
> > parses the code in a subordinate Python interpreter and uses its
> > introspection capabilities.

Jedi also parses docstrings.

There are a number of plugins for emacs and jedi: "(Jedi.el, elpy,
anaconda-mode, ycmd)" https://github.com/davidhalter/jedi/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150219/cce0dde3/attachment-0001.html>

From lkb.teichmann at gmail.com  Fri Feb 20 09:40:47 2015
From: lkb.teichmann at gmail.com (Martin Teichmann)
Date: Fri, 20 Feb 2015 09:40:47 +0100
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
In-Reply-To: <CADiSq7fCU0xN68aFA+2FEkprx1wE6vy7uUz9o85h+nq3udp-jg@mail.gmail.com>
References: <CAK9R32TNjkkJpxA6LC5GqU0digZv9TQZ=wuveA+CvjaxqBD50w@mail.gmail.com>
 <CA+=+wqAz6C5_QCLe2Cfc-uFi-5hWP7B-M9Q2yqi2K75UHDstcQ@mail.gmail.com>
 <CAK9R32RovaEawxzFf03Kq1f+WgC4NOvhWhxjLOwjyaSt90Nr5A@mail.gmail.com>
 <CADiSq7d0APs4c7y1pUJhdPQtmLssCdGaAfLVTe7NaZJX4960Zw@mail.gmail.com>
 <CAK9R32Ts=8YhWpyKVhJGrnD_2US0S6B+nZAEokRtz6s8ysnQcQ@mail.gmail.com>
 <CADiSq7dQAEfGiUY8wznd+p5KxBn6Fm3MXFTjDAddFvk55iUaGQ@mail.gmail.com>
 <CAK9R32R-PGho__+pfK34nhiUh4x-qMk7dYSYAjB2vskS=izmMQ@mail.gmail.com>
 <CADiSq7f3qNJgS=C6A_-6Rt1LCC=mJQWG7w1Oj-uWCrdF=suqyA@mail.gmail.com>
 <CAK9R32S6eM3zfAgSp1hnZLLgJ8aFcY8DPcfWWxX6f8jfB63gsw@mail.gmail.com>
 <CADiSq7fCU0xN68aFA+2FEkprx1wE6vy7uUz9o85h+nq3udp-jg@mail.gmail.com>
Message-ID: <CAK9R32QrrCgON=jaQak9_OS_CJHHhik6WwjVr+SkgAw406hQyA@mail.gmail.com>

Hi,

>>> Yes, populating __class__ happens inside the interpreter only after the
>>> class is fully constructed, so even though the *class* is fully initialised
>>> at the end of __init__, a reference hasn't been inserted into the cell yet.
>>> (The __class__ cell doesn't actually live on the class object - it's created
>>> implicitly by the interpreter and then referenced from the individual
>>> methods as a closure variable for each method that looks up "__class__" or
>>> "super")
>>>
>>> Python 2 doesn't provide the implicitly populated __class___ cell at all
>>> though, so you have to refer to the class object by its name - zero argument
>>> super is a Python 3 only feature.
>>
>> In short: PEP 422 in the current form is not back-portable.
>> With my modifications it is portable all the way down to python 2.7.
>>
>> PEP 422 being back-portable => people will immediately start using it.
>> PEP 422 not back-portable => it won't be used for several years to come.
>
> You haven't really explained why you think it's not backportable.
> You've suggested a backport may require some workarounds in order to
> handle cooperative multiple inheritance, but that's not even close to
> being the same thing as not being able to backported at all.

Well, your explanation up there is actually very good at explaining why
it isn't back-portable: at the time of metaclass.__init__ the __class__ has
not been initialized yet, so cooperative multiple inheritance will never
work. This holds true even for the two-parameter version of super():
as the __class__ has not yet been bound to anything, there is no way
to access it from within __class_init__. You say that some workarounds
would have to be done to handle cooperative multiple inheritance, yet
given that there is no way to tamper with __class__, I cannot see what
such a workaround would look like.

I would love to be proven wrong on the above statements, as I
actually like PEP 422. But back-portability, in my opinion, is actually
very important. So, maybe someone out there manages to write a
nice (or even ugly) backport of PEP 422?

> That said, if you'd like to write a competing proposal for the
> community's consideration, please go ahead (details on the process are
> in PEP 1)

I'll write something.

Greetings

Martin

From me+python at ixokai.io  Fri Feb 20 11:11:40 2015
From: me+python at ixokai.io (Stephen Hansen)
Date: Fri, 20 Feb 2015 02:11:40 -0800
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
In-Reply-To: <CA+=+wqABN7z=GA+1dOF-pHFT5eGP6hRXkPfq_iTOq5d2Ry73tg@mail.gmail.com>
References: <CAK9R32TNjkkJpxA6LC5GqU0digZv9TQZ=wuveA+CvjaxqBD50w@mail.gmail.com>
 <CA+=+wqAz6C5_QCLe2Cfc-uFi-5hWP7B-M9Q2yqi2K75UHDstcQ@mail.gmail.com>
 <CAK9R32RBN7e9H8AJ_3_SBRqB1aUpOtZWFWDK_wx2J3wB8=gh5g@mail.gmail.com>
 <CA+=+wqABN7z=GA+1dOF-pHFT5eGP6hRXkPfq_iTOq5d2Ry73tg@mail.gmail.com>
Message-ID: <CAM1gar7ZpmeXP2DZrUGqZy=bO6a9CMQ41Su-i5oOhPwxi9_kZw@mail.gmail.com>

>
> > I am trying to solve this problem. What I posted was just one simple
> > idea how to solve it. Another one is the idea I posted earlier, the
> > idea was to add metaclasses using __add__. People didn't like the
> > __add__, so I implemented that with a new method called merge,
> > it's here: https://github.com/tecki/cpython/commits/metaclass-merge
> >
> > Instead of yelling at me that my solutions are too complicated,
>
> Hello,
> I am very sorry that my post came across as yelling. I had no
> intention of that, though on re-reading it is rather harsh.
> I will try to be less confrontational in the future.
>


No. I'm sorry, but this needs to be responded to.

I'm a lurker, and find this discussion barely interesting at its best --
I've never had to combine metaclasses and no compelling case has been shown
to my mind to interest me in fixing a so-called problem.

That said: In no way, shape or form did Petr "yell" at you. In no way, shape
or form was his response not an entirely technical argument made with
detail and consideration. It's wholly inappropriate to respond to a
criticism or critique with an accusation of "yelling", or to imply your
point of view is being bullied, when it's simply someone showing disagreement.

No one yelled. No one was in any way even kind of abrasive or rude. Someone
disagreed and laid out why.

To come back on them with an accusation of yelling-- bullying, really-- is
inappropriate, at best. Please don't do it.

That said: I really don't think you've made a case for what this is
solving.

I've used metaclasses many times, and have never run into this issue. I
know there are some problems when Python is bridging with foreign object
models-- i.e., Qt-- but does this really solve them? Is it really needed,
and does it make anything easier? On my reading of the previous thread,
several people thought that metaclasses were nuanced and problematic, but
not that the problems you're trying to solve are fixable or even meaningful
in context. People saying "there are issues with metaclasses" does not
create a consensus to solve it by... this... thing... you're proposing. I
don't know what it's supposed to answer.

All in all, this entire thread just makes me lose interest immediately as
"not a problem", personally. What problem are you solving? You want to
combine metaclasses -- but that entire idea seems slightly nonsensical to
me, because how do you know what any given metaclass will do?

Meta-metaclasses sounds like a rabbit hole without a destination in mind.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150220/5bd828ed/attachment.html>

From encukou at gmail.com  Fri Feb 20 14:33:05 2015
From: encukou at gmail.com (Petr Viktorin)
Date: Fri, 20 Feb 2015 14:33:05 +0100
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
In-Reply-To: <CAM1gar7ZpmeXP2DZrUGqZy=bO6a9CMQ41Su-i5oOhPwxi9_kZw@mail.gmail.com>
References: <CAK9R32TNjkkJpxA6LC5GqU0digZv9TQZ=wuveA+CvjaxqBD50w@mail.gmail.com>
 <CA+=+wqAz6C5_QCLe2Cfc-uFi-5hWP7B-M9Q2yqi2K75UHDstcQ@mail.gmail.com>
 <CAK9R32RBN7e9H8AJ_3_SBRqB1aUpOtZWFWDK_wx2J3wB8=gh5g@mail.gmail.com>
 <CA+=+wqABN7z=GA+1dOF-pHFT5eGP6hRXkPfq_iTOq5d2Ry73tg@mail.gmail.com>
 <CAM1gar7ZpmeXP2DZrUGqZy=bO6a9CMQ41Su-i5oOhPwxi9_kZw@mail.gmail.com>
Message-ID: <CA+=+wqB1gjmZ9SMuRqwi1brWNpvaH=37hpfyN7A5QY9N5zB5ng@mail.gmail.com>

On Fri, Feb 20, 2015 at 11:11 AM, Stephen Hansen <me+python at ixokai.io> wrote:
>> > I am trying to solve this problem. What I posted was just one simple
>> > idea how to solve it. Another one is the idea I posted earlier, the
>> > idea was to add metaclasses using __add__. People didn't like the
>> > __add__, so I implemented that with a new method called merge,
>> > it's here: https://github.com/tecki/cpython/commits/metaclass-merge
>> >
>> > Instead of yelling at me that my solutions are too complicated,
>>
>> Hello,
>> I am very sorry that my post came across as yelling. I had no
>> intention of that, though on re-reading it is rather harsh.
>> I will try to be less confrontational in the future.
>
>
>
> No. I'm sorry, but this needs to be responded to.
>
> I'm a lurker, and find this discussion barely interesting at its best --
> I've never had to combine metaclasses and no compelling case has been shown
> to my mind to interest me in fixing a so-called problem.
>
> That said: In no way, shape or form did Petr "yell" at you. In no way, shape
> or form was his response not an entirely technical argument made with detail
> and consideration. It's wholly inappropriate to respond to a criticism or
> critique with an accusation of "yelling" or implying your point of view is
> being bullied when it's simply someone showing disagreement.

No, Martin had every right to call me out. I have offended, thus I was
rude. It was my mistake.
That said, I'm rude quite often: when I misjudge someone's
sensitivity to rudeness, chances are I'm rude. It happens, and I am
honestly sorry for it, so I try to correct it.
You might also notice that there was nothing I lost by apologizing.
And I would have nothing to gain by questioning Martin's right to feel
offended.

> No one yelled. No one was in any way even kind of abrasive or rude. Someone
> disagreed and laid out why.

Nonsense. Rudeness is a spectrum; it is impossible to be absolutely
non-abrasive, just as no code can be absolutely elegant.
I think that strictly adhering to some expected threshold of rudeness
in a list like this is harmful. It keeps some people out of the
conversation. It is much better to take individual people into
account. Being called out when something I write turns out to be
offensive is necessary for calibrating the proper level of rudeness.
I'd like to thank Martin for saying it directly, so I had a chance to
defuse a grudge promptly.

> To come back on them with an accusation of yelling-- bullying, really-- is
> inappropriate, at best. Please don't do it.
>
> That said: I really don't think you've made a case for what this is solving.
>
> I've used metaclasses many times, and have never run into this issue. I know
> there's some problems when Python is bridging with foreign object models--
> ie, QT-- but does this really solve it? Is it really needed, and what's
> easier? To my reading of the thread previous, there were several people
> thinking that metaclasses were nuanced and problematic, but not that the
> problems you're trying to solve are fixable or even meaningful in context.
> People saying "there are issues with metaclasses" do not create a consensus
> to solve it by... this... thing... you're proposing. I don't know what it's
> supposed to answer.
>
> All in all, this entire thread just makes me disinterested immediately as
> "not a problem", personally. What's wrong that you're solving? You want to
> combine metaclasses -- but that entire idea seems slightly nonsensical to me
> because how do you know what any metaclass will do?
>
> Meta-metaclasses sounds like a rabbit hole without a destination in mind.

You might not have run into the problem at hand, but it clearly is a
problem for others. This thread is looking for a solution that would
not be unnecessarily complicated for people who don't care.

I think that at this point, the discussion is past the questions you
are asking. Most have been asked, and mostly answered, in different
words above. If you find one that hasn't, could you ask a more focused
question?

From steve at pearwood.info  Fri Feb 20 17:29:02 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Sat, 21 Feb 2015 03:29:02 +1100
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
In-Reply-To: <CA+=+wqB1gjmZ9SMuRqwi1brWNpvaH=37hpfyN7A5QY9N5zB5ng@mail.gmail.com>
References: <CAK9R32TNjkkJpxA6LC5GqU0digZv9TQZ=wuveA+CvjaxqBD50w@mail.gmail.com>
 <CA+=+wqAz6C5_QCLe2Cfc-uFi-5hWP7B-M9Q2yqi2K75UHDstcQ@mail.gmail.com>
 <CAK9R32RBN7e9H8AJ_3_SBRqB1aUpOtZWFWDK_wx2J3wB8=gh5g@mail.gmail.com>
 <CA+=+wqABN7z=GA+1dOF-pHFT5eGP6hRXkPfq_iTOq5d2Ry73tg@mail.gmail.com>
 <CAM1gar7ZpmeXP2DZrUGqZy=bO6a9CMQ41Su-i5oOhPwxi9_kZw@mail.gmail.com>
 <CA+=+wqB1gjmZ9SMuRqwi1brWNpvaH=37hpfyN7A5QY9N5zB5ng@mail.gmail.com>
Message-ID: <20150220162901.GF7655@ando.pearwood.info>

On Fri, Feb 20, 2015 at 02:33:05PM +0100, Petr Viktorin wrote:

> No, Martin had every right to call me out. I have offended, thus I was
> rude. It was my mistake.

One can give offense, or the other can take offense. There are people 
who take offense at the mere existence of homosexuals, or women who 
show their face in public, or black people wanting access to the same 
opportunities as white people. That does not mean that homosexuals, 
women or blacks have *given* offense, only that some people will *take* 
offense when it suits their ulterior motives.

I do not believe you said anything to Martin that gives offense. I'm not 
even sure that Martin took offense. As I recall, he merely said he had 
been yelled at, which is a much milder comment: "Instead of yelling at 
me...". All the posts I've read have been quiet, respectful, and have 
treated Martin fairly even if they have been critical of his ideas.


> Being called out when something I write turns out to be
> offensive is necessary for calibrating the proper level of rudeness.

This is a community. Neither Martin nor you get to decide alone what the 
"proper level of rudeness" is.

We all should keep in mind the meaning of the word "offend", and
not mistake mere vigorous disagreement for offensive behaviour.

    Offend: To displease; to make angry; to affront.
    (Websters)

    An affront is a designed mark of disrespect, usually
    in the presence of others. An insult is a personal
    attack either by words or actions, designed to
    humiliate or degrade. An outrage is an act of extreme
    and violent insult or abuse. An affront piques and
    mortifies; an insult irritates and provokes; an
    outrage wounds and injures.
    (The Collaborative International Dictionary of English)



-- 
Steve

From stephen at xemacs.org  Sat Feb 21 07:37:40 2015
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Sat, 21 Feb 2015 15:37:40 +0900
Subject: [Python-ideas] Accessible tools
In-Reply-To: <871tllxood.fsf@vostro.rath.org>
References: <D24E9030-1292-4EFB-8D0C-D410137A2956@gmail.com>
 <87wq3fi2sc.fsf@thinkpad.rath.org>
 <A26C2D65-54DD-4CB3-BA15-96F36BCB959A@gmail.com>
 <mc2puh$bur$1@ger.gmane.org>
 <CACfEFw93hquE1=sgQKc5Pyk_jfCqjEa=w8M5nN9qjNp6cWMSHA@mail.gmail.com>
 <CACfEFw9GvB5bV09FVyazKtO=LMo3ZEMotf8gDkSMW-uB0PWDiw@mail.gmail.com>
 <CACfEFw8-hS9N2Y1QEhy5i35Kr4vEvuZ_NA-RRJzHkbdB-JEcCw@mail.gmail.com>
 <A0402530-5359-4195-BF3E-66BE5FDA69E8@gmail.com>
 <87oaopfloq.fsf@uwakimon.sk.tsukuba.ac.jp>
 <871tllxood.fsf@vostro.rath.org>
Message-ID: <874mqfg47f.fsf@uwakimon.sk.tsukuba.ac.jp>

We should probably take this off-list, since the OP hasn't actually
shown interest yet.  Assuming there are any replies ;-), Reply-To is set
to me and I will summarize.

Nikolaus Rath writes:
 > On Feb 19 2015, "Stephen J. Turnbull" <stephen-Sn97VrDLz2sdnm+yROfE0A at public.gmane.org> wrote:

 > > Emacs has most of the features you're talking about in some form
 > > already.  I admit it is questionable whether the quality is up to the
 > > standards of Eclipse or Xcode.  For example, Emacs's completion
 > > feature is currently not based on the full AST and so doesn't come up
 > > to "intellisense" standards.
 > 
 > Are you sure? I believe that jedi effectively uses the AST. It actually
 > parses the code in a subordinate Python interpreter and uses its
 > introspection capabilities.

I'm sure you're right about jedi.  I was speaking in general.  As
perhaps you are aware, emacs-devel recently had an intense flamewar
over using the ASTs from LLVM and GCC, and that proposal is currently
on hold (RMS is "studying the matter").

 > > but it is based on a generic parser called "semantic", so it wouldn't
 > > be hard to teach it.
 > 
 > Well, not so sure about that. There's the slight hurdle of learning
 > ELisp first.

I know Elisp (and have already volunteered some assistance), as does
Dr. Raman (whose availability I can't speak to, but I'm sure he'd
offer at least moral support).  The Emacs developers will surely be
helpful.  I believe the effort would be small enough to be justified.

 > Make sure to look at Emacs 24. All of the above can also be set up
 > for Emacs 23, but using 24 will save you a lot of time.

I would say go straight to trunk (which will become Emacs 25), as it will
be in pretest in the near future.  By the time the new IDE is
release-ready, Emacs 24 will be out of support (Emacs only supports the
current release, so that's less than a year away).


From ncoghlan at gmail.com  Sat Feb 21 14:45:09 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 21 Feb 2015 23:45:09 +1000
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
In-Reply-To: <CAK9R32QrrCgON=jaQak9_OS_CJHHhik6WwjVr+SkgAw406hQyA@mail.gmail.com>
References: <CAK9R32TNjkkJpxA6LC5GqU0digZv9TQZ=wuveA+CvjaxqBD50w@mail.gmail.com>
 <CA+=+wqAz6C5_QCLe2Cfc-uFi-5hWP7B-M9Q2yqi2K75UHDstcQ@mail.gmail.com>
 <CAK9R32RovaEawxzFf03Kq1f+WgC4NOvhWhxjLOwjyaSt90Nr5A@mail.gmail.com>
 <CADiSq7d0APs4c7y1pUJhdPQtmLssCdGaAfLVTe7NaZJX4960Zw@mail.gmail.com>
 <CAK9R32Ts=8YhWpyKVhJGrnD_2US0S6B+nZAEokRtz6s8ysnQcQ@mail.gmail.com>
 <CADiSq7dQAEfGiUY8wznd+p5KxBn6Fm3MXFTjDAddFvk55iUaGQ@mail.gmail.com>
 <CAK9R32R-PGho__+pfK34nhiUh4x-qMk7dYSYAjB2vskS=izmMQ@mail.gmail.com>
 <CADiSq7f3qNJgS=C6A_-6Rt1LCC=mJQWG7w1Oj-uWCrdF=suqyA@mail.gmail.com>
 <CAK9R32S6eM3zfAgSp1hnZLLgJ8aFcY8DPcfWWxX6f8jfB63gsw@mail.gmail.com>
 <CADiSq7fCU0xN68aFA+2FEkprx1wE6vy7uUz9o85h+nq3udp-jg@mail.gmail.com>
 <CAK9R32QrrCgON=jaQak9_OS_CJHHhik6WwjVr+SkgAw406hQyA@mail.gmail.com>
Message-ID: <CADiSq7dP7fgBV2S_uUJJJTGXEruxmXkGSe1LeSTRo_JryJz50Q@mail.gmail.com>

On 20 February 2015 at 18:40, Martin Teichmann <lkb.teichmann at gmail.com> wrote:
> Hi,
>
>>>> Yes, populating __class__ happens inside the interpreter only after the
>>>> class is fully constructed, so even though the *class* is fully initialised
>>>> at the end of __init__, a reference hasn't been inserted into the cell yet.
>>>> (The __class__ cell doesn't actually live on the class object - it's created
>>>> implicitly by the interpreter and then referenced from the individual
>>>> methods as a closure variable for each method that looks up "__class__" or
>>>> "super")
>>>>
>>>> Python 2 doesn't provide the implicitly populated __class___ cell at all
>>>> though, so you have to refer to the class object by its name - zero argument
>>>> super is a Python 3 only feature.
>>>
>>> In short: PEP 422 in the current form is not back-portable.
>>> With my modifications it is portable all the way down to python 2.7.
>>>
>>> PEP 422 being back-portable => people will immediately start using it.
>>> PEP 422 not back-portable => it won't be used for several years to come.
>>
>> You haven't really explained why you think it's not backportable.
>> You've suggested a backport may require some workarounds in order to
>> handle cooperative multiple inheritance, but that's not even close to
>> being the same thing as not being able to be backported at all.
>
> Well, your explanation up there is actually very good at explaining why
> it isn't back-portable: at the time of metaclass.__init__ the __class__ has
> not been initialized yet, so cooperative multiple inheritance will never
> work. This holds true even for the two-parameter version of super():
> as the __class__ has not yet been bound to anything, there is no way
> to access it from within __class_init__. You say that some workarounds
> would have to be done to handle cooperative multiple inheritance, yet
> given that there is no way to tamper with __class__ I cannot see what
> such a workaround would look like.
>
> I would love to be proven wrong on the above statements, as I
> actually like PEP 422. But back-portability, in my opinion, is actually
> very important. So, maybe someone out there manages to write a
> nice (or even ugly) backport of PEP 422?

Backportability is at best a nice-to-have when it comes to designing
changes to core semantics, and it certainly doesn't justify additional
design complexity.

That's why it's better to focus on making actual *usage* simpler, and
it may be that a __subclass_init__ hook is simpler overall than a
__class_init__. Certainly one point in its favour is that opting in to
running the subclass init on the base class with a decorator would be
relatively easy, while opting out of running __class_init__ is likely
to be a bit tricky.
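For illustration, here is a rough sketch of how a __subclass_init__-style hook could be emulated today with a metaclass. The hook name and the "skip the defining class" rule are taken from this discussion, not from any accepted PEP, so treat this as a sketch of the idea rather than a settled API:

```python
class SubclassInitMeta(type):
    def __init__(cls, name, bases, ns):
        super().__init__(name, bases, ns)
        # Fire the hook for subclasses only: the class whose body defines
        # __subclass_init__ is skipped, which is what makes "also run it on
        # the base class" an easy opt-in (e.g. via a decorator).
        if '__subclass_init__' not in ns:
            cls.__subclass_init__()

class Base(metaclass=SubclassInitMeta):
    initialised = []

    @classmethod
    def __subclass_init__(cls):
        cls.initialised.append(cls.__name__)

class Child(Base):
    pass

print(Base.initialised)   # ['Child'] - Base itself was skipped
```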

Regards,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Sat Feb 21 14:56:11 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 21 Feb 2015 23:56:11 +1000
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
In-Reply-To: <CADiSq7dP7fgBV2S_uUJJJTGXEruxmXkGSe1LeSTRo_JryJz50Q@mail.gmail.com>
References: <CAK9R32TNjkkJpxA6LC5GqU0digZv9TQZ=wuveA+CvjaxqBD50w@mail.gmail.com>
 <CA+=+wqAz6C5_QCLe2Cfc-uFi-5hWP7B-M9Q2yqi2K75UHDstcQ@mail.gmail.com>
 <CAK9R32RovaEawxzFf03Kq1f+WgC4NOvhWhxjLOwjyaSt90Nr5A@mail.gmail.com>
 <CADiSq7d0APs4c7y1pUJhdPQtmLssCdGaAfLVTe7NaZJX4960Zw@mail.gmail.com>
 <CAK9R32Ts=8YhWpyKVhJGrnD_2US0S6B+nZAEokRtz6s8ysnQcQ@mail.gmail.com>
 <CADiSq7dQAEfGiUY8wznd+p5KxBn6Fm3MXFTjDAddFvk55iUaGQ@mail.gmail.com>
 <CAK9R32R-PGho__+pfK34nhiUh4x-qMk7dYSYAjB2vskS=izmMQ@mail.gmail.com>
 <CADiSq7f3qNJgS=C6A_-6Rt1LCC=mJQWG7w1Oj-uWCrdF=suqyA@mail.gmail.com>
 <CAK9R32S6eM3zfAgSp1hnZLLgJ8aFcY8DPcfWWxX6f8jfB63gsw@mail.gmail.com>
 <CADiSq7fCU0xN68aFA+2FEkprx1wE6vy7uUz9o85h+nq3udp-jg@mail.gmail.com>
 <CAK9R32QrrCgON=jaQak9_OS_CJHHhik6WwjVr+SkgAw406hQyA@mail.gmail.com>
 <CADiSq7dP7fgBV2S_uUJJJTGXEruxmXkGSe1LeSTRo_JryJz50Q@mail.gmail.com>
Message-ID: <CADiSq7dfqOwr-APJuDePYOL6vCXRU=Ay0wfwQGgvmJ8efeqL0Q@mail.gmail.com>

On 21 February 2015 at 23:45, Nick Coghlan <ncoghlan at gmail.com> wrote:
> On 20 February 2015 at 18:40, Martin Teichmann <lkb.teichmann at gmail.com> wrote:
>> Hi,
>>
>>>>> Yes, populating __class__ happens inside the interpreter only after the
>>>>> class is fully constructed, so even though the *class* is fully initialised
>>>>> at the end of __init__, a reference hasn't been inserted into the cell yet.
>>>>> (The __class__ cell doesn't actually live on the class object - it's created
>>>>> implicitly by the interpreter and then referenced from the individual
>>>>> methods as a closure variable for each method that looks up "__class__" or
>>>>> "super")
>>>>>
>>>>> Python 2 doesn't provide the implicitly populated __class___ cell at all
>>>>> though, so you have to refer to the class object by its name - zero argument
>>>>> super is a Python 3 only feature.
>>>>
>>>> In short: PEP 422 in the current form is not back-portable.
>>>> With my modifications it is portable all the way down to python 2.7.
>>>>
>>>> PEP 422 being back-portable => people will immediately start using it.
>>>> PEP 422 not back-portable => it won't be used for several years to come.
>>>
>>> You haven't really explained why you think it's not backportable.
>>> You've suggested a backport may require some workarounds in order to
>>> handle cooperative multiple inheritance, but that's not even close to
>>> being the same thing as not being able to be backported at all.
>>
>> Well, your explanation up there is actually very good at explaining why
>> it isn't back-portable: at the time of metaclass.__init__ the __class__ has
>> not been initialized yet, so cooperative multiple inheritance will never
>> work. This holds true even for the two-parameter version of super():
>> as the __class__ has not yet been bound to anything, there is no way
>> to access it from within __class_init__. You say that some workarounds
>> would have to be done to handle cooperative multiple inheritance, yet
>> given that there is no way to tamper with __class__ I cannot see what
>> such a workaround would look like.
>>
>> I would love to be proven wrong on the above statements, as I
>> actually like PEP 422. But back-portability, in my opinion, is actually
>> very important. So, maybe someone out there manages to write a
>> nice (or even ugly) backport of PEP 422?
>
> Backportability is at best a nice-to-have when it comes to designing
> changes to core semantics, and it certainly doesn't justify additional
> design complexity.
>
> That's why it's better to focus on making actual *usage* simpler, and
> it may be that a __subclass_init__ hook is simpler overall than a
> __class_init__. Certainly one point in its favour is that opting in to
> running the subclass init on the base class with a decorator would be
> relatively easy, while opting out of running __class_init__ is likely
> to be a bit tricky.

*sigh* I'm already seeing a definite problem with remembering whether
it's init_class vs class_init, especially since the older Zope version
uses the latter name :P

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From lkb.teichmann at gmail.com  Sun Feb 22 12:35:44 2015
From: lkb.teichmann at gmail.com (Martin Teichmann)
Date: Sun, 22 Feb 2015 12:35:44 +0100
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
In-Reply-To: <CADiSq7dP7fgBV2S_uUJJJTGXEruxmXkGSe1LeSTRo_JryJz50Q@mail.gmail.com>
References: <CAK9R32TNjkkJpxA6LC5GqU0digZv9TQZ=wuveA+CvjaxqBD50w@mail.gmail.com>
 <CA+=+wqAz6C5_QCLe2Cfc-uFi-5hWP7B-M9Q2yqi2K75UHDstcQ@mail.gmail.com>
 <CAK9R32RovaEawxzFf03Kq1f+WgC4NOvhWhxjLOwjyaSt90Nr5A@mail.gmail.com>
 <CADiSq7d0APs4c7y1pUJhdPQtmLssCdGaAfLVTe7NaZJX4960Zw@mail.gmail.com>
 <CAK9R32Ts=8YhWpyKVhJGrnD_2US0S6B+nZAEokRtz6s8ysnQcQ@mail.gmail.com>
 <CADiSq7dQAEfGiUY8wznd+p5KxBn6Fm3MXFTjDAddFvk55iUaGQ@mail.gmail.com>
 <CAK9R32R-PGho__+pfK34nhiUh4x-qMk7dYSYAjB2vskS=izmMQ@mail.gmail.com>
 <CADiSq7f3qNJgS=C6A_-6Rt1LCC=mJQWG7w1Oj-uWCrdF=suqyA@mail.gmail.com>
 <CAK9R32S6eM3zfAgSp1hnZLLgJ8aFcY8DPcfWWxX6f8jfB63gsw@mail.gmail.com>
 <CADiSq7fCU0xN68aFA+2FEkprx1wE6vy7uUz9o85h+nq3udp-jg@mail.gmail.com>
 <CAK9R32QrrCgON=jaQak9_OS_CJHHhik6WwjVr+SkgAw406hQyA@mail.gmail.com>
 <CADiSq7dP7fgBV2S_uUJJJTGXEruxmXkGSe1LeSTRo_JryJz50Q@mail.gmail.com>
Message-ID: <CAK9R32TzJDSnSnREtrf4ACO1xfjLYJ6nFPnpsX4j1jjujSMHDg@mail.gmail.com>

> Backportability is at best a nice-to-have when it comes to designing
> changes to core semantics, and it certainly doesn't justify additional
> design complexity.

Speaking of which, I just checked out Daniel's code which was
proposed as an implementation of PEP 422. To my astonishment,
it does not do what I (and others) had expected, it does not add
new functionality to type.__new__ or type.__init__, instead the new
machinery is in __builtins__.__build_class__. This has weird consequences,
e.g. if you create a new class by writing
type("Spam", (SpamBase,), {}), __class_init__ will not be called.
I consider this a really unexpected and not desirable behavior.
Sure, types.new_class has been modified accordingly, but
still.

So I think that the current version of PEP 422 unnecessarily adds
some unexpected complexity, all of which could be easily avoided
with my __init_subclass__ (or __subclass_init__? whatever)
approach.

Another detail: I think it could be useful if the additional
**kwargs to the class statement were forwarded to
__(sub)class_init__.
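The same split already exists today with __prepare__, which a direct three-argument call to the metaclass likewise skips while types.new_class runs the full class-creation protocol. A small demonstration (class names are arbitrary):

```python
import types

class Meta(type):
    prepared = []

    @classmethod
    def __prepare__(mcs, name, bases, **kwds):
        # Record every class whose creation went through __prepare__.
        mcs.prepared.append(name)
        return {}

class Base(metaclass=Meta):
    pass

# Calling the metaclass directly bypasses __prepare__ entirely ...
Spam = Meta("Spam", (Base,), {})

# ... while types.new_class() runs the full protocol, __prepare__ included.
Eggs = types.new_class("Eggs", (Base,))

print(Meta.prepared)   # ['Base', 'Eggs'] - no 'Spam'
```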

Greetings

Martin

PS there has been some discussion about politeness on this list.
I apologize for having been harsh. I hope we can all get
back to discussing python and how to improve it!
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150222/0f43eb68/attachment.html>

From ncoghlan at gmail.com  Sun Feb 22 15:25:43 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 23 Feb 2015 00:25:43 +1000
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
In-Reply-To: <CAK9R32TzJDSnSnREtrf4ACO1xfjLYJ6nFPnpsX4j1jjujSMHDg@mail.gmail.com>
References: <CAK9R32TNjkkJpxA6LC5GqU0digZv9TQZ=wuveA+CvjaxqBD50w@mail.gmail.com>
 <CA+=+wqAz6C5_QCLe2Cfc-uFi-5hWP7B-M9Q2yqi2K75UHDstcQ@mail.gmail.com>
 <CAK9R32RovaEawxzFf03Kq1f+WgC4NOvhWhxjLOwjyaSt90Nr5A@mail.gmail.com>
 <CADiSq7d0APs4c7y1pUJhdPQtmLssCdGaAfLVTe7NaZJX4960Zw@mail.gmail.com>
 <CAK9R32Ts=8YhWpyKVhJGrnD_2US0S6B+nZAEokRtz6s8ysnQcQ@mail.gmail.com>
 <CADiSq7dQAEfGiUY8wznd+p5KxBn6Fm3MXFTjDAddFvk55iUaGQ@mail.gmail.com>
 <CAK9R32R-PGho__+pfK34nhiUh4x-qMk7dYSYAjB2vskS=izmMQ@mail.gmail.com>
 <CADiSq7f3qNJgS=C6A_-6Rt1LCC=mJQWG7w1Oj-uWCrdF=suqyA@mail.gmail.com>
 <CAK9R32S6eM3zfAgSp1hnZLLgJ8aFcY8DPcfWWxX6f8jfB63gsw@mail.gmail.com>
 <CADiSq7fCU0xN68aFA+2FEkprx1wE6vy7uUz9o85h+nq3udp-jg@mail.gmail.com>
 <CAK9R32QrrCgON=jaQak9_OS_CJHHhik6WwjVr+SkgAw406hQyA@mail.gmail.com>
 <CADiSq7dP7fgBV2S_uUJJJTGXEruxmXkGSe1LeSTRo_JryJz50Q@mail.gmail.com>
 <CAK9R32TzJDSnSnREtrf4ACO1xfjLYJ6nFPnpsX4j1jjujSMHDg@mail.gmail.com>
Message-ID: <CADiSq7d-wv_VapbAF6o74wwNYF9bZ9MSf-QWZ5C7NZdMPXNsPQ@mail.gmail.com>

On 22 Feb 2015 21:36, "Martin Teichmann" <lkb.teichmann at gmail.com> wrote:
>
>
> > Backportability is at best a nice-to-have when it comes to designing
> > changes to core semantics, and it certainly doesn't justify additional
> > design complexity.
>
> Speaking of which, I just checked out Daniel's code which was
> proposed as an implementation of PEP 422. To my astonishment,
> it does not do what I (and others) had expected, it does not add
> new functionality to type.__new__ or type.__init__, instead the new
> machinery is in __builtins__.__build_class__.

There are some specific details that can't be implemented properly in
Python 3 using a metaclass, that's one of the core points of the PEP.

Specifically, the __class__ cell can only be populated *after* the object
has already been created, so the new hook has to be introduced later in the
class initialisation process, or it will suffer from the same limitation
metaclasses do (i.e. zero-argument super() and any other references to
__class__ won't work).
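The timing can be observed directly: a method that refers to __class__ carries a closure cell, and that cell is still empty while the metaclass is constructing the class. A small demonstration (not part of the PEP; the probe is checked in __new__, before type.__new__ runs):

```python
captured = []

class Meta(type):
    def __new__(mcs, name, bases, ns):
        fn = ns.get('probe')
        if fn is not None:
            cell = fn.__closure__[0]   # the __class__ cell
            try:
                captured.append(cell.cell_contents)
            except ValueError:          # the cell is still empty here
                captured.append('empty')
        return super().__new__(mcs, name, bases, ns)

class C(metaclass=Meta):
    def probe(self):
        return __class__

print(captured)          # ['empty'] - not yet populated during creation
print(C().probe() is C)  # True - populated once creation completes
```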

>This has weird consequences, e.g. if you create a new class by writing
> type("Spam", (SpamBase,), {}), __class_init__ will not be called.

This code is inherently broken in Python 3, as it bypasses __prepare__
methods.

> I consider this a really unexpected and not desirable behavior.

That ship already sailed years ago when __prepare__ was introduced.

> Sure, types.new_class has been modified accordingly, but
> still.
>
> So I think that the current version of PEP 422 unnecessarily adds
> some unexpected complexity, all of which could be easily avoided
> with my __init_subclass__ (or __subclass_init__? whatever)
> approach.

I don't currently believe you're correct about this, but a PEP and
reference implementation may persuade me that you've found a different way
of achieving the various objectives of PEP 422.

> Another detail: I think it could be useful if the additional
> **kwargs to the class statement were forwarded to
> __(sub)class_init__.

No, that kind of variable signature makes cooperative multiple inheritance
a nightmare to implement. The PEP 422 hook is deliberately designed to act
like an inheritable class decorator that gets set from within the body of
the class definition.

It's *not* intended to provide the full power and flexibility offered by a
metaclass - it's designed to let folks that understand class decorators and
class methods combine the two concepts to define a class decorator that
gets called implicitly for both the current class and all subclasses.
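That "inheritable class decorator" framing can be sketched with today's tools; the registry and decorator names below are invented for illustration, and the metaclass only emulates what the implicit hook would do:

```python
registry = []

def register(cls):
    """A plain class decorator: runs once, on the class it decorates."""
    registry.append(cls.__name__)
    return cls

@register
class Plugin:
    pass          # subclasses of Plugin are NOT registered automatically

# The PEP 422 hook would act like a decorator applied implicitly to the
# current class and to every subclass; a metaclass can emulate that today:
class AutoRegister(type):
    def __init__(cls, name, bases, ns):
        super().__init__(name, bases, ns)
        register(cls)

class Base(metaclass=AutoRegister):
    pass

class Child(Base):    # registered with no explicit decoration
    pass

print(registry)   # ['Plugin', 'Base', 'Child']
```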

Regards,
Nick.

>
> Greetings
>
> Martin
>
> PS there has been some discussion about politeness on this list.
> I apologize for having been harsh. I hope we can all get
> back to discussing python and how to improve it!
>
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150223/b5b360a8/attachment.html>

From ncoghlan at gmail.com  Sun Feb 22 15:54:45 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 23 Feb 2015 00:54:45 +1000
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
In-Reply-To: <CADiSq7d-wv_VapbAF6o74wwNYF9bZ9MSf-QWZ5C7NZdMPXNsPQ@mail.gmail.com>
References: <CAK9R32TNjkkJpxA6LC5GqU0digZv9TQZ=wuveA+CvjaxqBD50w@mail.gmail.com>
 <CA+=+wqAz6C5_QCLe2Cfc-uFi-5hWP7B-M9Q2yqi2K75UHDstcQ@mail.gmail.com>
 <CAK9R32RovaEawxzFf03Kq1f+WgC4NOvhWhxjLOwjyaSt90Nr5A@mail.gmail.com>
 <CADiSq7d0APs4c7y1pUJhdPQtmLssCdGaAfLVTe7NaZJX4960Zw@mail.gmail.com>
 <CAK9R32Ts=8YhWpyKVhJGrnD_2US0S6B+nZAEokRtz6s8ysnQcQ@mail.gmail.com>
 <CADiSq7dQAEfGiUY8wznd+p5KxBn6Fm3MXFTjDAddFvk55iUaGQ@mail.gmail.com>
 <CAK9R32R-PGho__+pfK34nhiUh4x-qMk7dYSYAjB2vskS=izmMQ@mail.gmail.com>
 <CADiSq7f3qNJgS=C6A_-6Rt1LCC=mJQWG7w1Oj-uWCrdF=suqyA@mail.gmail.com>
 <CAK9R32S6eM3zfAgSp1hnZLLgJ8aFcY8DPcfWWxX6f8jfB63gsw@mail.gmail.com>
 <CADiSq7fCU0xN68aFA+2FEkprx1wE6vy7uUz9o85h+nq3udp-jg@mail.gmail.com>
 <CAK9R32QrrCgON=jaQak9_OS_CJHHhik6WwjVr+SkgAw406hQyA@mail.gmail.com>
 <CADiSq7dP7fgBV2S_uUJJJTGXEruxmXkGSe1LeSTRo_JryJz50Q@mail.gmail.com>
 <CAK9R32TzJDSnSnREtrf4ACO1xfjLYJ6nFPnpsX4j1jjujSMHDg@mail.gmail.com>
 <CADiSq7d-wv_VapbAF6o74wwNYF9bZ9MSf-QWZ5C7NZdMPXNsPQ@mail.gmail.com>
Message-ID: <CADiSq7cH3dqfx1RwYm2sutbrazXNW+8SyEPPyrBg1_jOBiAyfQ@mail.gmail.com>

On 23 February 2015 at 00:25, Nick Coghlan <ncoghlan at gmail.com> wrote:
> On 22 Feb 2015 21:36, "Martin Teichmann" <lkb.teichmann at gmail.com> wrote:
>> Another detail: I think it could be useful if the additional
>> **kwargs to the class statement were forwarded to
>> __(sub)class_init__.
>
> No, that kind of variable signature makes cooperative multiple inheritance a
> nightmare to implement. The PEP 422 hook is deliberately designed to act
> like an inheritable class decorator that gets set from within the body of
> the class definition.
>
> It's *not* intended to provide the full power and flexibility offered by a
> metaclass - it's designed to let folks that understand class decorators and
> class methods combine the two concepts to define a class decorator that gets
> called implicitly for both the current class and all subclasses.

Ah, thank you: this discussion kicked a new naming idea loose in my
brain, so I just pushed an update to PEP 422 that renames the proposed
__init_class__ hook to be called __autodecorate__ instead.

"It's an implicitly invoked class decorator that gets inherited by
subclasses" is really the best way to conceive of the proposed hook,
so having "init" as part of the name not only made it hard to remember
whether the hook was __init_class__ or __class_init__, it also got
people thinking along completely the wrong lines.

By contrast, the "auto" in "autodecorate" only makes sense as a
prefix, and the inclusion of the word "decorate" in the name means it
should be far more effective at triggering the reader's "class
decorator" mental model rather than their "class initialisation" one.

Regards,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From jsbueno at python.org.br  Sun Feb 22 16:15:21 2015
From: jsbueno at python.org.br (Joao S. O. Bueno)
Date: Sun, 22 Feb 2015 12:15:21 -0300
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
In-Reply-To: <CADiSq7cH3dqfx1RwYm2sutbrazXNW+8SyEPPyrBg1_jOBiAyfQ@mail.gmail.com>
References: <CAK9R32TNjkkJpxA6LC5GqU0digZv9TQZ=wuveA+CvjaxqBD50w@mail.gmail.com>
 <CA+=+wqAz6C5_QCLe2Cfc-uFi-5hWP7B-M9Q2yqi2K75UHDstcQ@mail.gmail.com>
 <CAK9R32RovaEawxzFf03Kq1f+WgC4NOvhWhxjLOwjyaSt90Nr5A@mail.gmail.com>
 <CADiSq7d0APs4c7y1pUJhdPQtmLssCdGaAfLVTe7NaZJX4960Zw@mail.gmail.com>
 <CAK9R32Ts=8YhWpyKVhJGrnD_2US0S6B+nZAEokRtz6s8ysnQcQ@mail.gmail.com>
 <CADiSq7dQAEfGiUY8wznd+p5KxBn6Fm3MXFTjDAddFvk55iUaGQ@mail.gmail.com>
 <CAK9R32R-PGho__+pfK34nhiUh4x-qMk7dYSYAjB2vskS=izmMQ@mail.gmail.com>
 <CADiSq7f3qNJgS=C6A_-6Rt1LCC=mJQWG7w1Oj-uWCrdF=suqyA@mail.gmail.com>
 <CAK9R32S6eM3zfAgSp1hnZLLgJ8aFcY8DPcfWWxX6f8jfB63gsw@mail.gmail.com>
 <CADiSq7fCU0xN68aFA+2FEkprx1wE6vy7uUz9o85h+nq3udp-jg@mail.gmail.com>
 <CAK9R32QrrCgON=jaQak9_OS_CJHHhik6WwjVr+SkgAw406hQyA@mail.gmail.com>
 <CADiSq7dP7fgBV2S_uUJJJTGXEruxmXkGSe1LeSTRo_JryJz50Q@mail.gmail.com>
 <CAK9R32TzJDSnSnREtrf4ACO1xfjLYJ6nFPnpsX4j1jjujSMHDg@mail.gmail.com>
 <CADiSq7d-wv_VapbAF6o74wwNYF9bZ9MSf-QWZ5C7NZdMPXNsPQ@mail.gmail.com>
 <CADiSq7cH3dqfx1RwYm2sutbrazXNW+8SyEPPyrBg1_jOBiAyfQ@mail.gmail.com>
Message-ID: <CAH0mxTQ3g0jg_5PKsp_xS4Ut_BFVReXxc_HgNUemO2xgmUSLeQ@mail.gmail.com>

On 22 February 2015 at 11:54, Nick Coghlan <ncoghlan at gmail.com> wrote:
>
> Ah, thank you: this discussion kicked a new naming idea loose in my
> brain, so I just pushed an update to PEP 422 that renames the proposed
> __init_class__ hook to be called __autodecorate__ instead.
>
> "It's an implicitly invoked class decorator that gets inherited by
> subclasses" is really the best way to conceive of the proposed hook,
> so having "init" as part of the name not only made it hard to remember
> whether the hook was __init_class__ or __class_init__, it also got
> people thinking along completely the wrong lines.
>
> By contrast, the "auto" in "autodecorate" only makes sense as a
> prefix, and the inclusion of the word "decorate" in the name means it
> should be far more effective at triggering the reader's "class
> decorator" mental model rather than their "class initialisation" one.

Why not just "__decorate__" instead?

From liam.marsh.home at gmail.com  Sun Feb 22 18:36:03 2015
From: liam.marsh.home at gmail.com (Liam Marsh)
Date: Sun, 22 Feb 2015 18:36:03 +0100
Subject: [Python-ideas] pep 397
In-Reply-To: <m9cerb$tst$1@ger.gmane.org>
References: <CACPPHzsuATV4Gkn2fCE-P3Tr4frjQ9TuEATU5cg6q3vzR9pS0w@mail.gmail.com>
 <m9cerb$tst$1@ger.gmane.org>
Message-ID: <CACPPHzuaOyjjkUpwqYyaqEBWTkON24bi76sEcPestOpE1x=HJw@mail.gmail.com>

"On Windows, I pin the Start -> Pythonx.y -> Idle x.y icons to my task bar
so I can select either python + idle combination with a mouse click."
yes, but this doesn't fix the "right click -> edit with idle" problem
(maybe calling "py.exe --edit <file>" instead of "idle.bat <file>" on
windows will fix it)
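For reference, the PEP 397 mechanism under discussion works through the shebang line, which the Windows launcher (py.exe) parses itself. A minimal script that the launcher would dispatch to Python 3 regardless of the system default (the "--edit" flag above is Liam's hypothetical, not an existing py.exe option):

```python
#!python3
# On Windows, py.exe reads the shebang above and launches this file with an
# installed Python 3; "#!python2" (or "py -2 script.py") would select 2.x.
import sys

print(sys.version_info.major)
```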

2015-01-17 2:45 GMT+01:00 Terry Reedy <tjreedy at udel.edu>:

> On 12/30/2014 3:31 PM, Liam Marsh wrote:
>
>  /the pep 397 says that any python script is able to choose the language
>> version which will run it, between all  the versions installed on the
>> computer, using on windows a launcher in the "C:\windows" folder./
>>
>> can the idle version be chosen like this too, or can the idle "run"
>> command do it?
>>
>
> Questions about using Python, including Idle, should be directed to
> python-list. Quick answers to what seem to be separate questions.
>
> 1. At the command line, one selects the version of a module to run by
> selecting the version of python to run.  Idle is no different from any
> other module intended to be directly run.  For me,
>
> C:\Users\Terry>py -2 -m idlelib.idle  # starts Idle in 2.7.9
>
> C:\Users\Terry>py -3 -m idlelib  # starts Idle in 3.4.2
>
> On Windows, I pin the Start -> Pythonx.y -> Idle x.y icons to my task bar
> so I can select either python + idle combination with a mouse click.
>
> 2. Selecting F5 or Run-module in an Idle editor runs the editor buffer
> contents with the same version of python as is running Idle.  I have
> considered adding 'Run with x.y' options according to what is installed,
> but that is pie-in-the-sky at the moment.
>
> --
> Terry Jan Reedy
>
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>

From victor.stinner at gmail.com  Mon Feb 23 00:51:45 2015
From: victor.stinner at gmail.com (Victor Stinner)
Date: Mon, 23 Feb 2015 00:51:45 +0100
Subject: [Python-ideas] Adding ziplib
In-Reply-To: <CAO41-mNr1=z_JT=aKjBbOZzrYV5Uc=GG_P5No5mcgNNVMVMcOw@mail.gmail.com>
References: <CAO41-mNr1=z_JT=aKjBbOZzrYV5Uc=GG_P5No5mcgNNVMVMcOw@mail.gmail.com>
Message-ID: <CAMpsgwbG2cTdK1FeO6_bhe5PQm4Yo_nkyXe6Ac34Ty7en3+cnA@mail.gmail.com>

For simple use cases, there is shutil:
https://docs.python.org/dev/library/shutil.html#archiving-operations
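For illustration, a minimal sketch of those helpers (paths and the "gztar"
format choice here are just examples): one make_archive() call covers zip
and tar+gzip alike, which is exactly the consistency the original post asks
for.

```python
import os
import shutil
import tempfile

# Build a small directory tree, archive it, then unpack it elsewhere.
# shutil.make_archive() picks the right backend (zipfile/tarfile) for you.
src = tempfile.mkdtemp()
with open(os.path.join(src, "hello.txt"), "w") as f:
    f.write("hi")

base = os.path.join(tempfile.mkdtemp(), "backup")
archive = shutil.make_archive(base, "gztar", root_dir=src)  # backup.tar.gz

dest = tempfile.mkdtemp()
shutil.unpack_archive(archive, dest)  # format inferred from the extension
print(open(os.path.join(dest, "hello.txt")).read())  # hi
```

Switching from tar+gzip to zip is just a matter of passing "zip" instead of
"gztar"; the rest of the code is unchanged.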

Victor
On 17 Feb 2015 00:11, "Ryan Gonzalez" <rymg19 at gmail.com> wrote:

> Python has support for several compression methods, which is great...sort
> of.
>
> Recently, I had to port an application that used zips to use tar+gzip. It
> *sucked*. Why? Inconsistency.
>
> For instance, zipfile uses write to add files, tar uses add. Then, tar has
> addfile (which confusingly does not add a file), and zip has writestr, for
> which a tar alternative has been proposed on the bug tracker
> <http://bugs.python.org/issue22208> but hasn't had any activity for a
> while.
>
> Then, zipfile and tarfile's extract methods have slightly different
> arguments. zipfile has namelist, and tarfile has getnames. It's all a mess.
>
> The idea is to reorganize Python's zip support just like the sha and md5
> modules were put into the hashlib module. Name? ziplib.
>
> It would have a few ABCs:
>
> - an ArchiveFile class that ZipFile and TarFile would derive from.
> - a BasicArchiveFile class that ArchiveFile extends. bz2, lzma, and
> (maybe) gzip would be derived from this one.
> - ArchiveFileInfo, which ZipInfo and TarInfo would derive from.
> ArchiveFile and ArchiveFileInfo go hand-in-hand.
>
> Now, the APIs would be more consistent. Of course, some have methods that
> others don't have, but the usage would be easier. zlib would be completely
> separate, since it has its own API and all.
>
> I'm not quite sure how gzip fits into this, since it has a very minimal
> API.
>
> Thoughts?
>
> --
> Ryan
> If anybody ever asks me why I prefer C++ to C, my answer will be simple:
> "It's becauseslejfp23(@#Q*(E*EIdc-SEGFAULT. Wait, I don't think that was
> nul-terminated."
> Personal reality distortion fields are immune to contradictory evidence. -
> srean
> Check out my website: http://kirbyfan64.github.io/
>

From tjreedy at udel.edu  Mon Feb 23 07:56:14 2015
From: tjreedy at udel.edu (Terry Reedy)
Date: Mon, 23 Feb 2015 01:56:14 -0500
Subject: [Python-ideas] pep 397
In-Reply-To: <CACPPHzuaOyjjkUpwqYyaqEBWTkON24bi76sEcPestOpE1x=HJw@mail.gmail.com>
References: <CACPPHzsuATV4Gkn2fCE-P3Tr4frjQ9TuEATU5cg6q3vzR9pS0w@mail.gmail.com>
 <m9cerb$tst$1@ger.gmane.org>
 <CACPPHzuaOyjjkUpwqYyaqEBWTkON24bi76sEcPestOpE1x=HJw@mail.gmail.com>
Message-ID: <mceiui$ram$1@ger.gmane.org>

On 2/22/2015 12:36 PM, Liam Marsh wrote:
> "On Windows, I pin the Start -> Pythonx.y -> Idle x.y icons to my task
> bar so I can select either python + idle combination with a mouse click."
> yes, but this doesn't fix the "right click -> edit with idle" problem

As far as I know, that may be a Windows-only feature.  In any case, it 
is an installation feature, not a Python feature.  (You could 
potentially change it yourself.)  Feel free to open an issue on the 
tracker to change the Windows installer to install 'Edit with Idle x.y' 
for each Python version installed and add steve.dower as nosy.

-- 
Terry Jan Reedy


From victor.stinner at gmail.com  Mon Feb 23 11:44:59 2015
From: victor.stinner at gmail.com (Victor Stinner)
Date: Mon, 23 Feb 2015 11:44:59 +0100
Subject: [Python-ideas] Hooks into the IO system to intercept raw file
	reads/writes
In-Reply-To: <CACac1F-=uuck9YkzSW2=pA-6DsZfwut_7p-2T1PsDbdF4S5Myw@mail.gmail.com>
References: <CACac1F8Nm9Sp+Hs1O+TgOJfsDDTRttomUB8DoZWHr7UvEb3E2g@mail.gmail.com>
 <CAP7+vJJYS6GW1SEf3Y=ESi9e3zW5usXyWzoX6j1e2ACjbZNghw@mail.gmail.com>
 <CACac1F9SXRT88-WxpiF5j6QsPOzCbu2KGAAWCQSZn3J70BF0xg@mail.gmail.com>
 <CAP7+vJLaCDQUgN9u+wkFrkC3zfRrE=WG-78PsL5R4Bw__u3Mtg@mail.gmail.com>
 <CACac1F-ZZzBK4aO=qmD=o4zyNAZhy2bHNcOTuifTdaL7jdvKeQ@mail.gmail.com>
 <54D1D92E.7050102@canterbury.ac.nz>
 <CACac1F_-nm95YaGUrccK+mnpndGvzw6Wfsb4p=teZoFvObUG=w@mail.gmail.com>
 <CACac1F8AO2NHrDmLqa=c1165KZ3HaxTWd5_NqT=dehxJ5cRD2g@mail.gmail.com>
 <54D2034B.2000205@canterbury.ac.nz>
 <CACac1F_3TpHnbcFaf6TLHtpN1AOYaHvfdUPKEVJuhY+7wpEHhw@mail.gmail.com>
 <54D27F33.3050204@canterbury.ac.nz>
 <CAMpsgwa3yoe2+EhnDidF3zFWgShMErkmt8ECEiYh_76G-KYvnw@mail.gmail.com>
 <CACac1F8neUCGL7fx6LTVM5zhBQOD=LeymO_ZtDukWVs8jTUiOA@mail.gmail.com>
 <CAMpsgwYf9kfT5HcdjMXyiVRUuK1W-6iwimrj00kSPKwsVvnEdg@mail.gmail.com>
 <CACac1F-=uuck9YkzSW2=pA-6DsZfwut_7p-2T1PsDbdF4S5Myw@mail.gmail.com>
Message-ID: <CAMpsgwYnzORbDg52Y9LooPwZ4Get4TgvM=aiO15FAeCes4-Zpw@mail.gmail.com>

2015-02-10 15:49 GMT+01:00 Paul Moore <p.f.moore at gmail.com>:
>> Which code samples? I tried to ensure that all code snippets in
>> asyncio doc and all examples in Tulip repository call loop.close().
>
> For example, in
> https://docs.python.org/dev/library/asyncio-dev.html#detect-exceptions-never-consumed
> the first example doesn't close the loop. Neither of the fixes given
> close the loop either.

Oh ok, I forgot two examples. It's now fixed:
https://hg.python.org/cpython/rev/05bd5ec8365e

Tell me if you see more examples where I forgot to explicitly close
the event loop.

> Hmm, so if I write a server and hit Ctrl-C to exit it, what happens?
> That's how non-asyncio things like http.server typically let the user
> quit.

When possible, it's better to register a signal handler that asks the
event loop to stop.
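A rough sketch of that pattern (Unix-only, since add_signal_handler is not
available on Windows event loops; the self-sent SIGTERM at the end is only
there so the example terminates on its own):

```python
import asyncio
import os
import signal

loop = asyncio.new_event_loop()

# Stop the loop cleanly on SIGINT/SIGTERM instead of letting
# KeyboardInterrupt tear down run_forever() at an arbitrary point.
for sig in (signal.SIGINT, signal.SIGTERM):
    loop.add_signal_handler(sig, loop.stop)

# Demonstration only: send ourselves SIGTERM shortly after starting.
loop.call_later(0.05, os.kill, os.getpid(), signal.SIGTERM)
try:
    loop.run_forever()
finally:
    loop.close()
```

In a real server you would drop the call_later() line and let Ctrl-C (or a
service manager's SIGTERM) trigger the orderly shutdown.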

(Oh, I forgot this old thread about hooking I/O! You should start a
new thread if you would like to talk about asyncio!)

Victor

From victor.stinner at gmail.com  Mon Feb 23 11:49:43 2015
From: victor.stinner at gmail.com (Victor Stinner)
Date: Mon, 23 Feb 2015 11:49:43 +0100
Subject: [Python-ideas] A send() built-in function to drive coroutines
In-Reply-To: <CALxg4FUe2-KvzDoh2G=u2ZDiUoLrWFPg8+AFVFA8LNcE+guFXQ@mail.gmail.com>
References: <CALxg4FUe2-KvzDoh2G=u2ZDiUoLrWFPg8+AFVFA8LNcE+guFXQ@mail.gmail.com>
Message-ID: <CAMpsgwZARxo+c5AAqfs=jfVrGmycD1uMtk2ciw89yTwAWKCLqw@mail.gmail.com>

2015-02-16 13:53 GMT+01:00 Luciano Ramalho <luciano at ramalho.org>:
> At a high level, the behavior of send() would be like this:
>
> def send(coroutine, value):
>     if inspect.getgeneratorstate(coroutine) == 'GEN_CREATED':
>         next(coroutine)
>     coroutine.send(value)

It's strange to have to sometimes run one iteration of the generator and
sometimes two.

asyncio.Task is a nice wrapper on top of coroutines; you never call
coro.send() explicitly. It makes coroutines easier to use.
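To illustrate the point, a minimal sketch of letting the event loop drive a
coroutine to completion with no explicit send() or priming in user code
(written with the async/await syntax and asyncio.run() from later Python
releases, which postdate this thread):

```python
import asyncio

async def add_one(x):
    # The event loop resumes this coroutine itself; the caller never
    # touches send() and never has to worry about priming.
    await asyncio.sleep(0)
    return x + 1

# asyncio.run() wraps the coroutine in a Task and runs it to completion.
result = asyncio.run(add_one(41))
print(result)  # 42
```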

Victor

From luciano at ramalho.org  Mon Feb 23 12:23:29 2015
From: luciano at ramalho.org (Luciano Ramalho)
Date: Mon, 23 Feb 2015 08:23:29 -0300
Subject: [Python-ideas] A send() built-in function to drive coroutines
In-Reply-To: <CAMpsgwZARxo+c5AAqfs=jfVrGmycD1uMtk2ciw89yTwAWKCLqw@mail.gmail.com>
References: <CALxg4FUe2-KvzDoh2G=u2ZDiUoLrWFPg8+AFVFA8LNcE+guFXQ@mail.gmail.com>
 <CAMpsgwZARxo+c5AAqfs=jfVrGmycD1uMtk2ciw89yTwAWKCLqw@mail.gmail.com>
Message-ID: <CALxg4FVZx9iOsJ5aCYMDwrOADC5LEkcaKn6PWo6YjXnEBegHYA@mail.gmail.com>

On Mon, Feb 23, 2015 at 7:49 AM, Victor Stinner
<victor.stinner at gmail.com> wrote:
> 2015-02-16 13:53 GMT+01:00 Luciano Ramalho <luciano at ramalho.org>:
>> At a high level, the behavior of send() would be like this:
>>
>> def send(coroutine, value):
>>     if inspect.getgeneratorstate(coroutine) == 'GEN_CREATED':
>>         next(coroutine)
>>     coroutine.send(value)
>
> It's strange to have to sometimes run one iteration of the generator,
> sometimes two.

The idea is to handle generators that may or may not be primed. I
updated that snippet, it now reads like this:

https://gist.github.com/ramalho/c1f7df10308a4bd67198#file-send_builtin-py-L43

> asyncio.Task is a nice wrapper on top of coroutines, you never use
> coro.send() explicitly. It makes coroutines easier to use.

Yes it is, thanks! My intent was to build something that was not tied
to the asyncio event loop, to make coroutines in general easier to
use.

Thanks for your response, Victor.

Best,

Luciano

-- 
Luciano Ramalho
Twitter: @ramalhoorg

Professor em: http://python.pro.br
Twitter: @pythonprobr

From victor.stinner at gmail.com  Mon Feb 23 14:12:00 2015
From: victor.stinner at gmail.com (Victor Stinner)
Date: Mon, 23 Feb 2015 14:12:00 +0100
Subject: [Python-ideas] A send() built-in function to drive coroutines
In-Reply-To: <CALxg4FVZx9iOsJ5aCYMDwrOADC5LEkcaKn6PWo6YjXnEBegHYA@mail.gmail.com>
References: <CALxg4FUe2-KvzDoh2G=u2ZDiUoLrWFPg8+AFVFA8LNcE+guFXQ@mail.gmail.com>
 <CAMpsgwZARxo+c5AAqfs=jfVrGmycD1uMtk2ciw89yTwAWKCLqw@mail.gmail.com>
 <CALxg4FVZx9iOsJ5aCYMDwrOADC5LEkcaKn6PWo6YjXnEBegHYA@mail.gmail.com>
Message-ID: <CAMpsgwYuet2EDzUWJVmZKw5cCHjPa33pwiTYJAZmnatggM0ZnQ@mail.gmail.com>

The use case for your send() function is unclear to me. Why not use
send() directly?

The inspect.getgeneratorstate(generator) == 'GEN_CREATED' test looks weird
(why do you need it?). You should already know whether the generator has
started or not.

Victor

2015-02-23 12:23 GMT+01:00 Luciano Ramalho <luciano at ramalho.org>:
> On Mon, Feb 23, 2015 at 7:49 AM, Victor Stinner
> <victor.stinner at gmail.com> wrote:
>> 2015-02-16 13:53 GMT+01:00 Luciano Ramalho <luciano at ramalho.org>:
>>> At a high level, the behavior of send() would be like this:
>>>
>>> def send(coroutine, value):
>>>     if inspect.getgeneratorstate(coroutine) == 'GEN_CREATED':
>>>         next(coroutine)
>>>     coroutine.send(value)
>>
>> It's strange to have to sometimes run one iteration of the generator,
>> sometimes two.
>
> The idea is to handle generators that may or may not be primed. I
> updated that snippet, it now reads like this:
>
> https://gist.github.com/ramalho/c1f7df10308a4bd67198#file-send_builtin-py-L43
>
>> asyncio.Task is a nice wrapper on top of coroutines, you never use
>> coro.send() explicitly. It makes coroutines easier to use.
>
> Yes it is, thanks! My intent was to build something that was not tied
> to the asyncio event loop, to make coroutines in general easier to
> use.
>
> Thanks for your response, Victor.
>
> Best,
>
> Luciano
>
> --
> Luciano Ramalho
> Twitter: @ramalhoorg
>
> Professor em: http://python.pro.br
> Twitter: @pythonprobr

From ncoghlan at gmail.com  Mon Feb 23 14:16:24 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 23 Feb 2015 23:16:24 +1000
Subject: [Python-ideas] A way out of Meta-hell (was: A (meta)class
	algebra)
In-Reply-To: <CAH0mxTQ3g0jg_5PKsp_xS4Ut_BFVReXxc_HgNUemO2xgmUSLeQ@mail.gmail.com>
References: <CAK9R32TNjkkJpxA6LC5GqU0digZv9TQZ=wuveA+CvjaxqBD50w@mail.gmail.com>
 <CA+=+wqAz6C5_QCLe2Cfc-uFi-5hWP7B-M9Q2yqi2K75UHDstcQ@mail.gmail.com>
 <CAK9R32RovaEawxzFf03Kq1f+WgC4NOvhWhxjLOwjyaSt90Nr5A@mail.gmail.com>
 <CADiSq7d0APs4c7y1pUJhdPQtmLssCdGaAfLVTe7NaZJX4960Zw@mail.gmail.com>
 <CAK9R32Ts=8YhWpyKVhJGrnD_2US0S6B+nZAEokRtz6s8ysnQcQ@mail.gmail.com>
 <CADiSq7dQAEfGiUY8wznd+p5KxBn6Fm3MXFTjDAddFvk55iUaGQ@mail.gmail.com>
 <CAK9R32R-PGho__+pfK34nhiUh4x-qMk7dYSYAjB2vskS=izmMQ@mail.gmail.com>
 <CADiSq7f3qNJgS=C6A_-6Rt1LCC=mJQWG7w1Oj-uWCrdF=suqyA@mail.gmail.com>
 <CAK9R32S6eM3zfAgSp1hnZLLgJ8aFcY8DPcfWWxX6f8jfB63gsw@mail.gmail.com>
 <CADiSq7fCU0xN68aFA+2FEkprx1wE6vy7uUz9o85h+nq3udp-jg@mail.gmail.com>
 <CAK9R32QrrCgON=jaQak9_OS_CJHHhik6WwjVr+SkgAw406hQyA@mail.gmail.com>
 <CADiSq7dP7fgBV2S_uUJJJTGXEruxmXkGSe1LeSTRo_JryJz50Q@mail.gmail.com>
 <CAK9R32TzJDSnSnREtrf4ACO1xfjLYJ6nFPnpsX4j1jjujSMHDg@mail.gmail.com>
 <CADiSq7d-wv_VapbAF6o74wwNYF9bZ9MSf-QWZ5C7NZdMPXNsPQ@mail.gmail.com>
 <CADiSq7cH3dqfx1RwYm2sutbrazXNW+8SyEPPyrBg1_jOBiAyfQ@mail.gmail.com>
 <CAH0mxTQ3g0jg_5PKsp_xS4Ut_BFVReXxc_HgNUemO2xgmUSLeQ@mail.gmail.com>
Message-ID: <CADiSq7feTETydem8Gdae4mS+0rBJZwDA7pwUS=tsK7YuXQwmEQ@mail.gmail.com>

On 23 February 2015 at 01:15, Joao S. O. Bueno <jsbueno at python.org.br> wrote:
> On 22 February 2015 at 11:54, Nick Coghlan <ncoghlan at gmail.com> wrote:
>>
>> Ah, thank you: this discussion kicked a new naming idea lose in my
>> brain, so I just pushed an update to PEP 422 that renames the proposed
>> __init_class__ hook to be called __autodecorate__ instead.
>>
>> "It's an implicitly invoked class decorator that gets inherited by
>> subclasses" is really the best way to conceive of the proposed hook,
>> so having "init" as part of the name not only made it hard to remember
>> whether the hook was __init_class__ or __class_init__, it also got
>> people thinking along completely the wrong lines.
>>
>> By contrast, the "auto" in "autodecorate" only makes sense as a
>> prefix, and the inclusion of the word "decorate" in the name means it
>> should be far more effective at triggering the reader's "class
>> decorator" mental model rather than their "class initialisation" one.
>
> Why not just "__decorate__" instead?

The first reason is because the immediate question that springs to
mind for me with that name is "decorate what?". The "auto" prefix
hints at the implicit nature of the decoration, as well as the fact
the decoration is inherited.

The second is because the hook is only relevant to the new implicit
decoration proposal, and not to decorators in general. Calling the
hook by the more general name __decorate__ suggests that it may also
play a role in explicit decorator invocation, and that's not the case.

Regards,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From luciano at ramalho.org  Mon Feb 23 14:22:05 2015
From: luciano at ramalho.org (Luciano Ramalho)
Date: Mon, 23 Feb 2015 10:22:05 -0300
Subject: [Python-ideas] A send() built-in function to drive coroutines
In-Reply-To: <CAMpsgwYuet2EDzUWJVmZKw5cCHjPa33pwiTYJAZmnatggM0ZnQ@mail.gmail.com>
References: <CALxg4FUe2-KvzDoh2G=u2ZDiUoLrWFPg8+AFVFA8LNcE+guFXQ@mail.gmail.com>
 <CAMpsgwZARxo+c5AAqfs=jfVrGmycD1uMtk2ciw89yTwAWKCLqw@mail.gmail.com>
 <CALxg4FVZx9iOsJ5aCYMDwrOADC5LEkcaKn6PWo6YjXnEBegHYA@mail.gmail.com>
 <CAMpsgwYuet2EDzUWJVmZKw5cCHjPa33pwiTYJAZmnatggM0ZnQ@mail.gmail.com>
Message-ID: <CALxg4FV5h=Y=x3A2wsKyTp_jZAsU8Js4Ze2ZK+EZA3hLsMGFHg@mail.gmail.com>

On Mon, Feb 23, 2015 at 10:12 AM, Victor Stinner
<victor.stinner at gmail.com> wrote:
> The use case for your send() function is unclear to me. Why not use
> send() directly?

Because the generator may not have started yet, so gen.send() would fail.

> inspect.getgeneratorstate(generator) == 'GEN_CREATED' test looks weird
> (why do you need it?).

To know that the generator must be primed.

> You should already know if the generator
> started or not.

I would, if I had written the generator myself. But my code may need
to work with a generator that was created by some code that I do not
control.

Thanks for your interest, Victor.

Best,

Luciano




>
> Victor
>
> 2015-02-23 12:23 GMT+01:00 Luciano Ramalho <luciano at ramalho.org>:
>> On Mon, Feb 23, 2015 at 7:49 AM, Victor Stinner
>> <victor.stinner at gmail.com> wrote:
>>> 2015-02-16 13:53 GMT+01:00 Luciano Ramalho <luciano at ramalho.org>:
>>>> At a high level, the behavior of send() would be like this:
>>>>
>>>> def send(coroutine, value):
>>>>     if inspect.getgeneratorstate(coroutine) == 'GEN_CREATED':
>>>>         next(coroutine)
>>>>     coroutine.send(value)
>>>
>>> It's strange to have to sometimes run one iteration of the generator,
>>> sometimes two.
>>
>> The idea is to handle generators that may or may not be primed. I
>> updated that snippet, it now reads like this:
>>
>> https://gist.github.com/ramalho/c1f7df10308a4bd67198#file-send_builtin-py-L43
>>
>>> asyncio.Task is a nice wrapper on top of coroutines, you never use
>>> coro.send() explicitly. It makes coroutines easier to use.
>>
>> Yes it is, thanks! My intent was to build something that was not tied
>> to the asyncio event loop, to make coroutines in general easier to
>> use.
>>
>> Thanks for your response, Victor.
>>
>> Best,
>>
>> Luciano
>>
>> --
>> Luciano Ramalho
>> Twitter: @ramalhoorg
>>
>> Professor em: http://python.pro.br
>> Twitter: @pythonprobr



-- 
Luciano Ramalho
Twitter: @ramalhoorg

Professor em: http://python.pro.br
Twitter: @pythonprobr

From ron3200 at gmail.com  Mon Feb 23 16:55:39 2015
From: ron3200 at gmail.com (Ron Adam)
Date: Mon, 23 Feb 2015 10:55:39 -0500
Subject: [Python-ideas] A send() built-in function to drive coroutines
In-Reply-To: <CALxg4FV5h=Y=x3A2wsKyTp_jZAsU8Js4Ze2ZK+EZA3hLsMGFHg@mail.gmail.com>
References: <CALxg4FUe2-KvzDoh2G=u2ZDiUoLrWFPg8+AFVFA8LNcE+guFXQ@mail.gmail.com>
 <CAMpsgwZARxo+c5AAqfs=jfVrGmycD1uMtk2ciw89yTwAWKCLqw@mail.gmail.com>
 <CALxg4FVZx9iOsJ5aCYMDwrOADC5LEkcaKn6PWo6YjXnEBegHYA@mail.gmail.com>
 <CAMpsgwYuet2EDzUWJVmZKw5cCHjPa33pwiTYJAZmnatggM0ZnQ@mail.gmail.com>
 <CALxg4FV5h=Y=x3A2wsKyTp_jZAsU8Js4Ze2ZK+EZA3hLsMGFHg@mail.gmail.com>
Message-ID: <mcfihs$e4s$1@ger.gmane.org>



On 02/23/2015 08:22 AM, Luciano Ramalho wrote:
> On Mon, Feb 23, 2015 at 10:12 AM, Victor Stinner
> <victor.stinner at gmail.com>  wrote:
>> >The use case for your send() function is unclear to me. Why not use
>> >send() directly?
> Because the generator may not have started yet, so gen.send() would fail.
>
>> >inspect.getgeneratorstate(generator) == 'GEN_CREATED' test looks weird
>> >(why do you need it?).
> To know that the generator must be primed.
>
>> >You should already know if the generator
>> >started or not.

> I would, if I had written the generator myself. But my code may need
> to work with a generator that was created by some code that I do not
> control.

When using coroutines, they are generally run by a co-routine framework. 
That is how they become "CO"-routines rather than just generators that you 
can send values to.  Co-routines generally use yield to pause/continue 
and to communicate with the framework, not to send and receive data.  In 
most cases I've seen, co-routines look and work very much like functions 
except they have yield put in places so they cooperate with the framework 
that is running them.  That framework handles starting the co-routine.


I think what you are referring to is generators that are both producers 
and consumers, not co-routines.

Cheers,
    Ron

From luciano at ramalho.org  Mon Feb 23 20:10:15 2015
From: luciano at ramalho.org (Luciano Ramalho)
Date: Mon, 23 Feb 2015 16:10:15 -0300
Subject: [Python-ideas] A send() built-in function to drive coroutines
In-Reply-To: <mcfihs$e4s$1@ger.gmane.org>
References: <CALxg4FUe2-KvzDoh2G=u2ZDiUoLrWFPg8+AFVFA8LNcE+guFXQ@mail.gmail.com>
 <CAMpsgwZARxo+c5AAqfs=jfVrGmycD1uMtk2ciw89yTwAWKCLqw@mail.gmail.com>
 <CALxg4FVZx9iOsJ5aCYMDwrOADC5LEkcaKn6PWo6YjXnEBegHYA@mail.gmail.com>
 <CAMpsgwYuet2EDzUWJVmZKw5cCHjPa33pwiTYJAZmnatggM0ZnQ@mail.gmail.com>
 <CALxg4FV5h=Y=x3A2wsKyTp_jZAsU8Js4Ze2ZK+EZA3hLsMGFHg@mail.gmail.com>
 <mcfihs$e4s$1@ger.gmane.org>
Message-ID: <CALxg4FVB4Jpmnbvoaz6TAMpRZxOGQMj01ZGXbLCLofH2b_=VHg@mail.gmail.com>

On Mon, Feb 23, 2015 at 12:55 PM, Ron Adam <ron3200 at gmail.com> wrote:
> When using coroutines, they are generally run by a co-routine framework.
> That is how they become "CO"-routines rather than just generators that you
> can send values to.  Co-routines generally use the yield to pause/continue
> and to communicate to the framework, and not to get and receive data.  In
> most case I've seen co-routines look and work very much like functions
> except they have yield put in places so they cooperate with the frame work
> that is running them.  That frame work handles the starting co-routine.
>
>
> I think what you are referring to is generators that are both providers and
> consumers, and not co-routines.

Thanks, Ron, this was very helpful, and makes perfect sense to me.

The framework also handles the other feature that distinguishes
coroutines from consuming generators: returning values.

In a sense, coroutines are not first-class citizens in Python, only
generators are. Coroutines need some supporting framework to be
useful. That's partly what I was trying to address with my `send()`
function. I now see the issue is much deeper than I previously
thought.

Thanks for the discussion, folks!

Best,

Luciano


-- 
Luciano Ramalho
Twitter: @ramalhoorg

Professor em: http://python.pro.br
Twitter: @pythonprobr

From abarnert at yahoo.com  Mon Feb 23 22:27:07 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Mon, 23 Feb 2015 13:27:07 -0800
Subject: [Python-ideas] A send() built-in function to drive coroutines
In-Reply-To: <CALxg4FVB4Jpmnbvoaz6TAMpRZxOGQMj01ZGXbLCLofH2b_=VHg@mail.gmail.com>
References: <CALxg4FUe2-KvzDoh2G=u2ZDiUoLrWFPg8+AFVFA8LNcE+guFXQ@mail.gmail.com>
 <CAMpsgwZARxo+c5AAqfs=jfVrGmycD1uMtk2ciw89yTwAWKCLqw@mail.gmail.com>
 <CALxg4FVZx9iOsJ5aCYMDwrOADC5LEkcaKn6PWo6YjXnEBegHYA@mail.gmail.com>
 <CAMpsgwYuet2EDzUWJVmZKw5cCHjPa33pwiTYJAZmnatggM0ZnQ@mail.gmail.com>
 <CALxg4FV5h=Y=x3A2wsKyTp_jZAsU8Js4Ze2ZK+EZA3hLsMGFHg@mail.gmail.com>
 <mcfihs$e4s$1@ger.gmane.org>
 <CALxg4FVB4Jpmnbvoaz6TAMpRZxOGQMj01ZGXbLCLofH2b_=VHg@mail.gmail.com>
Message-ID: <78800E96-4522-48D3-858F-434EECD2FDD4@yahoo.com>

On Feb 23, 2015, at 11:10, Luciano Ramalho <luciano at ramalho.org> wrote:

> On Mon, Feb 23, 2015 at 12:55 PM, Ron Adam <ron3200 at gmail.com> wrote:
>> When using coroutines, they are generally run by a co-routine framework.
>> That is how they become "CO"-routines rather than just generators that you
>> can send values to.  Co-routines generally use yield to pause/continue
>> and to communicate with the framework, not to send and receive data.  In
>> most cases I've seen, co-routines look and work very much like functions
>> except they have yield put in places so they cooperate with the framework
>> that is running them.  That framework handles starting the co-routine.
>> 
>> 
>> I think what you are referring to is generators that are both producers
>> and consumers, not co-routines.
> 
> Thanks, Ron, this was very helpful, and makes perfect sense to me.
> 
> The framework also handles the other feature that distinguishes
> coroutines from consuming generators: returning values.
> 
> In a sense, coroutines are not first-class citizens in Python, only
> generators are. Coroutines need some supporting framework to be
> useful.

No, Python has first-class coroutines. 

And coroutines don't need a supporting framework to be useful. For example, a consumer can kick off a producer and the two of them can yield back and forth with no outside help. Or you can compose them into a pipeline, or build a manual state machine, or do all the other usual coroutine things with them.
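For instance, such a no-framework pairing might look like this hypothetical
producer/consumer sketch, where the caller wires the two together by hand:

```python
def producer(n):
    # A plain generator that yields some values.
    for i in range(n):
        yield i * 10

def summing_consumer():
    # A coroutine: receives values via send() and yields a running total.
    total = 0
    while True:
        value = yield total
        total += value

# No scheduler anywhere: the driving loop below is all the "framework".
consumer = summing_consumer()
next(consumer)  # prime it so send() is legal
result = 0
for item in producer(3):    # items: 0, 10, 20
    result = consumer.send(item)
print(result)  # 30
```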

But coroutines _do_ need some supporting framework to be useful as general semi-implicit cooperative threads (a la greenlet servers, classic Mac/Windows/wx GUI apps built around WaitNextEvent, non-parallel actor-model designs, etc.). If each coroutine has no idea who's supposed to run next, you need a scheduler framework to keep track of that.

But because Python has real coroutines, that framework can be written in pure Python. As with asyncio.

> That's partly what I was trying to address with my `send()`
> function. I now see the issue is much deeper than I previously
> thought.
> 
> Thanks for the discussion, folks!
> 
> Best,
> 
> Luciano
> 
> 
> -- 
> Luciano Ramalho
> Twitter: @ramalhoorg
> 
> Professor em: http://python.pro.br
> Twitter: @pythonprobr

From abarnert at yahoo.com  Mon Feb 23 22:40:41 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Mon, 23 Feb 2015 13:40:41 -0800
Subject: [Python-ideas] A send() built-in function to drive coroutines
In-Reply-To: <78800E96-4522-48D3-858F-434EECD2FDD4@yahoo.com>
References: <CALxg4FUe2-KvzDoh2G=u2ZDiUoLrWFPg8+AFVFA8LNcE+guFXQ@mail.gmail.com>
 <CAMpsgwZARxo+c5AAqfs=jfVrGmycD1uMtk2ciw89yTwAWKCLqw@mail.gmail.com>
 <CALxg4FVZx9iOsJ5aCYMDwrOADC5LEkcaKn6PWo6YjXnEBegHYA@mail.gmail.com>
 <CAMpsgwYuet2EDzUWJVmZKw5cCHjPa33pwiTYJAZmnatggM0ZnQ@mail.gmail.com>
 <CALxg4FV5h=Y=x3A2wsKyTp_jZAsU8Js4Ze2ZK+EZA3hLsMGFHg@mail.gmail.com>
 <mcfihs$e4s$1@ger.gmane.org>
 <CALxg4FVB4Jpmnbvoaz6TAMpRZxOGQMj01ZGXbLCLofH2b_=VHg@mail.gmail.com>
 <78800E96-4522-48D3-858F-434EECD2FDD4@yahoo.com>
Message-ID: <2B8C3FA0-D512-4F84-9BB7-082300435428@yahoo.com>

On Feb 23, 2015, at 13:27, Andrew Barnert <abarnert at yahoo.com.dmarc.invalid> wrote:

> On Feb 23, 2015, at 11:10, Luciano Ramalho <luciano at ramalho.org> wrote:
> 
>> On Mon, Feb 23, 2015 at 12:55 PM, Ron Adam <ron3200 at gmail.com> wrote:
>>> When using coroutines, they are generally run by a co-routine framework.
>>> That is how they become "CO"-routines rather than just generators that you
>>> can send values to.  Co-routines generally use yield to pause/continue
>>> and to communicate with the framework, not to send and receive data.  In
>>> most cases I've seen, co-routines look and work very much like functions
>>> except they have yield put in places so they cooperate with the framework
>>> that is running them.  That framework handles starting the co-routine.
>>> 
>>> 
>>> I think what you are referring to is generators that are both producers
>>> and consumers, not co-routines.
>> 
>> Thanks, Ron, this was very helpful, and makes perfect sense to me.
>> 
>> The framework also handles the other feature that distinguishes
>> coroutines from consuming generators: returning values.
>> 
>> In a sense, coroutines are not first-class citizens in Python, only
>> generators are. Coroutines need some supporting framework to be
>> useful.
> 
> No, Python has first-class coroutines. 

I just realized that you _could_ mean something different here, which would be more accurate:

In a few languages, either the execution stack (e.g., most Smalltalks--and most assembly languages, I guess) or continuations 
(e.g., Scheme, SML) are first-class objects, which means first-class coroutines can be written purely within the language. Which means you don't have to write fully-general coroutines, you can use different limited implementations that better fit your current application (e.g., embedding a scheduling discipline under the covers that acts like a trampoline but is invisible in the app code). Python doesn't have that.

But then I'm not sure how your send would be relevant to that meaning, so I still think you mean something like what I said before, based on a misunderstanding of what first-class coroutines can do.

> And coroutines don't need a supporting framework to be useful. For example, a consumer can kick off a producer and the two of them can yield back and forth with no outside help. Or you can compose them into a pipeline, or build a manual state machine, or do all the other usual coroutine things with them.
> 
> But coroutines _do_ need some supporting framework to be useful as general semi-implicit cooperative threads (a la greenlet servers, classic Mac/Windows/wx GUI apps built around WaitNextEvent, non-parallel actor-model designs, etc.). If each coroutine has no idea who's supposed to run next, you need a scheduler framework to keep track of that.
> 
> But because Python has real coroutines, that framework can be written in pure Python. As with asyncio.
> 
>> That's partly what I was trying to address with my `send()`
>> function. I now see the issue is much deeper than I previously
>> thought.
>> 
>> Thanks for the discussion, folks!
>> 
>> Best,
>> 
>> Luciano
>> 
>> 
>> -- 
>> Luciano Ramalho
>> Twitter: @ramalhoorg
>> 
>> Professor em: http://python.pro.br
>> Twitter: @pythonprobr

From steve at pearwood.info  Tue Feb 24 12:00:22 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Tue, 24 Feb 2015 22:00:22 +1100
Subject: [Python-ideas] A send() built-in function to drive coroutines
In-Reply-To: <CALxg4FV5h=Y=x3A2wsKyTp_jZAsU8Js4Ze2ZK+EZA3hLsMGFHg@mail.gmail.com>
References: <CALxg4FUe2-KvzDoh2G=u2ZDiUoLrWFPg8+AFVFA8LNcE+guFXQ@mail.gmail.com>
 <CAMpsgwZARxo+c5AAqfs=jfVrGmycD1uMtk2ciw89yTwAWKCLqw@mail.gmail.com>
 <CALxg4FVZx9iOsJ5aCYMDwrOADC5LEkcaKn6PWo6YjXnEBegHYA@mail.gmail.com>
 <CAMpsgwYuet2EDzUWJVmZKw5cCHjPa33pwiTYJAZmnatggM0ZnQ@mail.gmail.com>
 <CALxg4FV5h=Y=x3A2wsKyTp_jZAsU8Js4Ze2ZK+EZA3hLsMGFHg@mail.gmail.com>
Message-ID: <20150224110021.GL7655@ando.pearwood.info>

On Mon, Feb 23, 2015 at 10:22:05AM -0300, Luciano Ramalho wrote:
> On Mon, Feb 23, 2015 at 10:12 AM, Victor Stinner
> <victor.stinner at gmail.com> wrote:
> > The use case for your send() function is unclear to me. Why not use
> > send() directly?
> 
> Because the generator may not have started yet, so gen.send() would fail.
> 
> > inspect.getgeneratorstate(generator) == 'GEN_CREATED' test looks weird
> > (why do you need it?).
> 
> To know that the generator must be primed.
> 
> > You should already know if the generator
> > started or not.
> 
> I would, if I had written the generator myself. But my code may need
> to work with a generator that was created by some code that I do not
> control.

I must admit I don't understand the problem here. Here's a 
simple coroutine:

def runningsum():
    total = 0
    x = (yield None)
    while True:
        total += x
        x = (yield total)


If I have a function which expects a *primed* coroutine, and you provide 
an unprimed one, we get an obvious error:

py> def handle_primed(co):
...     for i in (1, 3, 5):
...             y = co.send(i)
...     return y
...
py> handle_primed(runningsum())  # Oops, unprimed!
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 3, in handle_primed
TypeError: can't send non-None value to a just-started generator


And if I have a function which expects an *unprimed* coroutine, and you 
provide a primed one, we will usually get an exception:

py> def handle_unprimed(co):
...     next(co)
...     for i in (1, 3, 5):
...             y = co.send(i)
...     return y
...
py> rs = runningsum()
py> next(rs)  # Oops, primed!
py> handle_unprimed(rs)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 2, in handle_unprimed
  File "<stdin>", line 5, in runningsum
TypeError: unsupported operand type(s) for +=: 'int' and 'NoneType'


I say "usually" because I suppose there are some coroutines which might 
accept being sent None and fail to cause an error. But they should be 
pretty rare, and in this case I think "consenting adults" should apply: 
if I write such a function, I can always inspect the coroutine myself.

So it seems to me that this is a case for documentation, not a built-in 
function. Document what you expect, and then it is up to the caller to 
provide what you ask for.

Have I missed something? Can you give an example of when you would 
legitimately want to accept coroutines regardless of whether they are 
primed or not, but don't want to explicitly inspect the coroutine 
yourself?



-- 
Steve

From aj at erisian.com.au  Tue Feb 24 13:56:49 2015
From: aj at erisian.com.au (Anthony Towns)
Date: Tue, 24 Feb 2015 22:56:49 +1000
Subject: [Python-ideas] A send() built-in function to drive coroutines
In-Reply-To: <20150224110021.GL7655@ando.pearwood.info>
References: <CALxg4FUe2-KvzDoh2G=u2ZDiUoLrWFPg8+AFVFA8LNcE+guFXQ@mail.gmail.com>
 <CAMpsgwZARxo+c5AAqfs=jfVrGmycD1uMtk2ciw89yTwAWKCLqw@mail.gmail.com>
 <CALxg4FVZx9iOsJ5aCYMDwrOADC5LEkcaKn6PWo6YjXnEBegHYA@mail.gmail.com>
 <CAMpsgwYuet2EDzUWJVmZKw5cCHjPa33pwiTYJAZmnatggM0ZnQ@mail.gmail.com>
 <CALxg4FV5h=Y=x3A2wsKyTp_jZAsU8Js4Ze2ZK+EZA3hLsMGFHg@mail.gmail.com>
 <20150224110021.GL7655@ando.pearwood.info>
Message-ID: <CAJS_LCWZ_NavAz5ukcokL6iuAjzWGcQjU+k33ic32jLnB6v4VA@mail.gmail.com>

On 24 February 2015 at 21:00, Steven D'Aprano <steve at pearwood.info> wrote:

> If I have a function which expects a *primed* coroutine, and you provide
> an unprimed one, we get an obvious error:
>
> py> def handle_primed(co):
> ...     for i in (1, 3, 5):
> ...             y = co.send(i)
> ...     return y
> ...
> py> handle_primed(runningsum())  # Oops, unprimed!
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "<stdin>", line 3, in handle_primed
> TypeError: can't send non-None value to a just-started generator
>

Why would you ever want to use that function "unprimed", though? Writing
"next(rs)" or "rs.send(None)" just seems like busywork to me; it's not
something I'd write if I were writing pseudocode, so it's something I wish
Python didn't force me to write in order to use that language feature...

But... you could just use a decorator to make that go away, I think? As in
having a decorator along the lines of:

  def primed_coroutine(f):
      @wraps(f)
      def fprime(*args, **kwargs):
          co = f(*args, **kwargs)
          next(co)
          return co
      return fprime

so that you could write and use a coroutine just by having:

  @primed_coroutine
  def runningsum():
      total = 0
      while True:
          total += (yield total)

  handle_primed(runningsum())

At least, now that I've thought of it, that's how I plan to write any
generator-based coroutines in future...

Cheers,
aj
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150224/8c94f69b/attachment.html>

From steve at pearwood.info  Tue Feb 24 16:54:17 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Wed, 25 Feb 2015 02:54:17 +1100
Subject: [Python-ideas] A send() built-in function to drive coroutines
In-Reply-To: <CAJS_LCWZ_NavAz5ukcokL6iuAjzWGcQjU+k33ic32jLnB6v4VA@mail.gmail.com>
References: <CALxg4FUe2-KvzDoh2G=u2ZDiUoLrWFPg8+AFVFA8LNcE+guFXQ@mail.gmail.com>
 <CAMpsgwZARxo+c5AAqfs=jfVrGmycD1uMtk2ciw89yTwAWKCLqw@mail.gmail.com>
 <CALxg4FVZx9iOsJ5aCYMDwrOADC5LEkcaKn6PWo6YjXnEBegHYA@mail.gmail.com>
 <CAMpsgwYuet2EDzUWJVmZKw5cCHjPa33pwiTYJAZmnatggM0ZnQ@mail.gmail.com>
 <CALxg4FV5h=Y=x3A2wsKyTp_jZAsU8Js4Ze2ZK+EZA3hLsMGFHg@mail.gmail.com>
 <20150224110021.GL7655@ando.pearwood.info>
 <CAJS_LCWZ_NavAz5ukcokL6iuAjzWGcQjU+k33ic32jLnB6v4VA@mail.gmail.com>
Message-ID: <20150224155415.GM7655@ando.pearwood.info>

On Tue, Feb 24, 2015 at 10:56:49PM +1000, Anthony Towns wrote:
> On 24 February 2015 at 21:00, Steven D'Aprano <steve at pearwood.info> wrote:
> 
> > If I have a function which expects a *primed* coroutine, and you provide
> > an unprimed one, we get an obvious error:
[...]
> Why would you ever want to use that function "unprimed", though?

I wouldn't. If I wrote a function that expected a coroutine as argument, 
I would always expect that the caller would prime the coroutine before 
sending it to me. If they forget, they will get an exception. But 
Luciano seems to wish to accept both primed and unprimed coroutines, so 
I'm just following his requirements as I understand them.

[...]
> But... you could just use a decorator to make that go away, I think? As in
> having a decorator along the lines of:
> 
>   def primed_coroutine(f):
>       @wraps(f)
>       def fprime(*args, **kwargs):
>           co = f(*args, **kwargs)
>           next(co)
>           return co
>       return fprime

Indeed. That is exactly what I do in my code, so all my coroutines are 
primed as soon as they are created and I never have to worry about 
them being unprimed. My code is virtually the same as yours, except I 
call the decorator "coroutine".

@coroutine
def running_sum():
    ...


makes it clear that running_sum is a coroutine, not a regular function 
or generator.

Although the decorator is quite simple, I think that *every* user of 
coroutines should be using it. Are there any use-cases for unprimed 
coroutines? Perhaps this decorator should become a built-in, or at least 
in functools?
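For reference, here is a complete, runnable version of that decorator
(essentially Anthony's sketch above, with the functools import it needs
and a usage example):

```python
from functools import wraps

def coroutine(f):
    """Decorator: prime a generator-based coroutine on creation."""
    @wraps(f)
    def primed(*args, **kwargs):
        co = f(*args, **kwargs)
        next(co)  # advance to the first yield so .send() works immediately
        return co
    return primed

@coroutine
def running_sum():
    total = 0
    while True:
        total += yield total

rs = running_sum()
print(rs.send(1))  # 1
print(rs.send(3))  # 4
```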



-- 
Steve

From Steve.Dower at microsoft.com  Tue Feb 24 17:25:17 2015
From: Steve.Dower at microsoft.com (Steve Dower)
Date: Tue, 24 Feb 2015 16:25:17 +0000
Subject: [Python-ideas] A send() built-in function to drive coroutines
In-Reply-To: <20150224155415.GM7655@ando.pearwood.info>
References: <CALxg4FUe2-KvzDoh2G=u2ZDiUoLrWFPg8+AFVFA8LNcE+guFXQ@mail.gmail.com>
 <CAMpsgwZARxo+c5AAqfs=jfVrGmycD1uMtk2ciw89yTwAWKCLqw@mail.gmail.com>
 <CALxg4FVZx9iOsJ5aCYMDwrOADC5LEkcaKn6PWo6YjXnEBegHYA@mail.gmail.com>
 <CAMpsgwYuet2EDzUWJVmZKw5cCHjPa33pwiTYJAZmnatggM0ZnQ@mail.gmail.com>
 <CALxg4FV5h=Y=x3A2wsKyTp_jZAsU8Js4Ze2ZK+EZA3hLsMGFHg@mail.gmail.com>
 <20150224110021.GL7655@ando.pearwood.info>
 <CAJS_LCWZ_NavAz5ukcokL6iuAjzWGcQjU+k33ic32jLnB6v4VA@mail.gmail.com>,
 <20150224155415.GM7655@ando.pearwood.info>
Message-ID: <DM2PR0301MB0734F203585094ABF673B86AF5160@DM2PR0301MB0734.namprd03.prod.outlook.com>

I believe "yield from" is the case where you need it unprimed. We went through a few rounds of this as asyncio was being prototyped, and ended up without priming because of yield from.
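A minimal sketch of the issue (my reconstruction, not asyncio's actual
code): `yield from` advances the subgenerator to its first yield itself, so
a subgenerator that a decorator had already primed would have that first
yield consumed, and the delegation would resume it with None instead:

```python
def averager():
    # left unprimed on purpose: yield from will do the first next() itself
    total = count = 0
    avg = None
    while True:
        x = yield avg
        total += x
        count += 1
        avg = total / count

def delegator():
    # yield from calls next() on the subgenerator to get its first value;
    # had averager() been pre-primed, that call would instead resume it
    # with None, and `total += None` would raise TypeError
    yield from averager()

d = delegator()
next(d)            # primes the delegator, which starts averager()
print(d.send(10))  # 10.0
print(d.send(20))  # 15.0
```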

Cheers,
Steve

Top-posted from my Windows Phone
________________________________
From: Steven D'Aprano<mailto:steve at pearwood.info>
Sent: 2/24/2015 7:55
To: python-ideas at python.org<mailto:python-ideas at python.org>
Subject: Re: [Python-ideas] A send() built-in function to drive coroutines

On Tue, Feb 24, 2015 at 10:56:49PM +1000, Anthony Towns wrote:
> On 24 February 2015 at 21:00, Steven D'Aprano <steve at pearwood.info> wrote:
>
> > If I have a function which expects a *primed* coroutine, and you provide
> > an unprimed one, we get an obvious error:
[...]
> Why would you ever want to use that function "unprimed", though?

I wouldn't. If I wrote a function that expected a coroutine as argument,
I would always expect that the caller would prime the coroutine before
sending it to me. If they forget, they will get an exception. But
Luciano seems to wish to accept both primed and unprimed coroutines, so
I'm just following his requirements as I understand them.

[...]
> But... you could just use a decorator to make that go away, I think? As in
> having a decorator along the lines of:
>
>   def primed_coroutine(f):
>       @wraps(f)
>       def fprime(*args, **kwargs):
>           co = f(*args, **kwargs)
>           next(co)
>           return co
>       return fprime

Indeed. That is exactly what I do in my code, so all my coroutines are
primed as soon as they are created and I never have to worry about
them being unprimed. My code is virtually the same as yours, except I
call the decorator "coroutine".

@coroutine
def running_sum():
    ...


makes it clear that running_sum is a coroutine, not a regular function
or generator.

Although the decorator is quite simple, I think that *every* user of
coroutines should be using it. Are there any use-cases for unprimed
coroutines? Perhaps this decorator should become a built-in, or at least
in functools?



--
Steve
_______________________________________________
Python-ideas mailing list
Python-ideas at python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/

From contact at ionelmc.ro  Wed Feb 25 00:43:08 2015
From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=)
Date: Wed, 25 Feb 2015 01:43:08 +0200
Subject: [Python-ideas] Comparable exceptions
Message-ID: <CANkHFr95jcgQDDeNxrmyiu9guAQ=iZ2jfe=gardHv0qc=Um9hQ@mail.gmail.com>

Hey everyone,

I was writing a test the other day and I wanted to assert that some
function raises some exception, but with some exact value. Turns out you
cannot compare OSError("foobar") to OSError("foobar") as it doesn't have
any __eq__. (it's the same for all exceptions?)

I think exceptions are great candidates to have an __eq__ method - they
already have a __hash__ (that's computed from the contained values) and
they are already containers in a way (they all have the `args` attribute).

Comparing exceptions in a test assertion seems like a good usecase for this.

Thanks,
-- Ionel Cristian Mărieș, blog.ionelmc.ro

From guido at python.org  Wed Feb 25 00:55:20 2015
From: guido at python.org (Guido van Rossum)
Date: Tue, 24 Feb 2015 15:55:20 -0800
Subject: [Python-ideas] Comparable exceptions
In-Reply-To: <CANkHFr95jcgQDDeNxrmyiu9guAQ=iZ2jfe=gardHv0qc=Um9hQ@mail.gmail.com>
References: <CANkHFr95jcgQDDeNxrmyiu9guAQ=iZ2jfe=gardHv0qc=Um9hQ@mail.gmail.com>
Message-ID: <CAP7+vJK5Kttb7M6hivYQHgHLcU9iJGiNQcy==OdqozgDibdJ5g@mail.gmail.com>

Actually, they don't have a __hash__ either -- they just inherit the
default __hash__ from object, just as they inherit __eq__ from object (and
both just use the address of the object). I can see many concerns with
trying to define __eq__ for exceptions in general -- often there are a
variety of fields, not all of which perhaps should be considered when
comparing, and then there are user-defined exceptions which may have other
attributes. I don't know what your use case was, but if, as you say, the
exceptions you care about already have `args` attributes, maybe you should
just compare that (and the type).
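A hypothetical helper along those lines (the name `exc_eq` is mine, not
anything in the stdlib), comparing the concrete type plus `args`:

```python
def exc_eq(a, b):
    # compare exception instances by exact type and by their args tuple,
    # as Guido suggests above
    return type(a) is type(b) and a.args == b.args

assert exc_eq(OSError("foobar"), OSError("foobar"))
assert not exc_eq(OSError("foobar"), ValueError("foobar"))
assert not exc_eq(OSError("foo"), OSError("bar"))
```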

On Tue, Feb 24, 2015 at 3:43 PM, Ionel Cristian Mărieș <contact at ionelmc.ro>
wrote:

> Hey everyone,
>
> I was writing a test the other day and I wanted to assert that some
> function raises some exception, but with some exact value. Turns out you
> cannot compare OSError("foobar") to OSError("foobar") as it doesn't have
> any __eq__. (it's the same for all exceptions?)
>
> I think exceptions are great candidates to have an __eq__ method - they
> already have a __hash__ (that's computed from the contained values) and
> they are already containers in a way (they all have the `args` attribute).
>
> Comparing exceptions in a test assertion seems like a good usecase for
> this.
>
> Thanks,
> -- Ionel Cristian Mărieș, blog.ionelmc.ro
>
>



-- 
--Guido van Rossum (python.org/~guido)

From rosuav at gmail.com  Wed Feb 25 00:57:27 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Wed, 25 Feb 2015 10:57:27 +1100
Subject: [Python-ideas] Comparable exceptions
In-Reply-To: <CANkHFr95jcgQDDeNxrmyiu9guAQ=iZ2jfe=gardHv0qc=Um9hQ@mail.gmail.com>
References: <CANkHFr95jcgQDDeNxrmyiu9guAQ=iZ2jfe=gardHv0qc=Um9hQ@mail.gmail.com>
Message-ID: <CAPTjJmr__0C9ng5pFbLvEv2yXWUEKhBxoQW_MOJZ9aOBqhj=Rw@mail.gmail.com>

On Wed, Feb 25, 2015 at 10:43 AM, Ionel Cristian Mărieș
<contact at ionelmc.ro> wrote:
> I think exceptions are great candidates to have an __eq__ method - they
> already have a __hash__ (that's computed from the contained values) and they
> are already containers in a way (they all have the `args` attribute).

Are you sure their hashes are computed from contained values? I just
tried it, generated two exceptions that should be indistinguishable,
and they had different hashes:

rosuav at sikorsky:~$ python3
Python 3.5.0a0 (default:4709290253e3, Jan 20 2015, 21:48:07)
[GCC 4.7.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> try: open("/root/asdf")
... except OSError as e: e1=e
...
>>> try: open("/root/asdf")
... except OSError as e: e2=e
...
>>> hash(e1)
-9223363249876220580
>>> hash(e2)
-9223363249876220572
>>> type(e1).__hash__
<slot wrapper '__hash__' of 'object' objects>

I suspect the last line implies that there's no __hash__ defined
anywhere in the exception hierarchy, and it's using the default hash
implementation from object(). In any case, those two exceptions appear
to have different hashes, and they have the same args and filename.
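The same thing can be demonstrated without touching the filesystem: two
exceptions built with identical arguments still compare as distinct
objects, because the default `__eq__` inherited from object uses identity:

```python
e1 = OSError("foobar")
e2 = OSError("foobar")

assert e1.args == e2.args == ("foobar",)  # the payloads match...
assert e1 != e2                           # ...but default __eq__ is identity
assert e1 == e1                           # an object only equals itself
```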

ChrisA

From rymg19 at gmail.com  Wed Feb 25 00:57:32 2015
From: rymg19 at gmail.com (Ryan Gonzalez)
Date: Tue, 24 Feb 2015 17:57:32 -0600
Subject: [Python-ideas] Comparable exceptions
In-Reply-To: <CANkHFr95jcgQDDeNxrmyiu9guAQ=iZ2jfe=gardHv0qc=Um9hQ@mail.gmail.com>
References: <CANkHFr95jcgQDDeNxrmyiu9guAQ=iZ2jfe=gardHv0qc=Um9hQ@mail.gmail.com>
Message-ID: <CAO41-mNUMPjrzDDd5xgvyp_Gh25qzezY3Q2eTwMporPVsntNPA@mail.gmail.com>

Doesn't unittest already cover this?

with self.assertRaisesRegexp(OSError, '^foobar$'):
    do_something()


On Tue, Feb 24, 2015 at 5:43 PM, Ionel Cristian Mărieș <contact at ionelmc.ro>
wrote:

> Hey everyone,
>
> I was writing a test the other day and I wanted to assert that some
> function raises some exception, but with some exact value. Turns out you
> cannot compare OSError("foobar") to OSError("foobar") as it doesn't have
> any __eq__. (it's the same for all exceptions?)
>
> I think exceptions are great candidates to have an __eq__ method - they
> already have a __hash__ (that's computed from the contained values) and
> they are already containers in a way (they all have the `args` attribute).
>
> Comparing exceptions in a test assertion seems like a good usecase for
> this.
>
> Thanks,
> -- Ionel Cristian Mărieș, blog.ionelmc.ro
>
>



-- 
Ryan
If anybody ever asks me why I prefer C++ to C, my answer will be simple:
"It's becauseslejfp23(@#Q*(E*EIdc-SEGFAULT. Wait, I don't think that was
nul-terminated."
Personal reality distortion fields are immune to contradictory evidence. -
srean
Check out my website: http://kirbyfan64.github.io/

From contact at ionelmc.ro  Wed Feb 25 01:01:30 2015
From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=)
Date: Wed, 25 Feb 2015 02:01:30 +0200
Subject: [Python-ideas] Comparable exceptions
In-Reply-To: <CAP7+vJK5Kttb7M6hivYQHgHLcU9iJGiNQcy==OdqozgDibdJ5g@mail.gmail.com>
References: <CANkHFr95jcgQDDeNxrmyiu9guAQ=iZ2jfe=gardHv0qc=Um9hQ@mail.gmail.com>
 <CAP7+vJK5Kttb7M6hivYQHgHLcU9iJGiNQcy==OdqozgDibdJ5g@mail.gmail.com>
Message-ID: <CANkHFr85x7Ysb8jNOCjNRFqQTG5ae3XB9hL_bwFfj98=ahdV2w@mail.gmail.com>

Yes, I can compare args, but that becomes extremely inconvenient when
there's a huge data structure and the exception instances are deep down
inside it.

What other problems would there be if we add __eq__ on Exception?

User exceptions would be free to define their own __eq__.

On Wednesday, February 25, 2015, Guido van Rossum <guido at python.org> wrote:

> Actually, they don't have a __hash__ either -- they just inherit the
> default __hash__ from object, just as they inherit __eq__ from object (and
> both just use the address of the object). I can see many concerns with
> trying to define __eq__ for exceptions in general -- often there are a
> variety of fields, not all of which perhaps should be considered when
> comparing, and then there are user-defined exceptions which may have other
> attributes. I don't know what your use case was, but if, as you say, the
> exceptions you care about already have `args` attributes, maybe you should
> just compare that (and the type).
>
> On Tue, Feb 24, 2015 at 3:43 PM, Ionel Cristian Mărieș
> <contact at ionelmc.ro> wrote:
>
>> Hey everyone,
>>
>> I was writing a test the other day and I wanted to assert that some
>> function raises some exception, but with some exact value. Turns out you
>> cannot compare OSError("foobar") to OSError("foobar") as it doesn't have
>> any __eq__. (it's the same for all exceptions?)
>>
>> I think exceptions are great candidates to have an __eq__ method - they
>> already have a __hash__ (that's computed from the contained values) and
>> they are already containers in a way (they all have the `args` attribute).
>>
>> Comparing exceptions in a test assertion seems like a good usecase for
>> this.
>>
>> Thanks,
>> -- Ionel Cristian Mărieș, blog.ionelmc.ro
>>
>>
>
>
>
> --
> --Guido van Rossum (python.org/~guido)
>


-- 

Thanks,
-- Ionel Cristian Mărieș, blog.ionelmc.ro

From storchaka at gmail.com  Wed Feb 25 08:51:48 2015
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Wed, 25 Feb 2015 09:51:48 +0200
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
Message-ID: <mcjuuk$2pt$1@ger.gmane.org>

What do you think about turning deprecation warnings on by default in
the interactive interpreter?

Deprecation warnings are silent by default because they just annoy an
end user who runs an old Python program with a new Python. The end user
usually is not a programmer and can't do anything about the warnings.
But in the interactive interpreter the user is the author of the
executed code, and can avoid using deprecated functions if warned.


From storchaka at gmail.com  Wed Feb 25 09:12:03 2015
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Wed, 25 Feb 2015 10:12:03 +0200
Subject: [Python-ideas] Reprs of classes and functions
Message-ID: <mck04k$kvd$1@ger.gmane.org>

This idea has already been casually mentioned, but it sank deep into the
threads of the discussion, so I'm raising it again.

Currently reprs of classes and functions look as:

 >>> int
<class 'int'>
 >>> int.from_bytes
<built-in method from_bytes of type object at 0x826cf60>
 >>> open
<built-in function open>
 >>> import collections
 >>> collections.Counter
<class 'collections.Counter'>
 >>> collections.Counter.fromkeys
<bound method Counter.fromkeys of <class 'collections.Counter'>>
 >>> collections.namedtuple
<function namedtuple at 0xb6fc4adc>

What if we changed the default reprs of classes and functions to just the
fully qualified name, __module__ + '.' + __qualname__ (or just
__qualname__ if __module__ is builtins)? This would look neater, and such
reprs would be evaluable.
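A rough sketch of the proposed repr (my own approximation of the rule
described above, not a concrete patch):

```python
def qualified_repr(obj):
    # proposed form: __module__ + '.' + __qualname__,
    # dropping the module prefix for builtins
    module = getattr(obj, '__module__', None)
    qualname = getattr(obj, '__qualname__', None) or obj.__name__
    if module in (None, 'builtins'):
        return qualname
    return '{}.{}'.format(module, qualname)

import collections
print(qualified_repr(int))                     # int
print(qualified_repr(collections.Counter))     # collections.Counter
print(qualified_repr(collections.namedtuple))  # collections.namedtuple
```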


From ben+python at benfinney.id.au  Wed Feb 25 09:39:50 2015
From: ben+python at benfinney.id.au (Ben Finney)
Date: Wed, 25 Feb 2015 19:39:50 +1100
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
References: <mcjuuk$2pt$1@ger.gmane.org>
Message-ID: <85wq365qqx.fsf@benfinney.id.au>

Serhiy Storchaka <storchaka at gmail.com>
writes:

> What do you think about turning deprecation warnings on by default in
> the interactive interpreter?

In its favour: Exposing problems early, when they can easily be fixed,
is good.

To its detriment: Making the interactive interpreter behave differently
by default from the non-interactive interpreter should be resisted; code
which behaves a certain way by default in one should behave the same way
in the other, without extremely compelling justification.

I'm willing to hear more argument for and against.

-- 
 \       "Remember: every member of your 'target audience' also owns a |
  `\   broadcasting station. These 'targets' can shoot back." --Michael |
_o__)               Rathbun to advertisers, news.admin.net-abuse.email |
Ben Finney


From steve at pearwood.info  Wed Feb 25 10:55:59 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Wed, 25 Feb 2015 20:55:59 +1100
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
In-Reply-To: <85wq365qqx.fsf@benfinney.id.au>
References: <mcjuuk$2pt$1@ger.gmane.org> <85wq365qqx.fsf@benfinney.id.au>
Message-ID: <20150225095559.GP7655@ando.pearwood.info>

On Wed, Feb 25, 2015 at 07:39:50PM +1100, Ben Finney wrote:
> Serhiy Storchaka <storchaka at gmail.com>
> writes:
> 
> > What do you think about turning deprecation warnings on by default in
> > the interactive interpreter?
> 
> In its favour: Exposing problems early, when they can easily be fixed,
> is good.
> 
> To its detriment: Making the interactive interpreter behave differently
> by default from the non-interactive interpreter should be resisted; code
> which behaves a certain way by default in one should behave the same way
> in the other, without extremely compelling justification.

The ship has sailed on that one. In the interactive interpreter:

py> 23  # Evaluating bare objects prints them.
23
py> _
23
py>


But non-interactively:

[steve at ando ~]$ python -c "23
> _"
Traceback (most recent call last):
  File "<string>", line 2, in <module>
NameError: name '_' is not defined



I think there are three questions that should be asked:

(1) How does one easily and simply enable warnings in the interactive 
interpreter?

(2) If they are enabled by default, how does one easily and simply 
disable them again? E.g. from the command line itself, or from your 
PYTHONSTARTUP file.

(3) Should they be enabled?

I can't give an opinion on (3) until I know the answers to (1) and (2). I 
tried looking at the warnings module and the sys module, but nothing 
stands out to me.
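For what it's worth, the knobs I'm aware of (someone correct me if there
is a simpler way) are the warnings module at runtime, e.g. from a
PYTHONSTARTUP file:

```python
import warnings

# (1) enable: show each DeprecationWarning once per warning location
warnings.simplefilter('default', DeprecationWarning)

# ... deprecated calls made here would now print a warning ...

# (2) disable them again
warnings.simplefilter('ignore', DeprecationWarning)
```

and, from the shell, `python -W default::DeprecationWarning`, which has the
same effect as (1) for the whole session.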


-- 
Steve

From ben+python at benfinney.id.au  Wed Feb 25 11:01:37 2015
From: ben+python at benfinney.id.au (Ben Finney)
Date: Wed, 25 Feb 2015 21:01:37 +1100
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
References: <mcjuuk$2pt$1@ger.gmane.org> <85wq365qqx.fsf@benfinney.id.au>
 <20150225095559.GP7655@ando.pearwood.info>
Message-ID: <85oaoi5mym.fsf@benfinney.id.au>

Steven D'Aprano <steve at pearwood.info> writes:

> On Wed, Feb 25, 2015 at 07:39:50PM +1100, Ben Finney wrote:
> > To its detriment: Making the interactive interpreter behave differently
> > by default from the non-interactive interpreter should be resisted; code
> > which behaves a certain way by default in one should behave the same way
> > in the other, without extremely compelling justification.
>
> The ship has sailed on that one. [...]

Please note two things:

I don't claim there must be no differences, only that differences
proposed today must come with compelling justification.

The fact that there are already differences doesn't justify diverging
further. Whatever positive justification is offered, "they already
diverge" cannot count for creating further divergence.

-- 
 \         "I have the simplest tastes. I am always satisfied with the |
  `\                                              best." --Oscar Wilde |
_o__)                                                                  |
Ben Finney


From rosuav at gmail.com  Wed Feb 25 11:17:04 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Wed, 25 Feb 2015 21:17:04 +1100
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
In-Reply-To: <85oaoi5mym.fsf@benfinney.id.au>
References: <mcjuuk$2pt$1@ger.gmane.org> <85wq365qqx.fsf@benfinney.id.au>
 <20150225095559.GP7655@ando.pearwood.info>
 <85oaoi5mym.fsf@benfinney.id.au>
Message-ID: <CAPTjJmrhc+eMfq=gUgd8WUoNYbDYBtkPT3C758cUb3+AyC8kjQ@mail.gmail.com>

On Wed, Feb 25, 2015 at 9:01 PM, Ben Finney <ben+python at benfinney.id.au> wrote:
> Steven D'Aprano <steve at pearwood.info> writes:
>
>> On Wed, Feb 25, 2015 at 07:39:50PM +1100, Ben Finney wrote:
>> > To its detriment: Making the interactive interpreter behave differently
>> > by default from the non-interactive interpreter should be resisted; code
>> > which behaves a certain way by default in one should behave the same way
>> > in the other, without extremely compelling justification.
>>
>> The ship has sailed on that one. [...]
>
> Please note two things:
>
> I don't claim there must be no differences, only that differences
> proposed today must come with compelling justification.
>
> The fact that there are already differences doesn't justify diverging
> further. Whatever positive justification is offered, "they already
> diverge" cannot count for creating further divergence.

Agreed; the difference does need justification. Here's justification:
Interactive execution places code and output right next to each other.
The warning would be emitted right at the time when the corresponding
code is entered.

Hmm. There are a few places where code gets pre-evaluated by the back
end (eg to figure out whether a continuation prompt is needed). Should
applicable SyntaxWarnings be emitted during such partial evaluation,
or should they be held over until the whole block can be parsed?

ChrisA

From p.f.moore at gmail.com  Wed Feb 25 11:18:50 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Wed, 25 Feb 2015 10:18:50 +0000
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
In-Reply-To: <20150225095559.GP7655@ando.pearwood.info>
References: <mcjuuk$2pt$1@ger.gmane.org> <85wq365qqx.fsf@benfinney.id.au>
 <20150225095559.GP7655@ando.pearwood.info>
Message-ID: <CACac1F_dZt7evOGi6X2bTOX3yhUDqdh3DR4oBFMFFC8vvCRhXw@mail.gmail.com>

On 25 February 2015 at 09:55, Steven D'Aprano <steve at pearwood.info> wrote:
> I think there are three questions that should be asked:
>
> (1) How does one easy and simply enable warnings in the interactive
> interpeter?
>
> (2) If they are enabled by default, how does one easily and simply
> disable the again? E.g. from the command line itself, or from your
> PYTHONSTARTUP file.
>
> (3) Should they be enabled?
>
> I can't give an opinion on (3) until I know the answer to (1) and (2). I
> tried looking at the warnings module and the sys module but nothing
> stands out to me.

Agreed. Also, what about 3rd party REPLs like IPython? Will they be
affected by the change, and will the enabling/disabling mechanism work
for them? (e.g. I don't think IPython checks PYTHONSTARTUP).

Another question which should probably be answered, how many
deprecation warnings are / would be active when this change is made?
One of the reasons for the change is clearly that deprecation warnings
aren't very easy to see at the moment, but that does mean it's
difficult to assess the impact of this change.

Paul

From p.f.moore at gmail.com  Wed Feb 25 11:21:05 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Wed, 25 Feb 2015 10:21:05 +0000
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
In-Reply-To: <CAPTjJmrhc+eMfq=gUgd8WUoNYbDYBtkPT3C758cUb3+AyC8kjQ@mail.gmail.com>
References: <mcjuuk$2pt$1@ger.gmane.org> <85wq365qqx.fsf@benfinney.id.au>
 <20150225095559.GP7655@ando.pearwood.info>
 <85oaoi5mym.fsf@benfinney.id.au>
 <CAPTjJmrhc+eMfq=gUgd8WUoNYbDYBtkPT3C758cUb3+AyC8kjQ@mail.gmail.com>
Message-ID: <CACac1F-jjynPpr8F8VykTsPgEn8voouj+woqMptD-J2z=sP3dg@mail.gmail.com>

On 25 February 2015 at 10:17, Chris Angelico <rosuav at gmail.com> wrote:
> Agreed; the difference does need justification. Here's justification:
> Interactive execution places code and output right next to each other.
> The warning would be emitted right at the time when the corresponding
> code is entered.

On the other hand, what if "import <3rd party module>" flags a
deprecation warning because of something the module author does?
Certainly I can just ignore it (as it's "only" a warning) but it would
get annoying pretty fast if I couldn't suppress it...

Paul

From steve at pearwood.info  Wed Feb 25 11:25:21 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Wed, 25 Feb 2015 21:25:21 +1100
Subject: [Python-ideas] Reprs of classes and functions
In-Reply-To: <mck04k$kvd$1@ger.gmane.org>
References: <mck04k$kvd$1@ger.gmane.org>
Message-ID: <20150225102521.GQ7655@ando.pearwood.info>

On Wed, Feb 25, 2015 at 10:12:03AM +0200, Serhiy Storchaka wrote:
> This idea is already casually mentioned, but sank deep into the threads 
> of the discussion. Raise it up.
> 
> Currently reprs of classes and functions look as:
> 
> >>> int
> <class 'int'>
> >>> int.from_bytes
> <built-in method from_bytes of type object at 0x826cf60>
> >>> open
> <built-in function open>
> >>> import collections
> >>> collections.Counter
> <class 'collections.Counter'>
> >>> collections.Counter.fromkeys
> <bound method Counter.fromkeys of <class 'collections.Counter'>>
> >>> collections.namedtuple
> <function namedtuple at 0xb6fc4adc>
> 
> What if change default reprs of classes and functions to just full 
> qualified name __module__ + '.' + __qualname__ (or just __qualname__ if 
> __module__ is builtins)? This will look more neatly. And such reprs are 
> evaluable.

Do you mean like this?

repr(int)
=> 'int'

repr(int.from_bytes)
=> 'int.from_bytes'

repr(open)
=> 'open'

repr(collections.Counter)
=> 'collections.Counter'

repr(collections.Counter.fromkeys)
=> 'collections.Counter.fromkeys'

repr(collections.namedtuple)
=> 'collections.namedtuple'


-1 on that idea.

The suggested repr gives no clue as to what kind of object they are. Are 
they functions, methods, classes, some kind of Enum-like constant or 
something special like None? That hurts the usefulness of object reprs 
at the interactive interpreter. And it leads to weirdness like this:


def spam(x):
    if not isinstance(x, int):
        raise TypeError('expected int, got %r' % x)


# current behaviour
spam(int)
=> TypeError: expected int, got <class 'int'>

# proposed behaviour
spam(int)
=> TypeError: expected int, got int


As for them being evaluable, I don't think that's terribly useful. Do 
you have a use-case for that?

I'm not saying that it will never be useful, but I don't think it is 
useful enough to make up for the loss of human-readable information (the 
type of the object). Being able to eval(repr(instance)) is sometimes 
handy, but I don't think eval(repr(type_or_function)) is useful.



-- 
Steve

From rosuav at gmail.com  Wed Feb 25 11:35:57 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Wed, 25 Feb 2015 21:35:57 +1100
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
In-Reply-To: <CACac1F-jjynPpr8F8VykTsPgEn8voouj+woqMptD-J2z=sP3dg@mail.gmail.com>
References: <mcjuuk$2pt$1@ger.gmane.org> <85wq365qqx.fsf@benfinney.id.au>
 <20150225095559.GP7655@ando.pearwood.info>
 <85oaoi5mym.fsf@benfinney.id.au>
 <CAPTjJmrhc+eMfq=gUgd8WUoNYbDYBtkPT3C758cUb3+AyC8kjQ@mail.gmail.com>
 <CACac1F-jjynPpr8F8VykTsPgEn8voouj+woqMptD-J2z=sP3dg@mail.gmail.com>
Message-ID: <CAPTjJmqXbW2bP_LgjD-06P9wu4ENUc_MmCJepURoCc4Spybhrg@mail.gmail.com>

On Wed, Feb 25, 2015 at 9:21 PM, Paul Moore <p.f.moore at gmail.com> wrote:
> On 25 February 2015 at 10:17, Chris Angelico <rosuav at gmail.com> wrote:
>> Agreed; the difference does need justification. Here's justification:
>> Interactive execution places code and output right next to each other.
>> The warning would be emitted right at the time when the corresponding
>> code is entered.
>
> On the other hand, what if "import <3rd party module>" flags a
> deprecation warning because of something the module author does?
> Certainly I can just ignore it (as it's "only" a warning) but it would
> get annoying pretty fast if I couldn't suppress it...

That might be a good excuse to lean on the module's author or submit a
patch. :) But yes, that does present a problem, especially since
warnings will, by default, show only once (if I'm understanding the
docs correctly) - you ignore it in the import, and it's suppressed for
the rest of the session.
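For the record, the filter actions behave like this (a quick sketch of the
"once" vs "always" actions, not the interpreter's exact default filter set):

```python
import warnings

def emit():
    # stacklevel=2 attributes the warning to the caller's line.
    warnings.warn("spam() is deprecated", DeprecationWarning, stacklevel=2)

# "once": at most one report per matching message/category, regardless
# of where in the code the warning is triggered.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("once", DeprecationWarning)
    emit()
    emit()
n_once = len(caught)

# "always": every occurrence is reported.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always", DeprecationWarning)
    emit()
    emit()
n_always = len(caught)

print(n_once, n_always)  # 1 2
```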

ChrisA

From contact at ionelmc.ro  Wed Feb 25 11:39:30 2015
From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=)
Date: Wed, 25 Feb 2015 12:39:30 +0200
Subject: [Python-ideas] Comparable exceptions
In-Reply-To: <CAPTjJmr__0C9ng5pFbLvEv2yXWUEKhBxoQW_MOJZ9aOBqhj=Rw@mail.gmail.com>
References: <CANkHFr95jcgQDDeNxrmyiu9guAQ=iZ2jfe=gardHv0qc=Um9hQ@mail.gmail.com>
 <CAPTjJmr__0C9ng5pFbLvEv2yXWUEKhBxoQW_MOJZ9aOBqhj=Rw@mail.gmail.com>
Message-ID: <CANkHFr_sDFwfJnvDieyMqNGanvb3baiYiakZ4QAmMWWhig5p4w@mail.gmail.com>

Ok, it looks like I got confused about the hash (it's based on the memory
address, and since I wasn't keeping the objects around, it was always the
same address, oops).




Thanks,
-- Ionel Cristian Mărieș, blog.ionelmc.ro

On Wed, Feb 25, 2015 at 1:57 AM, Chris Angelico <rosuav at gmail.com> wrote:

> On Wed, Feb 25, 2015 at 10:43 AM, Ionel Cristian Mărieș
> <contact at ionelmc.ro> wrote:
> > I think exceptions are great candidates to have an __eq__ method - they
> > already have a __hash__ (that's computed from the contained values) and
> they
> > are already containers in a way (they all have the `args` attribute).
>
> Are you sure their hashes are computed from contained values? I just
> tried it, generated two exceptions that should be indistinguishable,
> and they had different hashes:
>
> rosuav at sikorsky:~$ python3
> Python 3.5.0a0 (default:4709290253e3, Jan 20 2015, 21:48:07)
> [GCC 4.7.2] on linux
> Type "help", "copyright", "credits" or "license" for more information.
> >>> try: open("/root/asdf")
> ... except OSError as e: e1=e
> ...
> >>> try: open("/root/asdf")
> ... except OSError as e: e2=e
> ...
> >>> hash(e1)
> -9223363249876220580
> >>> hash(e2)
> -9223363249876220572
> >>> type(e1).__hash__
> <slot wrapper '__hash__' of 'object' objects>
>
> I suspect the last line implies that there's no __hash__ defined
> anywhere in the exception hierarchy, and it's using the default hash
> implementation from object(). In any case, those two exceptions appear
> to have different hashes, and they have the same args and filename.
>
> ChrisA
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150225/2a483e53/attachment.html>

From contact at ionelmc.ro  Wed Feb 25 11:46:57 2015
From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=)
Date: Wed, 25 Feb 2015 12:46:57 +0200
Subject: [Python-ideas] Comparable exceptions
In-Reply-To: <CAP7+vJK5Kttb7M6hivYQHgHLcU9iJGiNQcy==OdqozgDibdJ5g@mail.gmail.com>
References: <CANkHFr95jcgQDDeNxrmyiu9guAQ=iZ2jfe=gardHv0qc=Um9hQ@mail.gmail.com>
 <CAP7+vJK5Kttb7M6hivYQHgHLcU9iJGiNQcy==OdqozgDibdJ5g@mail.gmail.com>
Message-ID: <CANkHFr9Cks+=vSKzFOOHcCZfpoVvDtrOW5w9vHJjWWLXLZ=+wQ@mail.gmail.com>

What if we make the hash from .args and .__dict__? That would cover most
cases.

For me it makes sense because they look like containers. There might be
cases where a user doesn't want __eq__ or __hash__ for exception instances,
so how about we enable those methods just for the builtin types? The user
could then explicitly enable them for custom exceptions (there could be a
special ComparableException base class to inherit from).
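To make that concrete, here is a purely illustrative sketch of what such an
opt-in base class could look like (ComparableException is the hypothetical
name from above, not an existing class, and the exact fields compared are an
open question):

```python
class ComparableException(Exception):
    """Hypothetical opt-in base: equality from type, .args and .__dict__."""

    def __eq__(self, other):
        # Only compare instances of exactly the same exception type.
        if type(other) is type(self):
            return (self.args, self.__dict__) == (other.args, other.__dict__)
        return NotImplemented

    def __hash__(self):
        # __dict__ isn't hashable, so hash only the (usually hashable) args.
        return hash((type(self), self.args))


class MyError(ComparableException):
    pass

print(MyError("foobar") == MyError("foobar"))    # True
print(hash(MyError("x")) == hash(MyError("x")))  # True
```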


Thanks,
-- Ionel Cristian Mărieș, blog.ionelmc.ro

On Wed, Feb 25, 2015 at 1:55 AM, Guido van Rossum <guido at python.org> wrote:

> Actually, they don't have a __hash__ either -- they just inherit the
> default __hash__ from object, just as they inherit __eq__ from object (and
> both just use the address of the object). I can see many concerns with
> trying to define __eq__ for exceptions in general -- often there are a
> variety of fields, not all of which perhaps should be considered when
> comparing, and then there are user-defined exceptions which may have other
> attributes. I don't know what your use case was, but if, as you say, the
> exceptions you care about already have `args` attributes, maybe you should
> just compare that (and the type).
>
> On Tue, Feb 24, 2015 at 3:43 PM, Ionel Cristian Mărieș <contact at ionelmc.ro
> > wrote:
>
>> Hey everyone,
>>
>> I was writing a test the other day and I wanted to assert that some
>> function raises some exception, but with some exact value. Turns out you
>> cannot compare OSError("foobar") to OSError("foobar") as it doesn't have
>> any __eq__. (it's the same for all exceptions?)
>>
>> I think exceptions are great candidates to have an __eq__ method - they
>> already have a __hash__ (that's computed from the contained values) and
>> they are already containers in a way (they all have the `args` attribute).
>>
>> Comparing exceptions in a test assertion seems like a good usecase for
>> this.
>>
>> Thanks,
>> -- Ionel Cristian Mărieș, blog.ionelmc.ro
>>
>> _______________________________________________
>> Python-ideas mailing list
>> Python-ideas at python.org
>> https://mail.python.org/mailman/listinfo/python-ideas
>> Code of Conduct: http://python.org/psf/codeofconduct/
>>
>
>
>
> --
> --Guido van Rossum (python.org/~guido)
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150225/29e65f4a/attachment-0001.html>

From p.f.moore at gmail.com  Wed Feb 25 11:58:38 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Wed, 25 Feb 2015 10:58:38 +0000
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
In-Reply-To: <CAPTjJmqXbW2bP_LgjD-06P9wu4ENUc_MmCJepURoCc4Spybhrg@mail.gmail.com>
References: <mcjuuk$2pt$1@ger.gmane.org> <85wq365qqx.fsf@benfinney.id.au>
 <20150225095559.GP7655@ando.pearwood.info>
 <85oaoi5mym.fsf@benfinney.id.au>
 <CAPTjJmrhc+eMfq=gUgd8WUoNYbDYBtkPT3C758cUb3+AyC8kjQ@mail.gmail.com>
 <CACac1F-jjynPpr8F8VykTsPgEn8voouj+woqMptD-J2z=sP3dg@mail.gmail.com>
 <CAPTjJmqXbW2bP_LgjD-06P9wu4ENUc_MmCJepURoCc4Spybhrg@mail.gmail.com>
Message-ID: <CACac1F8PCkLLmzCrqA9g0jY3KKhX0RTJ+uYwnAJabgr4mAMu+w@mail.gmail.com>

On 25 February 2015 at 10:35, Chris Angelico <rosuav at gmail.com> wrote:
>> On the other hand, what if "import <3rd party module>" flags a
>> deprecation warning because of something the module author does?
>> Certainly I can just ignore it (as it's "only" a warning) but it would
>> get annoying pretty fast if I couldn't suppress it...
>
> That might be a good excuse to lean on the module's author or submit a
> patch. :) But yes, that does present a problem, especially since
> warnings will, by default, show only once (if I'm understanding the
> docs correctly) - you ignore it in the import, and it's suppressed for
> the rest of the session.

The important point here is that the idea behind this proposal is that
enabling deprecation warnings in the REPL is viable because the
warnings will refer to your own code, so you can choose whether or not
to deal with them there and then. Warnings triggered inside 3rd party
modules are exactly the situation where that assumption breaks down.

Deprecation warnings were hidden by default in the first place because
the people who were affected by them were typically not the people who
controlled the code triggering them. I don't think just showing them
in the REPL alters that much.

Maybe the right solution is for testing frameworks (unittest, nose,
py.test) to force deprecation warnings on. Then developers will be
told about deprecation issues when they run their tests.

Paul

From python at 2sn.net  Wed Feb 25 12:02:07 2015
From: python at 2sn.net (Alexander Heger)
Date: Wed, 25 Feb 2015 22:02:07 +1100
Subject: [Python-ideas] Reprs of classes and functions
In-Reply-To: <20150225102521.GQ7655@ando.pearwood.info>
References: <mck04k$kvd$1@ger.gmane.org>
 <20150225102521.GQ7655@ando.pearwood.info>
Message-ID: <CAN3CYHxc=J2K5bmmA7GsT-xv17eMzkgOeK1a9Jem_zgEwa0qyg@mail.gmail.com>

In [2]: import collections

In [3]: collections.Counter.fromkeys
Out[3]: <bound method type.fromkeys of <class 'collections.Counter'>>

In [4]: f = collections.Counter.fromkeys

In [5]: f
Out[5]: <bound method type.fromkeys of <class 'collections.Counter'>>

From storchaka at gmail.com  Wed Feb 25 12:08:21 2015
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Wed, 25 Feb 2015 13:08:21 +0200
Subject: [Python-ideas] Reprs of classes and functions
In-Reply-To: <20150225102521.GQ7655@ando.pearwood.info>
References: <mck04k$kvd$1@ger.gmane.org>
 <20150225102521.GQ7655@ando.pearwood.info>
Message-ID: <mckaf7$152$1@ger.gmane.org>

On 25.02.15 12:25, Steven D'Aprano wrote:
> On Wed, Feb 25, 2015 at 10:12:03AM +0200, Serhiy Storchaka wrote:
>> What if we changed the default reprs of classes and functions to just the
>> fully qualified name __module__ + '.' + __qualname__ (or just __qualname__
>> if __module__ is builtins)? This would look neater. And such reprs would
>> be evaluable.
>
> Do you mean like this?
>
> repr(int)
> => 'int'
>
> repr(int.from_bytes)
> => 'int.from_bytes'

Yes, exactly.

> -1 on that idea.
>
> The suggested repr gives no clue as to what kind of object they are. Are
> they functions, methods, classes, some kind of Enum-like constant or
> something special like None? That hurts the usefulness of object reprs
> at the interactive interpreter. And it leads to weirdness like this:
>
>
> def spam(x):
>      if not isinstance(x, int):
>          raise TypeError('expected int, got %r' % x)

This is an uncommon case and the code is bug-prone; don't write it like
that. spam('x'*1000000) or spam(()) will produce unhelpful error messages.

Usually it is written as:

def spam(x):
    if not isinstance(x, int):
        raise TypeError('expected int, got %s' % type(x).__name__)



From storchaka at gmail.com  Wed Feb 25 12:17:04 2015
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Wed, 25 Feb 2015 13:17:04 +0200
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
In-Reply-To: <20150225095559.GP7655@ando.pearwood.info>
References: <mcjuuk$2pt$1@ger.gmane.org> <85wq365qqx.fsf@benfinney.id.au>
 <20150225095559.GP7655@ando.pearwood.info>
Message-ID: <mckavg$a93$1@ger.gmane.org>

On 25.02.15 11:55, Steven D'Aprano wrote:
> I think there are three questions that should be asked:
>
> (1) How does one easy and simply enable warnings in the interactive
> interpeter?

Same as for other warnings.

     import warnings; warnings.simplefilter('once', DeprecationWarning)

Or 'module', 'always', 'error'.

> (2) If they are enabled by default, how does one easily and simply
> disable the again? E.g. from the command line itself, or from your
> PYTHONSTARTUP file.

Same as for other warnings.

     import warnings; warnings.simplefilter('ignore', DeprecationWarning)



From ncoghlan at gmail.com  Wed Feb 25 13:51:52 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 25 Feb 2015 22:51:52 +1000
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
In-Reply-To: <CACac1F-jjynPpr8F8VykTsPgEn8voouj+woqMptD-J2z=sP3dg@mail.gmail.com>
References: <mcjuuk$2pt$1@ger.gmane.org> <85wq365qqx.fsf@benfinney.id.au>
 <20150225095559.GP7655@ando.pearwood.info>
 <85oaoi5mym.fsf@benfinney.id.au>
 <CAPTjJmrhc+eMfq=gUgd8WUoNYbDYBtkPT3C758cUb3+AyC8kjQ@mail.gmail.com>
 <CACac1F-jjynPpr8F8VykTsPgEn8voouj+woqMptD-J2z=sP3dg@mail.gmail.com>
Message-ID: <CADiSq7f7nFTeE5iDJXWcnLZHAaMbD-cujFobXVKpypazL=kyTg@mail.gmail.com>

On 25 February 2015 at 20:21, Paul Moore <p.f.moore at gmail.com> wrote:
> On 25 February 2015 at 10:17, Chris Angelico <rosuav at gmail.com> wrote:
>> Agreed; the difference does need justification. Here's justification:
>> Interactive execution places code and output right next to each other.
>> The warning would be emitted right at the time when the corresponding
>> code is entered.
>
> On the other hand, what if "import <3rd party module>" flags a
> deprecation warning because of something the module author does?
> Certainly I can just ignore it (as it's "only" a warning) but it would
> get annoying pretty fast if I couldn't suppress it...

Ah, this is exactly the reason we turned them off in the first place
(it just applied at the module level, rather than at the whole
application level).

That puts me firmly in the -1 camp.

Turning them all on is a matter of passing "-Wall" at the command
line, but the command line help could stand to say that explicitly
(currently it only describes the full selective warning control
syntax, without mentioning the "all", "once" and "error" shorthands:
https://docs.python.org/3/using/cmdline.html#cmdoption-W)

Regards,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Wed Feb 25 13:54:57 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 25 Feb 2015 22:54:57 +1000
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
In-Reply-To: <CACac1F8PCkLLmzCrqA9g0jY3KKhX0RTJ+uYwnAJabgr4mAMu+w@mail.gmail.com>
References: <mcjuuk$2pt$1@ger.gmane.org> <85wq365qqx.fsf@benfinney.id.au>
 <20150225095559.GP7655@ando.pearwood.info>
 <85oaoi5mym.fsf@benfinney.id.au>
 <CAPTjJmrhc+eMfq=gUgd8WUoNYbDYBtkPT3C758cUb3+AyC8kjQ@mail.gmail.com>
 <CACac1F-jjynPpr8F8VykTsPgEn8voouj+woqMptD-J2z=sP3dg@mail.gmail.com>
 <CAPTjJmqXbW2bP_LgjD-06P9wu4ENUc_MmCJepURoCc4Spybhrg@mail.gmail.com>
 <CACac1F8PCkLLmzCrqA9g0jY3KKhX0RTJ+uYwnAJabgr4mAMu+w@mail.gmail.com>
Message-ID: <CADiSq7d7sG4Y0iwRqAF-wWvj+TPO9Mh=G8k=Y_z3ayyeUdMDRg@mail.gmail.com>

On 25 February 2015 at 20:58, Paul Moore <p.f.moore at gmail.com> wrote:
> Maybe the right solution is for testing frameworks (unittest, nose,
> py.test) to force deprecation warnings on. Then developers will be
> told about deprecation issues when they run their tests.

This is already done. The environments that still hit issues these
days are thus those afflicted with untested code, like system
administrators' utility scripts, data analysis scripts, and
applications without(!) automated regression tests.

I still think it was the right call to make, but it was not without
its negative consequences :(

Regards,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From contact at ionelmc.ro  Wed Feb 25 14:03:40 2015
From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=)
Date: Wed, 25 Feb 2015 15:03:40 +0200
Subject: [Python-ideas] Comparable exceptions
In-Reply-To: <CAO41-mNUMPjrzDDd5xgvyp_Gh25qzezY3Q2eTwMporPVsntNPA@mail.gmail.com>
References: <CANkHFr95jcgQDDeNxrmyiu9guAQ=iZ2jfe=gardHv0qc=Um9hQ@mail.gmail.com>
 <CAO41-mNUMPjrzDDd5xgvyp_Gh25qzezY3Q2eTwMporPVsntNPA@mail.gmail.com>
Message-ID: <CANkHFr_RiT+08sD96JSFY0o9OJA2qLTV=3EMHYFq2--LpA=7-g@mail.gmail.com>

Yes, it does, however my assertion looks similar to this:

    assert result == [1,2, (3, 4, {"bubu": OSError('foobar')})]

I could implement my own "reliable_compare" function but I'd rather not do
that, as it seems like a workaround.



Thanks,
-- Ionel Cristian Mărieș, blog.ionelmc.ro

On Wed, Feb 25, 2015 at 1:57 AM, Ryan Gonzalez <rymg19 at gmail.com> wrote:

> Doesn't unittest already cover this?
>
> with self.assertRaisesRegexp(OSError, '^foobar$'):
>     do_something()
>
>
> On Tue, Feb 24, 2015 at 5:43 PM, Ionel Cristian Mărieș <contact at ionelmc.ro
> > wrote:
>
>> Hey everyone,
>>
>> I was writing a test the other day and I wanted to assert that some
>> function raises some exception, but with some exact value. Turns out you
>> cannot compare OSError("foobar") to OSError("foobar") as it doesn't have
>> any __eq__. (it's the same for all exceptions?)
>>
>> I think exceptions are great candidates to have an __eq__ method - they
>> already have a __hash__ (that's computed from the contained values) and
>> they are already containers in a way (they all have the `args` attribute).
>>
>> Comparing exceptions in a test assertion seems like a good usecase for
>> this.
>>
>> Thanks,
>> -- Ionel Cristian Mărieș, blog.ionelmc.ro
>>
>> _______________________________________________
>> Python-ideas mailing list
>> Python-ideas at python.org
>> https://mail.python.org/mailman/listinfo/python-ideas
>> Code of Conduct: http://python.org/psf/codeofconduct/
>>
>
>
>
> --
> Ryan
> If anybody ever asks me why I prefer C++ to C, my answer will be simple:
> "It's becauseslejfp23(@#Q*(E*EIdc-SEGFAULT. Wait, I don't think that was
> nul-terminated."
> Personal reality distortion fields are immune to contradictory evidence. -
> srean
> Check out my website: http://kirbyfan64.github.io/
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150225/910c00d2/attachment-0001.html>

From steve at pearwood.info  Wed Feb 25 14:32:46 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Thu, 26 Feb 2015 00:32:46 +1100
Subject: [Python-ideas] Comparable exceptions
In-Reply-To: <CANkHFr_RiT+08sD96JSFY0o9OJA2qLTV=3EMHYFq2--LpA=7-g@mail.gmail.com>
References: <CANkHFr95jcgQDDeNxrmyiu9guAQ=iZ2jfe=gardHv0qc=Um9hQ@mail.gmail.com>
 <CAO41-mNUMPjrzDDd5xgvyp_Gh25qzezY3Q2eTwMporPVsntNPA@mail.gmail.com>
 <CANkHFr_RiT+08sD96JSFY0o9OJA2qLTV=3EMHYFq2--LpA=7-g@mail.gmail.com>
Message-ID: <20150225133245.GS7655@ando.pearwood.info>

On Wed, Feb 25, 2015 at 03:03:40PM +0200, Ionel Cristian Mărieș wrote:
> Yes, it does, however my assertion looks similar to this:
> 
>     assert result == [1,2, (3, 4, {"bubu": OSError('foobar')})]

I'm not keen on using assert like that. I think that using assert for 
testing is close to abuse of the statement, and it makes it impossible 
to test your code running with -O.

But regardless of whether that specific test is good practice or not, I 
think it is reasonable for exception instances to have a more useful 
__eq__ than that provided by inheriting from object. Perhaps add 
something like this to BaseException:

    def __eq__(self, other):
        if isinstance(other, type(self)):
            return self.args == other.args
        return NotImplemented
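For illustration, here is that method dropped onto a toy subclass
(BaseException itself obviously can't be patched from Python code, so this
is only a sketch of the behaviour it would give):

```python
class EqError(Exception):
    # The __eq__ proposed above, on a subclass for demonstration purposes.
    def __eq__(self, other):
        if isinstance(other, type(self)):
            return self.args == other.args
        return NotImplemented

    # Defining __eq__ sets __hash__ to None, so restore hashability.
    __hash__ = Exception.__hash__

print(EqError("foobar") == EqError("foobar"))  # True
print(EqError("foobar") == EqError("other"))   # False
```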




-- 
Steve

From guido at python.org  Wed Feb 25 17:18:26 2015
From: guido at python.org (Guido van Rossum)
Date: Wed, 25 Feb 2015 08:18:26 -0800
Subject: [Python-ideas] Comparable exceptions
In-Reply-To: <20150225133245.GS7655@ando.pearwood.info>
References: <CANkHFr95jcgQDDeNxrmyiu9guAQ=iZ2jfe=gardHv0qc=Um9hQ@mail.gmail.com>
 <CAO41-mNUMPjrzDDd5xgvyp_Gh25qzezY3Q2eTwMporPVsntNPA@mail.gmail.com>
 <CANkHFr_RiT+08sD96JSFY0o9OJA2qLTV=3EMHYFq2--LpA=7-g@mail.gmail.com>
 <20150225133245.GS7655@ando.pearwood.info>
Message-ID: <CAP7+vJ+9O0W7E3DGtthwf2RvbAvksZaZgG5zVrB_zkWTP+c07w@mail.gmail.com>

I am firmly -1 on any changes here. It would have been a nice idea when we
first introduced exceptions. Fixing this now isn't going to make much of a
difference, and the internal structure of exceptions is murky enough that
comparing 'args' alone doesn't cut it.

On Wed, Feb 25, 2015 at 5:32 AM, Steven D'Aprano <steve at pearwood.info>
wrote:

> On Wed, Feb 25, 2015 at 03:03:40PM +0200, Ionel Cristian Mărieș wrote:
> > Yes, it does, however my assertion looks similar to this:
> >
> >     assert result == [1,2, (3, 4, {"bubu": OSError('foobar')})]
>
> I'm not keen on using assert like that. I think that using assert for
> testing is close to abuse of the statement, and it makes it impossible
> to test your code running with -O.
>
> But regardless of whether that specific test is good practice or not, I
> think it is reasonable for exception instances to have a more useful
> __eq__ than that provided by inheriting from object. Perhaps add
> something like this to BaseException:
>
>     def __eq__(self, other):
>         if isinstance(other, type(self)):
>             return self.args == other.args
>         return NotImplemented
>
>
>
>
> --
> Steve
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>



-- 
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150225/4720f864/attachment.html>

From guido at python.org  Wed Feb 25 17:21:25 2015
From: guido at python.org (Guido van Rossum)
Date: Wed, 25 Feb 2015 08:21:25 -0800
Subject: [Python-ideas] Reprs of classes and functions
In-Reply-To: <mck04k$kvd$1@ger.gmane.org>
References: <mck04k$kvd$1@ger.gmane.org>
Message-ID: <CAP7+vJJk2iAzaGdyM2=bJErvctXssz+gWT1EZ_n03VzcMxAYiw@mail.gmail.com>

A variant I've often wanted is to make str() return the "clean" type only
(e.g. 'int') while leaving repr() alone -- this would address most of the
problems Steven brought up.
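A rough sketch of that variant for classes, done with a metaclass purely for
illustration (the real change would of course live in type itself):

```python
class CleanStrMeta(type):
    # str() gives the "clean" qualified name; repr() is left alone.
    def __str__(cls):
        if cls.__module__ == "builtins":
            return cls.__qualname__
        return "{0}.{1}".format(cls.__module__, cls.__qualname__)

class Widget(metaclass=CleanStrMeta):
    pass

print(str(Widget))   # e.g. '__main__.Widget'
print(repr(Widget))  # still the familiar "<class '__main__.Widget'>"
```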

On Wed, Feb 25, 2015 at 12:12 AM, Serhiy Storchaka <storchaka at gmail.com>
wrote:

> This idea is already casually mentioned, but sank deep into the threads of
> the discussion. Raise it up.
>
> Currently reprs of classes and functions look as:
>
> >>> int
> <class 'int'>
> >>> int.from_bytes
> <built-in method from_bytes of type object at 0x826cf60>
> >>> open
> <built-in function open>
> >>> import collections
> >>> collections.Counter
> <class 'collections.Counter'>
> >>> collections.Counter.fromkeys
> <bound method Counter.fromkeys of <class 'collections.Counter'>>
> >>> collections.namedtuple
> <function namedtuple at 0xb6fc4adc>
>
> What if we changed the default reprs of classes and functions to just the
> fully qualified name __module__ + '.' + __qualname__ (or just __qualname__
> if __module__ is builtins)? This would look neater. And such reprs would
> be evaluable.
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>



-- 
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150225/8e45d5d9/attachment.html>

From njs at pobox.com  Wed Feb 25 18:49:58 2015
From: njs at pobox.com (Nathaniel Smith)
Date: Wed, 25 Feb 2015 09:49:58 -0800
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
In-Reply-To: <CACac1F8PCkLLmzCrqA9g0jY3KKhX0RTJ+uYwnAJabgr4mAMu+w@mail.gmail.com>
References: <mcjuuk$2pt$1@ger.gmane.org> <85wq365qqx.fsf@benfinney.id.au>
 <20150225095559.GP7655@ando.pearwood.info>
 <85oaoi5mym.fsf@benfinney.id.au>
 <CAPTjJmrhc+eMfq=gUgd8WUoNYbDYBtkPT3C758cUb3+AyC8kjQ@mail.gmail.com>
 <CACac1F-jjynPpr8F8VykTsPgEn8voouj+woqMptD-J2z=sP3dg@mail.gmail.com>
 <CAPTjJmqXbW2bP_LgjD-06P9wu4ENUc_MmCJepURoCc4Spybhrg@mail.gmail.com>
 <CACac1F8PCkLLmzCrqA9g0jY3KKhX0RTJ+uYwnAJabgr4mAMu+w@mail.gmail.com>
Message-ID: <CAPJVwBkO__-YL37PStB+Tz3s6+as+Pau=voM0P3MmtwPk1ocnQ@mail.gmail.com>

On Feb 25, 2015 2:58 AM, "Paul Moore" <p.f.moore at gmail.com> wrote:
>
> On 25 February 2015 at 10:35, Chris Angelico <rosuav at gmail.com> wrote:
> >> On the other hand, what if "import <3rd party module>" flags a
> >> deprecation warning because of something the module author does?
> >> Certainly I can just ignore it (as it's "only" a warning) but it would
> >> get annoying pretty fast if I couldn't suppress it...
> >
> > That might be a good excuse to lean on the module's author or submit a
> > patch. :) But yes, that does present a problem, especially since
> > warnings will, by default, show only once (if I'm understanding the
> > docs correctly) - you ignore it in the import, and it's suppressed for
> > the rest of the session.
>
> The important point here is that the idea behind this proposal is that
> enabling deprecation warnings in the REPL is viable because the
> warnings will be referring to your code, and therefore you can choose
> whether or not to deal with them there and then. But 3rd party modules
> are a case where that isn't the case.

Warnings have both a type (DeprecationWarning) and a source (line 37 of
module X), with the idea being that the source is supposed to point to the
code that's doing something questionable (which generally isn't the code
that called warnings.warn). It would be totally possible to enable display
of only those DeprecationWarnings that are attributed to stuff typed in at
the REPL.
And it sounds like a great idea to me.

(NB that if we do this we'll run into another annoying problem with
warnings and the REPL, which is that every line typed in the REPL is "line
1", so generally any given warning will only be produced once even if the
offending behavior occurs in multiple different statements. This tends to
really confuse users, since it means you can't use trial and error to
figure out what was wrong with your code: whatever tweak you try first will
always appear to fix the problem. Some more discussion of this issue here:
https://github.com/ipython/ipython/pull/6680)

-n
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150225/ce7f02eb/attachment-0001.html>

From storchaka at gmail.com  Wed Feb 25 20:23:14 2015
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Wed, 25 Feb 2015 21:23:14 +0200
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
In-Reply-To: <CAPJVwBkO__-YL37PStB+Tz3s6+as+Pau=voM0P3MmtwPk1ocnQ@mail.gmail.com>
References: <mcjuuk$2pt$1@ger.gmane.org> <85wq365qqx.fsf@benfinney.id.au>
 <20150225095559.GP7655@ando.pearwood.info> <85oaoi5mym.fsf@benfinney.id.au>
 <CAPTjJmrhc+eMfq=gUgd8WUoNYbDYBtkPT3C758cUb3+AyC8kjQ@mail.gmail.com>
 <CACac1F-jjynPpr8F8VykTsPgEn8voouj+woqMptD-J2z=sP3dg@mail.gmail.com>
 <CAPTjJmqXbW2bP_LgjD-06P9wu4ENUc_MmCJepURoCc4Spybhrg@mail.gmail.com>
 <CACac1F8PCkLLmzCrqA9g0jY3KKhX0RTJ+uYwnAJabgr4mAMu+w@mail.gmail.com>
 <CAPJVwBkO__-YL37PStB+Tz3s6+as+Pau=voM0P3MmtwPk1ocnQ@mail.gmail.com>
Message-ID: <mcl7f3$b46$1@ger.gmane.org>

On 25.02.15 19:49, Nathaniel Smith wrote:
> (NB that if we do this we'll run into another annoying problem with
> warnings and the REPL, which is that every line typed in the REPL is
> "line 1", so generally any given warning will only be produced once even
> if the offending behavior occurs in multiple different statements. This
> tends to really confuse users, since it means you can't use trial and
> error to figure out what was wrong with your code: whatever tweak you
> try first will always appear to fix the problem. Some more discussion of
> this issue here: https://github.com/ipython/ipython/pull/6680)

Every REPL input can be considered a different source.


From njs at pobox.com  Wed Feb 25 20:58:15 2015
From: njs at pobox.com (Nathaniel Smith)
Date: Wed, 25 Feb 2015 11:58:15 -0800
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
In-Reply-To: <mcl7f3$b46$1@ger.gmane.org>
References: <mcjuuk$2pt$1@ger.gmane.org> <85wq365qqx.fsf@benfinney.id.au>
 <20150225095559.GP7655@ando.pearwood.info>
 <85oaoi5mym.fsf@benfinney.id.au>
 <CAPTjJmrhc+eMfq=gUgd8WUoNYbDYBtkPT3C758cUb3+AyC8kjQ@mail.gmail.com>
 <CACac1F-jjynPpr8F8VykTsPgEn8voouj+woqMptD-J2z=sP3dg@mail.gmail.com>
 <CAPTjJmqXbW2bP_LgjD-06P9wu4ENUc_MmCJepURoCc4Spybhrg@mail.gmail.com>
 <CACac1F8PCkLLmzCrqA9g0jY3KKhX0RTJ+uYwnAJabgr4mAMu+w@mail.gmail.com>
 <CAPJVwBkO__-YL37PStB+Tz3s6+as+Pau=voM0P3MmtwPk1ocnQ@mail.gmail.com>
 <mcl7f3$b46$1@ger.gmane.org>
Message-ID: <CAPJVwB=CYex+whnQ8bjeYh4wehbATT3BgajxB9Az236T2po5vg@mail.gmail.com>

On Feb 25, 2015 11:23 AM, "Serhiy Storchaka" <storchaka at gmail.com> wrote:
>
> On 25.02.15 19:49, Nathaniel Smith wrote:
>>
>> (NB that if we do this we'll run into another annoying problem with
>> warnings and the REPL, which is that every line typed in the REPL is
>> "line 1", so generally any given warning will only be produced once even
>> if the offending behavior occurs in multiple different statements. This
>> tends to really confuse users, since it means you can't use trial and
>> error to figure out what was wrong with your code: whatever tweak you
>> try first will always appear to fix the problem. Some more discussion of
>> this issue here: https://github.com/ipython/ipython/pull/6680)
>
>
> Every REPL input can be considered a different source.

Well, yes, and they ought to be considered different sources. My point is
just that at the moment they aren't considered different sources, so if
we're worrying about doing better warning reporting from the REPL then we
might want to think about this too, since it will limit the effectiveness
of other efforts.

-n
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150225/dacd9e6b/attachment.html>

From ethan at stoneleaf.us  Wed Feb 25 22:21:00 2015
From: ethan at stoneleaf.us (Ethan Furman)
Date: Wed, 25 Feb 2015 13:21:00 -0800
Subject: [Python-ideas] Comparable exceptions
In-Reply-To: <CANkHFr9Cks+=vSKzFOOHcCZfpoVvDtrOW5w9vHJjWWLXLZ=+wQ@mail.gmail.com>
References: <CANkHFr95jcgQDDeNxrmyiu9guAQ=iZ2jfe=gardHv0qc=Um9hQ@mail.gmail.com>
 <CAP7+vJK5Kttb7M6hivYQHgHLcU9iJGiNQcy==OdqozgDibdJ5g@mail.gmail.com>
 <CANkHFr9Cks+=vSKzFOOHcCZfpoVvDtrOW5w9vHJjWWLXLZ=+wQ@mail.gmail.com>
Message-ID: <54EE3CBC.3020804@stoneleaf.us>

On 02/25/2015 02:46 AM, Ionel Cristian Mărieș wrote:

> For me it makes sense because they look like containers.

Exceptions are not containers.  Their purpose is not to hold args.  :)

--
~Ethan~

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: OpenPGP digital signature
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150225/05456a43/attachment.sig>

From ethan at stoneleaf.us  Wed Feb 25 22:28:11 2015
From: ethan at stoneleaf.us (Ethan Furman)
Date: Wed, 25 Feb 2015 13:28:11 -0800
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
In-Reply-To: <mcjuuk$2pt$1@ger.gmane.org>
References: <mcjuuk$2pt$1@ger.gmane.org>
Message-ID: <54EE3E6B.7030506@stoneleaf.us>

On 02/24/2015 11:51 PM, Serhiy Storchaka wrote:

> What do you think about turning deprecation warnings on by default in the interactive interpreter?

-1

One can use the command-line switch if one wants to.

--
~Ethan~

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: OpenPGP digital signature
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150225/4875b516/attachment.sig>

From ncoghlan at gmail.com  Wed Feb 25 22:35:17 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 26 Feb 2015 07:35:17 +1000
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
In-Reply-To: <CAPJVwB=CYex+whnQ8bjeYh4wehbATT3BgajxB9Az236T2po5vg@mail.gmail.com>
References: <mcjuuk$2pt$1@ger.gmane.org> <85wq365qqx.fsf@benfinney.id.au>
 <20150225095559.GP7655@ando.pearwood.info>
 <85oaoi5mym.fsf@benfinney.id.au>
 <CAPTjJmrhc+eMfq=gUgd8WUoNYbDYBtkPT3C758cUb3+AyC8kjQ@mail.gmail.com>
 <CACac1F-jjynPpr8F8VykTsPgEn8voouj+woqMptD-J2z=sP3dg@mail.gmail.com>
 <CAPTjJmqXbW2bP_LgjD-06P9wu4ENUc_MmCJepURoCc4Spybhrg@mail.gmail.com>
 <CACac1F8PCkLLmzCrqA9g0jY3KKhX0RTJ+uYwnAJabgr4mAMu+w@mail.gmail.com>
 <CAPJVwBkO__-YL37PStB+Tz3s6+as+Pau=voM0P3MmtwPk1ocnQ@mail.gmail.com>
 <mcl7f3$b46$1@ger.gmane.org>
 <CAPJVwB=CYex+whnQ8bjeYh4wehbATT3BgajxB9Az236T2po5vg@mail.gmail.com>
Message-ID: <CADiSq7cBm5d2V-1EDex1pDZGHRAg_67pi5srXyN36NOTQCacjA@mail.gmail.com>

On 26 February 2015 at 05:58, Nathaniel Smith <njs at pobox.com> wrote:
> On Feb 25, 2015 11:23 AM, "Serhiy Storchaka" <storchaka at gmail.com> wrote:
>>
>> On 25.02.15 19:49, Nathaniel Smith wrote:
>>>
>>> (NB that if we do this we'll run into another annoying problem with
>>> warnings and the REPL, which is that every line typed in the REPL is
>>> "line 1", so generally any given warning will only be produced once even
>>> if the offending behavior occurs in multiple different statements. This
>>> tends to really confuse users, since it means you can't use trial and
>>> error to figure out what was wrong with your code: whatever tweak you
>>> try first will always appear to fix the problem. Some more discussion of
>>> this issue here: https://github.com/ipython/ipython/pull/6680)
>>
>>
>> Every REPL input can be considered a different source.
>
> Well, yes, and they ought to be considered different sources. My point is
> just that at the moment they aren't considered different sources, so if
> we're worrying about doing better warning reporting from the REPL then we
> might want to think about this too, since it will limit the effectiveness of
> other efforts.

Setting the action to "always" and filtering on the main module should
do the trick:

>>> import warnings
>>> warnings.filterwarnings("always", module="__main__")
>>> warnings.warn("Deprecated", DeprecationWarning)
__main__:1: DeprecationWarning: Deprecated

Injecting that into the list of default filters when starting the
interactive interpreter also isn't too difficult, so it would be
pretty straightforward to get the interpreter to do this implicitly:

$ python3 '-Walways:::__main__'
Python 3.4.1 (default, Nov  3 2014, 14:38:10)
[GCC 4.9.1 20140930 (Red Hat 4.9.1-11)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import warnings
>>> warnings.warn("Deprecated", DeprecationWarning)
__main__:1: DeprecationWarning: Deprecated

It may also be worthwhile introducing a "-Wmain" shorthand for
"-Wdefault:::__main__", as I could see that being quite useful to
system administrators, data analysts, et al, that want to know about
deprecation warnings in their own custom scripts, but don't really
care about deprecations in support libraries.

Regards,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ethan at stoneleaf.us  Wed Feb 25 22:39:32 2015
From: ethan at stoneleaf.us (Ethan Furman)
Date: Wed, 25 Feb 2015 13:39:32 -0800
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
In-Reply-To: <CADiSq7cBm5d2V-1EDex1pDZGHRAg_67pi5srXyN36NOTQCacjA@mail.gmail.com>
References: <mcjuuk$2pt$1@ger.gmane.org> <85wq365qqx.fsf@benfinney.id.au>
 <20150225095559.GP7655@ando.pearwood.info> <85oaoi5mym.fsf@benfinney.id.au>
 <CAPTjJmrhc+eMfq=gUgd8WUoNYbDYBtkPT3C758cUb3+AyC8kjQ@mail.gmail.com>
 <CACac1F-jjynPpr8F8VykTsPgEn8voouj+woqMptD-J2z=sP3dg@mail.gmail.com>
 <CAPTjJmqXbW2bP_LgjD-06P9wu4ENUc_MmCJepURoCc4Spybhrg@mail.gmail.com>
 <CACac1F8PCkLLmzCrqA9g0jY3KKhX0RTJ+uYwnAJabgr4mAMu+w@mail.gmail.com>
 <CAPJVwBkO__-YL37PStB+Tz3s6+as+Pau=voM0P3MmtwPk1ocnQ@mail.gmail.com>
 <mcl7f3$b46$1@ger.gmane.org>
 <CAPJVwB=CYex+whnQ8bjeYh4wehbATT3BgajxB9Az236T2po5vg@mail.gmail.com>
 <CADiSq7cBm5d2V-1EDex1pDZGHRAg_67pi5srXyN36NOTQCacjA@mail.gmail.com>
Message-ID: <54EE4114.5090505@stoneleaf.us>

On 02/25/2015 01:35 PM, Nick Coghlan wrote:

> It may also be worthwhile introducing a "-Wmain" shorthand for
> "-Wdefault:::__main__",

+1

--
~Ethan~


From tjreedy at udel.edu  Wed Feb 25 22:45:41 2015
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed, 25 Feb 2015 16:45:41 -0500
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
In-Reply-To: <85wq365qqx.fsf@benfinney.id.au>
References: <mcjuuk$2pt$1@ger.gmane.org> <85wq365qqx.fsf@benfinney.id.au>
Message-ID: <mclfqc$pkd$1@ger.gmane.org>

On 2/25/2015 3:39 AM, Ben Finney wrote:
> Serhiy Storchaka <storchaka at gmail.com>
> writes:
>
>> What do you think about turning deprecation warnings on by default in
>> the interactive interpreter?

I don't really get the point.  It seems to me that the idea is to have
warnings optionally on during development, always off during production.
Most development, especially of 'permanent' code, does not take place
at the interactive prompt.

> In its favour: Exposing problems early, when they can easily be fixed,
> is good.

I think the idea of turning warnings on and off would better work as an 
Idle option for the user process that executes user code.  The option 
would take effect the next time the user process is restarted, which is 
every time code is run from the editor, or when selected by the user.

> To its detriment: Making the interactive interpreter behave differently
> by default from the non-interactive interpreter should be resisted; code
> which behaves a certain way by default in one should behave the same way
> in the other, without extremely compelling justification.

An Idle option would apply to both code run from the editor and code 
entered at the Shell prompt.  Other IDEs could do the same.

I presume Warnings are issued on stderr.  If so, Idle would give them 
the stderr color, making them less confused with the differently colored 
program stdout output.

-- 
Terry Jan Reedy


From solipsis at pitrou.net  Wed Feb 25 23:03:41 2015
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 25 Feb 2015 23:03:41 +0100
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
References: <mcjuuk$2pt$1@ger.gmane.org> <85wq365qqx.fsf@benfinney.id.au>
 <20150225095559.GP7655@ando.pearwood.info>
 <85oaoi5mym.fsf@benfinney.id.au>
 <CAPTjJmrhc+eMfq=gUgd8WUoNYbDYBtkPT3C758cUb3+AyC8kjQ@mail.gmail.com>
 <CACac1F-jjynPpr8F8VykTsPgEn8voouj+woqMptD-J2z=sP3dg@mail.gmail.com>
 <CAPTjJmqXbW2bP_LgjD-06P9wu4ENUc_MmCJepURoCc4Spybhrg@mail.gmail.com>
 <CACac1F8PCkLLmzCrqA9g0jY3KKhX0RTJ+uYwnAJabgr4mAMu+w@mail.gmail.com>
 <CAPJVwBkO__-YL37PStB+Tz3s6+as+Pau=voM0P3MmtwPk1ocnQ@mail.gmail.com>
 <mcl7f3$b46$1@ger.gmane.org>
 <CAPJVwB=CYex+whnQ8bjeYh4wehbATT3BgajxB9Az236T2po5vg@mail.gmail.com>
 <CADiSq7cBm5d2V-1EDex1pDZGHRAg_67pi5srXyN36NOTQCacjA@mail.gmail.com>
Message-ID: <20150225230341.507ccc43@fsol>

On Thu, 26 Feb 2015 07:35:17 +1000
Nick Coghlan <ncoghlan at gmail.com> wrote:
> 
> It may also be worthwhile introducing a "-Wmain" shorthand for
> "-Wdefault:::__main__", as I could see that being quite useful to
> system administrators, data analysts, et al, that want to know about
> deprecation warnings in their own custom scripts, but don't really
> care about deprecations in support libraries.

-1. This is making the "main script" special while code factored out in
another module won't benefit.
Furthermore there hasn't been any demand for this.

Regards

Antoine.



From njs at pobox.com  Wed Feb 25 23:04:43 2015
From: njs at pobox.com (Nathaniel Smith)
Date: Wed, 25 Feb 2015 14:04:43 -0800
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
In-Reply-To: <mclfqc$pkd$1@ger.gmane.org>
References: <mcjuuk$2pt$1@ger.gmane.org> <85wq365qqx.fsf@benfinney.id.au>
 <mclfqc$pkd$1@ger.gmane.org>
Message-ID: <CAPJVwBmcdnmN2VR_7aQktWXq=S-gTcaQyj+ZJyrMaB6WgUVf+w@mail.gmail.com>

On Wed, Feb 25, 2015 at 1:45 PM, Terry Reedy <tjreedy at udel.edu> wrote:
> On 2/25/2015 3:39 AM, Ben Finney wrote:
>>
>> Serhiy Storchaka <storchaka at gmail.com>
>> writes:
>>
>>> What do you think about turning deprecation warnings on by default in
>>> the interactive interpreter?
>
> I don't really get the point.  It seems to me that the idea is to have
> warnings optionally on during development, always off during production.
> Most development, especially of 'permanent' code, does not take place at the
> interactive prompt.

This reasoning seems backwards to me. It doesn't matter whether there
is also code being developed non-interactively; the question is, for
that code which *is* developed interactively, should warnings be
visible by default.

BTW, the people around me seem to be developing possibly a majority of
their code these days by starting out iteratively refining stuff using
the ipython notebook as a REPL.

-- 
Nathaniel J. Smith -- http://vorpus.org

From luciano at ramalho.org  Wed Feb 25 23:15:12 2015
From: luciano at ramalho.org (Luciano Ramalho)
Date: Wed, 25 Feb 2015 19:15:12 -0300
Subject: [Python-ideas] Comparable exceptions
In-Reply-To: <54EE3CBC.3020804@stoneleaf.us>
References: <CANkHFr95jcgQDDeNxrmyiu9guAQ=iZ2jfe=gardHv0qc=Um9hQ@mail.gmail.com>
 <CAP7+vJK5Kttb7M6hivYQHgHLcU9iJGiNQcy==OdqozgDibdJ5g@mail.gmail.com>
 <CANkHFr9Cks+=vSKzFOOHcCZfpoVvDtrOW5w9vHJjWWLXLZ=+wQ@mail.gmail.com>
 <54EE3CBC.3020804@stoneleaf.us>
Message-ID: <CALxg4FUy9s3cRaFLD84zHU_-JEdf4EGtCuTgPqZ7it2cu4m8VA@mail.gmail.com>

On Wed, Feb 25, 2015 at 6:21 PM, Ethan Furman <ethan at stoneleaf.us> wrote:
> Exceptions are not containers.  Their purpose is not to hold args.  :)

Except when it's StopIteration and you're using it to smuggle the
return value of a generator :-)

--
Luciano


>
> --
> ~Ethan~
>
>



-- 
Luciano Ramalho
Twitter: @ramalhoorg

Professor em: http://python.pro.br
Twitter: @pythonprobr

From ncoghlan at gmail.com  Wed Feb 25 23:34:31 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 26 Feb 2015 08:34:31 +1000
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
In-Reply-To: <20150225230341.507ccc43@fsol>
References: <mcjuuk$2pt$1@ger.gmane.org> <85wq365qqx.fsf@benfinney.id.au>
 <20150225095559.GP7655@ando.pearwood.info>
 <85oaoi5mym.fsf@benfinney.id.au>
 <CAPTjJmrhc+eMfq=gUgd8WUoNYbDYBtkPT3C758cUb3+AyC8kjQ@mail.gmail.com>
 <CACac1F-jjynPpr8F8VykTsPgEn8voouj+woqMptD-J2z=sP3dg@mail.gmail.com>
 <CAPTjJmqXbW2bP_LgjD-06P9wu4ENUc_MmCJepURoCc4Spybhrg@mail.gmail.com>
 <CACac1F8PCkLLmzCrqA9g0jY3KKhX0RTJ+uYwnAJabgr4mAMu+w@mail.gmail.com>
 <CAPJVwBkO__-YL37PStB+Tz3s6+as+Pau=voM0P3MmtwPk1ocnQ@mail.gmail.com>
 <mcl7f3$b46$1@ger.gmane.org>
 <CAPJVwB=CYex+whnQ8bjeYh4wehbATT3BgajxB9Az236T2po5vg@mail.gmail.com>
 <CADiSq7cBm5d2V-1EDex1pDZGHRAg_67pi5srXyN36NOTQCacjA@mail.gmail.com>
 <20150225230341.507ccc43@fsol>
Message-ID: <CADiSq7dKY2gz2rqzCarWqy=8s=y0zRN5XRqoL0Zdm3j=9FM-2g@mail.gmail.com>

On 26 February 2015 at 08:03, Antoine Pitrou <solipsis at pitrou.net> wrote:
> On Thu, 26 Feb 2015 07:35:17 +1000
> Nick Coghlan <ncoghlan at gmail.com> wrote:
>>
>> It may also be worthwhile introducing a "-Wmain" shorthand for
>> "-Wdefault:::__main__", as I could see that being quite useful to
>> system administrators, data analysts, et al, that want to know about
>> deprecation warnings in their own custom scripts, but don't really
>> care about deprecations in support libraries.
>
> -1. This is making the "main script" special while code factored out in
> another module won't benefit.
> Furthermore there hasn't been any demand for this.

Yes there has, at the Linux distro level. I just haven't escalated it
upstream because I didn't think there was anything useful we could do
about it given the previous decision to default to silencing
deprecation warnings by default outside testing frameworks.

The key problem with the status quo from my perspective is that
sysadmins currently don't get any warning that their scripts might
have problems when Fedora next upgrades to a new version of Python,
because they're not getting the deprecation warnings by default.

Given a -Wmain option upstream, I'd likely try to make the case for
that as the default behaviour of the distro system Python, but without
"enable all warnings for __main__" at least being a readily available
behaviour in the reference interpreter, it's arguably too much of a
divergence (since we'd be defining the new behaviour ourselves, rather
than selecting a different upstream behaviour as the default).

Regards,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From victor.stinner at gmail.com  Wed Feb 25 23:39:46 2015
From: victor.stinner at gmail.com (Victor Stinner)
Date: Wed, 25 Feb 2015 23:39:46 +0100
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
In-Reply-To: <mcjuuk$2pt$1@ger.gmane.org>
References: <mcjuuk$2pt$1@ger.gmane.org>
Message-ID: <CAMpsgwYc0_xWW3k_55FG4=v4OdkRWqwcMU-B1PEz0AHDjK3aPQ@mail.gmail.com>

Hi,

Python has more and more options to emit warnings or enable more checks.
Maybe we need one global "strict" option to simply enable all checks and
warnings at once?

For example, there are deprecation warnings, but also resource warnings.
Both should be enabled in the strict mode. Another example is the -bb
command line option (bytes warning). It should also be enabled. One more
example: -tt (warnings about inconsistent indentation).

It can be a command line option, an environment variable and a function to
enable it.

Perl has "use strict;".

For the interactive interpreter, the strict mode could be enabled by
default, but only if it's easy to disable.
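
A minimal sketch of what such a helper might look like. Note that `enable_strict` is a hypothetical name invented here, not an existing API, and the exact set of warning categories is only illustrative (BytesWarning, for instance, is still only raised at all when the interpreter runs with -b or -bb):

```python
import warnings

def enable_strict():
    """Hypothetical "use strict" helper: un-silence the checks that are
    off by default.  Name and category set are made up for illustration."""
    for category in (DeprecationWarning, PendingDeprecationWarning,
                     ResourceWarning, BytesWarning):
        # "always" shows every occurrence, not just the first one per
        # location, which suits interactive experimentation.
        warnings.simplefilter("always", category)
```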

Victor

Le mercredi 25 février 2015, Serhiy Storchaka <storchaka at gmail.com> a
écrit :

> What do you think about turning deprecation warnings on by default in the
> interactive interpreter?
>
> Deprecation warnings are silent by default because they just annoy an end
> user who runs an old Python program with a new Python. The end user
> usually is not a programmer and can't do anything about the warnings. But
> in the interactive interpreter the user is the author of the executed code
> and can avoid using deprecated functions if warned.
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150225/14dffc17/attachment-0001.html>

From solipsis at pitrou.net  Wed Feb 25 23:44:39 2015
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 25 Feb 2015 23:44:39 +0100
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
References: <mcjuuk$2pt$1@ger.gmane.org> <85wq365qqx.fsf@benfinney.id.au>
 <20150225095559.GP7655@ando.pearwood.info>
 <85oaoi5mym.fsf@benfinney.id.au>
 <CAPTjJmrhc+eMfq=gUgd8WUoNYbDYBtkPT3C758cUb3+AyC8kjQ@mail.gmail.com>
 <CACac1F-jjynPpr8F8VykTsPgEn8voouj+woqMptD-J2z=sP3dg@mail.gmail.com>
 <CAPTjJmqXbW2bP_LgjD-06P9wu4ENUc_MmCJepURoCc4Spybhrg@mail.gmail.com>
 <CACac1F8PCkLLmzCrqA9g0jY3KKhX0RTJ+uYwnAJabgr4mAMu+w@mail.gmail.com>
 <CAPJVwBkO__-YL37PStB+Tz3s6+as+Pau=voM0P3MmtwPk1ocnQ@mail.gmail.com>
 <mcl7f3$b46$1@ger.gmane.org>
 <CAPJVwB=CYex+whnQ8bjeYh4wehbATT3BgajxB9Az236T2po5vg@mail.gmail.com>
 <CADiSq7cBm5d2V-1EDex1pDZGHRAg_67pi5srXyN36NOTQCacjA@mail.gmail.com>
 <20150225230341.507ccc43@fsol>
 <CADiSq7dKY2gz2rqzCarWqy=8s=y0zRN5XRqoL0Zdm3j=9FM-2g@mail.gmail.com>
Message-ID: <20150225234439.1fc60169@fsol>

On Thu, 26 Feb 2015 08:34:31 +1000
Nick Coghlan <ncoghlan at gmail.com> wrote:
> 
> The key problem with the status quo from my perspective is that
> sysadmins currently don't get any warning that their scripts might
> have problems when Fedora next upgrades to a new version of Python,
> because they're not getting the deprecation warnings by default.

The focus on sysadmins sounds a bit gratuitous here. Furthermore I
don't see why sysadmins would only use "main scripts" and not factor
out their code. Besides, we don't want to add an option which would
*discourage* factoring out code.

Another reason why this may be counter-productive: if you run your
script through a stub, then the warnings disappear again.

And this is adding a new variant to the '-W' option, which is
already complicated to learn.

The real issue is that we silenced deprecation warnings at all. There's
nothing specific about "main scripts": every module suffers.

Regards

Antoine.



From rosuav at gmail.com  Wed Feb 25 23:49:38 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Thu, 26 Feb 2015 09:49:38 +1100
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
In-Reply-To: <20150225234439.1fc60169@fsol>
References: <mcjuuk$2pt$1@ger.gmane.org> <85wq365qqx.fsf@benfinney.id.au>
 <20150225095559.GP7655@ando.pearwood.info>
 <85oaoi5mym.fsf@benfinney.id.au>
 <CAPTjJmrhc+eMfq=gUgd8WUoNYbDYBtkPT3C758cUb3+AyC8kjQ@mail.gmail.com>
 <CACac1F-jjynPpr8F8VykTsPgEn8voouj+woqMptD-J2z=sP3dg@mail.gmail.com>
 <CAPTjJmqXbW2bP_LgjD-06P9wu4ENUc_MmCJepURoCc4Spybhrg@mail.gmail.com>
 <CACac1F8PCkLLmzCrqA9g0jY3KKhX0RTJ+uYwnAJabgr4mAMu+w@mail.gmail.com>
 <CAPJVwBkO__-YL37PStB+Tz3s6+as+Pau=voM0P3MmtwPk1ocnQ@mail.gmail.com>
 <mcl7f3$b46$1@ger.gmane.org>
 <CAPJVwB=CYex+whnQ8bjeYh4wehbATT3BgajxB9Az236T2po5vg@mail.gmail.com>
 <CADiSq7cBm5d2V-1EDex1pDZGHRAg_67pi5srXyN36NOTQCacjA@mail.gmail.com>
 <20150225230341.507ccc43@fsol>
 <CADiSq7dKY2gz2rqzCarWqy=8s=y0zRN5XRqoL0Zdm3j=9FM-2g@mail.gmail.com>
 <20150225234439.1fc60169@fsol>
Message-ID: <CAPTjJmr6oMiy9niQUR85HeURc_kROtf-5RGKnsKP7i4BL7DUOw@mail.gmail.com>

On Thu, Feb 26, 2015 at 9:44 AM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> The focus on sysadmins sounds a bit gratuitous here. Furthermore I
> don't see why sysadmins would only use "main scripts" and not factor
> out their code. Besides, we don't want to add an option which would
> *discourage* factoring out code.
>
> Another reason why this may be counter-productive: if you run your
> script through a stub, then the warnings disappear again.

If drawing the line at __main__ is a problem, is it possible to enable
warnings for "everything that came from the first entry in sys.path"?
Or, if it helps, disable warnings for a specific set of paths? That
way, you could factor out code into a local module (imported from the
main module's directory) and still have warnings, but not have them
for the stuff that you consider to be library modules.

ChrisA

From ethan at stoneleaf.us  Wed Feb 25 23:49:42 2015
From: ethan at stoneleaf.us (Ethan Furman)
Date: Wed, 25 Feb 2015 14:49:42 -0800
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
In-Reply-To: <20150225234439.1fc60169@fsol>
References: <mcjuuk$2pt$1@ger.gmane.org> <85wq365qqx.fsf@benfinney.id.au>
 <20150225095559.GP7655@ando.pearwood.info> <85oaoi5mym.fsf@benfinney.id.au>
 <CAPTjJmrhc+eMfq=gUgd8WUoNYbDYBtkPT3C758cUb3+AyC8kjQ@mail.gmail.com>
 <CACac1F-jjynPpr8F8VykTsPgEn8voouj+woqMptD-J2z=sP3dg@mail.gmail.com>
 <CAPTjJmqXbW2bP_LgjD-06P9wu4ENUc_MmCJepURoCc4Spybhrg@mail.gmail.com>
 <CACac1F8PCkLLmzCrqA9g0jY3KKhX0RTJ+uYwnAJabgr4mAMu+w@mail.gmail.com>
 <CAPJVwBkO__-YL37PStB+Tz3s6+as+Pau=voM0P3MmtwPk1ocnQ@mail.gmail.com>
 <mcl7f3$b46$1@ger.gmane.org>
 <CAPJVwB=CYex+whnQ8bjeYh4wehbATT3BgajxB9Az236T2po5vg@mail.gmail.com>
 <CADiSq7cBm5d2V-1EDex1pDZGHRAg_67pi5srXyN36NOTQCacjA@mail.gmail.com>
 <20150225230341.507ccc43@fsol>
 <CADiSq7dKY2gz2rqzCarWqy=8s=y0zRN5XRqoL0Zdm3j=9FM-2g@mail.gmail.com>
 <20150225234439.1fc60169@fsol>
Message-ID: <54EE5186.30909@stoneleaf.us>

On 02/25/2015 02:44 PM, Antoine Pitrou wrote:

> There's nothing specific about "main scripts": every module suffers.

There may be nothing special about "main scripts", but there is something special about __main__: it's where
hand-entered code runs when using the REPL.

--
~Ethan~


From solipsis at pitrou.net  Wed Feb 25 23:57:25 2015
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 25 Feb 2015 23:57:25 +0100
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
References: <mcjuuk$2pt$1@ger.gmane.org> <85wq365qqx.fsf@benfinney.id.au>
 <20150225095559.GP7655@ando.pearwood.info>
 <85oaoi5mym.fsf@benfinney.id.au>
 <CAPTjJmrhc+eMfq=gUgd8WUoNYbDYBtkPT3C758cUb3+AyC8kjQ@mail.gmail.com>
 <CACac1F-jjynPpr8F8VykTsPgEn8voouj+woqMptD-J2z=sP3dg@mail.gmail.com>
 <CAPTjJmqXbW2bP_LgjD-06P9wu4ENUc_MmCJepURoCc4Spybhrg@mail.gmail.com>
 <CACac1F8PCkLLmzCrqA9g0jY3KKhX0RTJ+uYwnAJabgr4mAMu+w@mail.gmail.com>
 <CAPJVwBkO__-YL37PStB+Tz3s6+as+Pau=voM0P3MmtwPk1ocnQ@mail.gmail.com>
 <mcl7f3$b46$1@ger.gmane.org>
 <CAPJVwB=CYex+whnQ8bjeYh4wehbATT3BgajxB9Az236T2po5vg@mail.gmail.com>
 <CADiSq7cBm5d2V-1EDex1pDZGHRAg_67pi5srXyN36NOTQCacjA@mail.gmail.com>
 <20150225230341.507ccc43@fsol>
 <CADiSq7dKY2gz2rqzCarWqy=8s=y0zRN5XRqoL0Zdm3j=9FM-2g@mail.gmail.com>
 <20150225234439.1fc60169@fsol> <54EE5186.30909@stoneleaf.us>
Message-ID: <20150225235725.6c19e667@fsol>

On Wed, 25 Feb 2015 14:49:42 -0800
Ethan Furman <ethan at stoneleaf.us> wrote:

> On 02/25/2015 02:44 PM, Antoine Pitrou wrote:
> 
> > There's nothing specific about "main scripts": every module suffers.
> 
> There may be nothing special about "main scripts", but there is something special about __main__: it's where
> hand-entered code runs when using the REPL.

Then let's make the REPL enable those warnings rather than entertain
new command-line arguments.

Regards

Antoine.



From random832 at fastmail.us  Thu Feb 26 00:27:24 2015
From: random832 at fastmail.us (random832 at fastmail.us)
Date: Wed, 25 Feb 2015 18:27:24 -0500
Subject: [Python-ideas] Show deprecation warnings in the interactive
 interpreter
In-Reply-To: <mclfqc$pkd$1@ger.gmane.org>
References: <mcjuuk$2pt$1@ger.gmane.org> <85wq365qqx.fsf@benfinney.id.au>
 <mclfqc$pkd$1@ger.gmane.org>
Message-ID: <1424906844.272258.232513577.6EA7BD96@webmail.messagingengine.com>

On Wed, Feb 25, 2015, at 16:45, Terry Reedy wrote:
> On 2/25/2015 3:39 AM, Ben Finney wrote:
> > Serhiy Storchaka <storchaka at gmail.com>
> > writes:
> >
> >> What you are think about turning deprecation warnings on by default in
> >> the interactive interpreter?
> 
> I don't really get the point.  It seems to me that the idea is to have 
> warnings optionally on during development, always off during production. 
>   Most development, especially of 'permanent' code, does not take place 
> at the interactive prompt.

I think the idea is that people experiment at the interactive prompt and
warnings will help them learn good habits.

From ezio.melotti at gmail.com  Thu Feb 26 00:56:33 2015
From: ezio.melotti at gmail.com (Ezio Melotti)
Date: Thu, 26 Feb 2015 01:56:33 +0200
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
In-Reply-To: <mcjuuk$2pt$1@ger.gmane.org>
References: <mcjuuk$2pt$1@ger.gmane.org>
Message-ID: <CACBhJdGWwX8aaNdtaiacSiDMH+eSdksnf6JO084oYfoaqqnqeA@mail.gmail.com>

Hi,

On Wed, Feb 25, 2015 at 9:51 AM, Serhiy Storchaka <storchaka at gmail.com> wrote:
> What do you think about turning deprecation warnings on by default in the
> interactive interpreter?
>

+1

If I'm using the interpreter to try some piece of code I'm not
familiar with, I would rather get a warning immediately and proceed to
find the right solution than spend time figuring out how some
deprecated function works, use it in my application, write the tests
for it, and only then find out that I've been wasting my time with
something that was deprecated (or worse, find out once I update Python
and the function is no longer there).

Using flags (both existing and new proposed ones) to enable warnings
has three issues:
  1) you have to know that deprecation warnings are silenced by
default and that there is an option to enable them;
  2) you have to remember the flag/syntax to enable them;
  3) you have to do it every time you start the interpreter;
Realistically I don't see people starting to use "python3 --some-flag"
regularly just to see all the warnings.

Regarding deprecation warnings raised while importing 3rd party code:
  1) most of the time there's very little code executed at import time
(mostly function/class definitions), so the chance of getting
warnings should be quite low;
  2) if there are warnings (e.g. because the module I'm importing
imports some deprecated module), I can either report them upstream or
simply dismiss them.  Seeing that there are warnings might also make
me reconsider the use of the module (since it might not work on more
recent Pythons, or might even have security implications).

Best Regards,
Ezio Melotti


> Deprecation warnings are silent by default because they just annoy an end
> user who runs an old Python program with a new Python. The end user
> usually is not a programmer and can't do anything about the warnings. But
> in the interactive interpreter the user is the author of the executed code
> and can avoid using deprecated functions if warned.
>

From tjreedy at udel.edu  Thu Feb 26 01:20:10 2015
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed, 25 Feb 2015 19:20:10 -0500
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
In-Reply-To: <54EE3E6B.7030506@stoneleaf.us>
References: <mcjuuk$2pt$1@ger.gmane.org> <54EE3E6B.7030506@stoneleaf.us>
Message-ID: <mclos2$3tp$1@ger.gmane.org>

On 2/25/2015 4:28 PM, Ethan Furman wrote:
> On 02/24/2015 11:51 PM, Serhiy Storchaka wrote:
>
>> What do you think about turning deprecation warnings on by default
>> in the interactive interpreter?
>
> -1
>
> One can use the commandline switch if one wants to.

A change limited to the interactive interpreter would be meaningless for 
everyone who works at a simulated prompt in a gui program running on 
non-interactive python, or running user code under supervision in 
non-interactive python.  This includes both modes of Idle (default and 
-n).  I presume this includes nearly any other alternative to the 
console interpreter.

On Windows, the actual Command Prompt interactive interpreter is a 
wretched environment.  I only use it to start Idle on repository builds 
and to check whether Idle does the same thing as the console 
interpreter.  I suspect that many users on Windows never see the console 
interpreter, just as they many never see a command prompt command line.

-- 
Terry Jan Reedy


From guido at python.org  Thu Feb 26 01:45:26 2015
From: guido at python.org (Guido van Rossum)
Date: Wed, 25 Feb 2015 16:45:26 -0800
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
In-Reply-To: <mclos2$3tp$1@ger.gmane.org>
References: <mcjuuk$2pt$1@ger.gmane.org> <54EE3E6B.7030506@stoneleaf.us>
 <mclos2$3tp$1@ger.gmane.org>
Message-ID: <CAP7+vJKod0meMT6h3L2zW34etR5J15eQLS8AmzgPUVzcK4C7fA@mail.gmail.com>

Well, you can turn warnings on or off using a command line flag or by making
calls into the warnings module, so everyone who emulates a REPL can copy
this behavior. I'm not worried about that.

On Wed, Feb 25, 2015 at 4:20 PM, Terry Reedy <tjreedy at udel.edu> wrote:

> On 2/25/2015 4:28 PM, Ethan Furman wrote:
>
>> On 02/24/2015 11:51 PM, Serhiy Storchaka wrote:
>>
>>> What do you think about turning deprecation warnings on by default
>>> in the interactive interpreter?
>>
>> -1
>>
>> One can use the commandline switch if one wants to.
>>
>
> A change limited to the interactive interpreter would be meaningless for
> everyone who works at a simulated prompt in a gui program running on
> non-interactive python, or running user code under supervision in
> non-interactive python.  This includes both modes of Idle (default and
> -n).  I presume this includes nearly any other alternative to the console
> interpreter.
>
> On Windows, the actual Command Prompt interactive interpreter is a
> wretched environment.  I only use it to start Idle on repository builds and
> to check whether Idle does the same thing as the console interpreter.  I
> suspect that many users on Windows never see the console interpreter, just
> as they may never see a command prompt command line.
>
> --
> Terry Jan Reedy
>
>
>



-- 
--Guido van Rossum (python.org/~guido)

From tjreedy at udel.edu  Thu Feb 26 01:47:12 2015
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed, 25 Feb 2015 19:47:12 -0500
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
In-Reply-To: <1424906844.272258.232513577.6EA7BD96@webmail.messagingengine.com>
References: <mcjuuk$2pt$1@ger.gmane.org> <85wq365qqx.fsf@benfinney.id.au>
 <mclfqc$pkd$1@ger.gmane.org>
 <1424906844.272258.232513577.6EA7BD96@webmail.messagingengine.com>
Message-ID: <mclqen$vej$1@ger.gmane.org>

On 2/25/2015 6:27 PM, random832 at fastmail.us wrote:
> On Wed, Feb 25, 2015, at 16:45, Terry Reedy wrote:
>> On 2/25/2015 3:39 AM, Ben Finney wrote:
>>> Serhiy Storchaka <storchaka at gmail.com>
>>> writes:
>>>
>>>> What do you think about turning deprecation warnings on by default in
>>>> the interactive interpreter?
>>
>> I don't really get the point.  It seems to me that the idea is to have
>> warnings optionally on during development, always off during production.
>>    Most development, especially of 'permanent' code, does not take place
>> at the interactive prompt.
>
> I think the idea is that people experiment at the interactive prompt and
> warnings will help them learn good habits.

People often experiment with files in editors when writing statements 
longer than a line or two or when writing multiple statements.  Leaving 
that aside, people also experiment at *simulated* interactive prompts in 
guis, such as Idle, that run on Python in normal batch mode.  In either 
case, changing interactive mode python will have no effect and no benefit.

-- 
Terry Jan Reedy


From tjreedy at udel.edu  Thu Feb 26 02:23:54 2015
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed, 25 Feb 2015 20:23:54 -0500
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
In-Reply-To: <CAPJVwBmcdnmN2VR_7aQktWXq=S-gTcaQyj+ZJyrMaB6WgUVf+w@mail.gmail.com>
References: <mcjuuk$2pt$1@ger.gmane.org> <85wq365qqx.fsf@benfinney.id.au>
 <mclfqc$pkd$1@ger.gmane.org>
 <CAPJVwBmcdnmN2VR_7aQktWXq=S-gTcaQyj+ZJyrMaB6WgUVf+w@mail.gmail.com>
Message-ID: <mclsjh$1al$1@ger.gmane.org>

On 2/25/2015 5:04 PM, Nathaniel Smith wrote:
> On Wed, Feb 25, 2015 at 1:45 PM, Terry Reedy <tjreedy at udel.edu> wrote:
>> On 2/25/2015 3:39 AM, Ben Finney wrote:
>>>
>>> Serhiy Storchaka <storchaka at gmail.com>
>>> writes:
>>>
>>>> What do you think about turning deprecation warnings on by default in
>>>> the interactive interpreter?
>>
>> I don't really get the point.  It seems to me that the idea is to have
>> warnings optionally on during development, always off during production.
>> Most development, especially of 'permanent' code, does not take place at the
>> interactive prompt.
>
> This reasoning seems backwards to me. It doesn't matter whether there
> is also code being developed non-interactively; the question is, for
> that code which *is* developed interactively, should warnings be
> visible by default.

I develop interactively by editing a bit, running, ......, with 
occasional use of the Idle prompt after running.

> BTW, the people around me seem to be developing possibly a majority of
> their code these days by starting out iteratively refining stuff using
> the ipython notebook as a REPL.

Your comment actually supports the more important part of my post, which 
you snipped.  An ipython notebook, like Idle, is not the interactive 
interpreter.  It would have to be changed independently, as would Idle; 
neither has to wait for an 'official' change.  Idle would not be changed 
by a change only to the official interactive interpreter, and I suspect 
the same is true of ipython, though I am not familiar enough with its 
implementation to know how it actually runs code entered by users.

-- 
Terry Jan Reedy


From abarnert at yahoo.com  Thu Feb 26 02:41:36 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Wed, 25 Feb 2015 17:41:36 -0800
Subject: [Python-ideas] Reprs of classes and functions
In-Reply-To: <CAP7+vJJk2iAzaGdyM2=bJErvctXssz+gWT1EZ_n03VzcMxAYiw@mail.gmail.com>
References: <mck04k$kvd$1@ger.gmane.org>
 <CAP7+vJJk2iAzaGdyM2=bJErvctXssz+gWT1EZ_n03VzcMxAYiw@mail.gmail.com>
Message-ID: <A483ECDB-0A24-491A-9C3F-0EA047FBFAC8@yahoo.com>

On Feb 25, 2015, at 8:21, Guido van Rossum <guido at python.org> wrote:

> A variant I've often wanted is to make str() return the "clean" type only (e.g. 'int') while leaving repr() alone -- this would address most problems Steven brought up.

On further thought, I'm not sure backward compatibility can be dismissed. Surely code shouldn't care about the str of a callable, right? But yeah, sometimes it might care, if it's, say, a reflective framework for bridging or debugging or similar.

I once wrote a library that exposed Python objects over AppleEvents. I needed to know all the bound methods of the object. For pure-Python objects, this is easy, but for builtin/extension objects, the bound methods constructed from both slot-wrapper and method-descriptor are of type method-wrapper. (The bug turned up when an obscure corner of our code that tried lxml, cElementTree, then ElementTree was run on a 2.x system without lxml installed, something we'd left out of the test matrix for that module.)

When you print out the method, it's obvious what type it is. But how do you deal with it programmatically? Despite what the docs say, method-descriptor does not return true for either inspect.isbuiltin or inspect.isroutine. In fact, nothing in inspect is much help, and the type isn't in the types module.

As it turned out, callable(x) and (hasattr(x, 'im_self') or hasattr(x, '__self__')) was good enough for my particular use case, at least in all Python interpreters/versions we supported (it even works in PyPy with CPyExt extension types and an early version of NumPyPy). But what would a good general solution be?

I think dynamically constructing the type from int().__add__ (i.e., type(int().__add__)) and then verifying that the bound methods of all extension types, in all interpreters and versions I care about, pass an isinstance check against it unless they look like pure-Python bound methods might be the best way. And even if I did resort to using the string representation, I'd probably use the repr rather than the str, and I'd probably look at type(x) rather than x itself. But still, I can imagine someone deciding "whatever I do is going to be hacky, because there's no non-hacky way to get this information, so I might as well go with the simplest hack that's worked on every CPython and PyPy version from 2.2 to today" and using "... or str(x).startswith('<method-wrapper')".
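For concreteness, here is a minimal sketch of the "good enough" heuristic described above (the helper name is mine, not from any library):

```python
# Sketch of the heuristic described above: a callable that carries
# __self__ (or im_self, on Python 2) is treated as a bound method.
# The helper name is illustrative, not from any library.
def looks_like_bound_method(x):
    return callable(x) and (hasattr(x, '__self__') or hasattr(x, 'im_self'))

class Spam:
    def cook(self):
        pass

# In CPython, a bound slot-wrapper reports its type as 'method-wrapper':
print(type((1).__add__).__name__)

print(looks_like_bound_method(Spam().cook))  # True: pure-Python bound method
print(looks_like_bound_method((1).__add__))  # True: method-wrapper
print(looks_like_bound_method(Spam.cook))    # False: plain function in 3.x
print(looks_like_bound_method(len))          # True: builtins carry __self__ too
```

Note that the last case shows why this is only "good enough": in Python 3, module-level builtin functions also carry __self__, so the heuristic over-matches them.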

> 
> On Wed, Feb 25, 2015 at 12:12 AM, Serhiy Storchaka <storchaka at gmail.com> wrote:
>> This idea has already been mentioned in passing, but sank deep into the threads of the discussion, so I am raising it again.
>> 
>> Currently the reprs of classes and functions look like this:
>> 
>> >>> int
>> <class 'int'>
>> >>> int.from_bytes
>> <built-in method from_bytes of type object at 0x826cf60>
>> >>> open
>> <built-in function open>
>> >>> import collections
>> >>> collections.Counter
>> <class 'collections.Counter'>
>> >>> collections.Counter.fromkeys
>> <bound method Counter.fromkeys of <class 'collections.Counter'>>
>> >>> collections.namedtuple
>> <function namedtuple at 0xb6fc4adc>
>> 
>> What if we change the default reprs of classes and functions to just the fully qualified name __module__ + '.' + __qualname__ (or just __qualname__ if __module__ is builtins)? This would look neater. And such reprs are evaluable.
>> 
>> _______________________________________________
>> Python-ideas mailing list
>> Python-ideas at python.org
>> https://mail.python.org/mailman/listinfo/python-ideas
>> Code of Conduct: http://python.org/psf/codeofconduct/
> 
> 
> 
> -- 
> --Guido van Rossum (python.org/~guido)
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150225/fb43896d/attachment-0001.html>

From tjreedy at udel.edu  Thu Feb 26 02:58:24 2015
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed, 25 Feb 2015 20:58:24 -0500
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
In-Reply-To: <CADiSq7cBm5d2V-1EDex1pDZGHRAg_67pi5srXyN36NOTQCacjA@mail.gmail.com>
References: <mcjuuk$2pt$1@ger.gmane.org> <85wq365qqx.fsf@benfinney.id.au>
 <20150225095559.GP7655@ando.pearwood.info> <85oaoi5mym.fsf@benfinney.id.au>
 <CAPTjJmrhc+eMfq=gUgd8WUoNYbDYBtkPT3C758cUb3+AyC8kjQ@mail.gmail.com>
 <CACac1F-jjynPpr8F8VykTsPgEn8voouj+woqMptD-J2z=sP3dg@mail.gmail.com>
 <CAPTjJmqXbW2bP_LgjD-06P9wu4ENUc_MmCJepURoCc4Spybhrg@mail.gmail.com>
 <CACac1F8PCkLLmzCrqA9g0jY3KKhX0RTJ+uYwnAJabgr4mAMu+w@mail.gmail.com>
 <CAPJVwBkO__-YL37PStB+Tz3s6+as+Pau=voM0P3MmtwPk1ocnQ@mail.gmail.com>
 <mcl7f3$b46$1@ger.gmane.org>
 <CAPJVwB=CYex+whnQ8bjeYh4wehbATT3BgajxB9Az236T2po5vg@mail.gmail.com>
 <CADiSq7cBm5d2V-1EDex1pDZGHRAg_67pi5srXyN36NOTQCacjA@mail.gmail.com>
Message-ID: <mcluk7$17t$1@ger.gmane.org>

On 2/25/2015 4:35 PM, Nick Coghlan wrote:

> Setting the action to "always" and filtering on the main module should
> do the trick:
>
> >>> import warnings
> >>> warnings.filterwarnings("always", module="__main__")

Changing interactive-mode python is irrelevant to gui environments 
running on batch-mode python, except for adding another requirement for 
simulating interactive python and its command prompt.  Idle starts 
idlelib/PyShell.py in one process (the Idle process) and (in default 
mode) idlelib/run.py in a second process (the User process) and makes a 
bidirectional connection.  The Idle process compiles user code with 
compile(codestring, ...) and sends the code object to the User process, 
which executes it with exec(usercode, namespace).
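A toy, single-process sketch of that round trip (the real code in idlelib/PyShell.py and idlelib/run.py is far more involved and crosses a process boundary; the filename and namespace here are illustrative):

```python
# Toy sketch of the compile/exec round trip described above.
user_namespace = {'__name__': '__main__'}

def run_user_code(codestring):
    # "Idle process" side: compile the user's code to a code object...
    code = compile(codestring, '<pyshell#0>', 'exec')
    # ..."User process" side: execute that code object in the user namespace.
    exec(code, user_namespace)

run_user_code("x = 6 * 7\nprint(x)")   # prints 42
```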

Both processes already import warnings.  If I add the line above to 
run.py, I get

 >>> warnings.warn("Deprecated", DeprecationWarning)

Warning (from warnings module):
   File "__main__", line 1
DeprecationWarning: Deprecated

(on stderr) instead of nothing.  I presume this should be sufficient for 
runtime warnings, but are there any raised during compilation (like 
SyntaxWarnings, or Py3kWarnings in 2.7)?  I see nothing in the compile() 
doc about turning warnings on or off.

-- 
Terry Jan Reedy


From abarnert at yahoo.com  Thu Feb 26 03:03:02 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Wed, 25 Feb 2015 18:03:02 -0800
Subject: [Python-ideas] Reprs of classes and functions
In-Reply-To: <CAP7+vJJk2iAzaGdyM2=bJErvctXssz+gWT1EZ_n03VzcMxAYiw@mail.gmail.com>
References: <mck04k$kvd$1@ger.gmane.org>
 <CAP7+vJJk2iAzaGdyM2=bJErvctXssz+gWT1EZ_n03VzcMxAYiw@mail.gmail.com>
Message-ID: <9BCC50DE-81D7-469B-B3AA-5E9EEC99F531@yahoo.com>

My first response seems to have bounced, so:

Besides backward compatibility, there's still a minor downside to that: the current behavior helps catch some novice errors, like storing a function instead of calling the function and storing its return value. Printing `<function spam.cook>` instead of the expected `spam, spam, eggs, beans, spam` is a lot more obvious than seeing `cook` alone. But as I said, it's pretty minor. It's still not _that_ hard to understand from `cook`, especially if you ask someone for help, and this is the kind of mistake that novices only make once (and usually ask for help with). The benefit to everyone surely outweighs the small cost to novices.

Sent from a random iPhone

On Feb 25, 2015, at 8:21, Guido van Rossum <guido at python.org> wrote:

> A variant I've often wanted is to make str() return the "clean" type only (e.g. 'int') while leaving repr() alone -- this would address most problems Steven brought up.
> 
> On Wed, Feb 25, 2015 at 12:12 AM, Serhiy Storchaka <storchaka at gmail.com> wrote:
>> This idea has already been mentioned in passing, but sank deep into the threads of the discussion, so I am raising it again.
>> 
>> Currently the reprs of classes and functions look like this:
>> 
>> >>> int
>> <class 'int'>
>> >>> int.from_bytes
>> <built-in method from_bytes of type object at 0x826cf60>
>> >>> open
>> <built-in function open>
>> >>> import collections
>> >>> collections.Counter
>> <class 'collections.Counter'>
>> >>> collections.Counter.fromkeys
>> <bound method Counter.fromkeys of <class 'collections.Counter'>>
>> >>> collections.namedtuple
>> <function namedtuple at 0xb6fc4adc>
>> 
>> What if we change the default reprs of classes and functions to just the fully qualified name __module__ + '.' + __qualname__ (or just __qualname__ if __module__ is builtins)? This would look neater. And such reprs are evaluable.
>> 
>> _______________________________________________
>> Python-ideas mailing list
>> Python-ideas at python.org
>> https://mail.python.org/mailman/listinfo/python-ideas
>> Code of Conduct: http://python.org/psf/codeofconduct/
> 
> 
> 
> -- 
> --Guido van Rossum (python.org/~guido)
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150225/3fc9fccd/attachment.html>

From tjreedy at udel.edu  Thu Feb 26 03:38:04 2015
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed, 25 Feb 2015 21:38:04 -0500
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
In-Reply-To: <CAP7+vJKod0meMT6h3L2zW34etR5J15eQLS8AmzgPUVzcK4C7fA@mail.gmail.com>
References: <mcjuuk$2pt$1@ger.gmane.org> <54EE3E6B.7030506@stoneleaf.us>
 <mclos2$3tp$1@ger.gmane.org>
 <CAP7+vJKod0meMT6h3L2zW34etR5J15eQLS8AmzgPUVzcK4C7fA@mail.gmail.com>
Message-ID: <mcm0uj$2rm$1@ger.gmane.org>

On 2/25/2015 7:45 PM, Guido van Rossum wrote:
> Well, you can turn warnings on or off using a command line flag or by
> making calls into the warnings module, so everyone who emulates a REPL
> can copy this behavior. I'm not worried about that.

See my later response to Nick.  After adding the one line to my 
installed 3.4.3 idlelib/run.py, the runtime DeprecationWarning raised by
   from ftplib import Netrc; Netrc()
is displayed, whether the above is entered in the Shell or run from the 
editor. I would not mind adding the one line to run.py (for all current 
versions) and being done with it.  Idle is mostly intended for development.

Making the warning display optional would be far, far harder.

As I said previously, I do not know for sure if there are compile-time 
DeprecationWarnings to try to deal with.

-- 
Terry Jan Reedy


From tjreedy at udel.edu  Thu Feb 26 04:21:40 2015
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed, 25 Feb 2015 22:21:40 -0500
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
In-Reply-To: <mcm0uj$2rm$1@ger.gmane.org>
References: <mcjuuk$2pt$1@ger.gmane.org> <54EE3E6B.7030506@stoneleaf.us>
 <mclos2$3tp$1@ger.gmane.org>
 <CAP7+vJKod0meMT6h3L2zW34etR5J15eQLS8AmzgPUVzcK4C7fA@mail.gmail.com>
 <mcm0uj$2rm$1@ger.gmane.org>
Message-ID: <mcm3gb$b32$1@ger.gmane.org>

On 2/25/2015 9:38 PM, Terry Reedy wrote:
> On 2/25/2015 7:45 PM, Guido van Rossum wrote:
>> Well, you can turn warnings on or off using a command line flag or by
>> making calls into the warnings module, so everyone who emulates a REPL
>> can copy this behavior. I'm not worried about that.

I just discovered that Idle uses sys.warnoptions when starting a 
subprocess, so users can get all warnings now if they want, or pass 
Nick's suggested '-Wdefault:::__main__' for main code only.

> See my later response to Nick.  After adding the one line to my
> installed 3.4.3 idlelib/run.py, the runtime DeprecationWarning raised by
>    from ftplib import Netrc; Netrc()
> is displayed, whether the above is entered in the Shell or run from the
> editor. I would not mind adding the one line to run.py (for all current
> versions) and being done with it.  Idle is mostly intended for development.
>
> Making the warning display optional would be far, far harder.

Make that just 'somewhat harder'.  Manipulating sys.warnoptions would 
affect the next restart.  It would still be perhaps 30 new lines of 
code, and some decisions on details.

-- 
Terry Jan Reedy


From guido at python.org  Thu Feb 26 04:28:00 2015
From: guido at python.org (Guido van Rossum)
Date: Wed, 25 Feb 2015 19:28:00 -0800
Subject: [Python-ideas] Reprs of classes and functions
In-Reply-To: <A483ECDB-0A24-491A-9C3F-0EA047FBFAC8@yahoo.com>
References: <mck04k$kvd$1@ger.gmane.org>
 <CAP7+vJJk2iAzaGdyM2=bJErvctXssz+gWT1EZ_n03VzcMxAYiw@mail.gmail.com>
 <A483ECDB-0A24-491A-9C3F-0EA047FBFAC8@yahoo.com>
Message-ID: <CAP7+vJ+=mep_=2AqYGffaOA0b97NYfisY_qdN2rQTCHg88FU6A@mail.gmail.com>

I didn't read all that, but my use case is that for typing.py (PEP 484) I
want "nice" reprs for higher-order classes, and if str(int) returned just
'int' it would be much simpler to e.g. make the str() or repr() of
Optional[int] return "Optional[int]". I've sort of solved this already in
https://github.com/ambv/typehinting/blob/master/prototyping/typing.py, but
it would still be nice if I didn't have to work so hard at it.
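The kind of unwrapping being described can be sketched like this (helper names are mine, not the actual typing.py code):

```python
# Illustrative sketch of the work a parameterized-type repr has to do
# today.  If str(int) simply returned 'int', optional_repr below could
# be just 'Optional[%s]' % cls.
def _type_name(cls):
    name = getattr(cls, '__qualname__', None) or repr(cls)
    if getattr(cls, '__module__', 'builtins') != 'builtins':
        name = cls.__module__ + '.' + name
    return name

def optional_repr(cls):
    return 'Optional[%s]' % _type_name(cls)

print(optional_repr(int))   # Optional[int]
print(optional_repr(str))   # Optional[str]
```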

On Wed, Feb 25, 2015 at 5:41 PM, Andrew Barnert <
abarnert at yahoo.com.dmarc.invalid> wrote:

> On Feb 25, 2015, at 8:21, Guido van Rossum <guido at python.org> wrote:
>
> A variant I've often wanted is to make str() return the "clean" type only
> (e.g. 'int') while leaving repr() alone -- this would address most problems
> Steven brought up.
>
>
> On further thought, I'm not sure backward compatibility can be dismissed.
> Surely code shouldn't care about the str of a callable, right? But yeah,
> sometimes it might care, if it's, say, a reflective framework for bridging
> or debugging or similar.
>
> I once wrote a library that exposed Python objects over AppleEvents. I
> needed to know all the bound methods of the object. For pure-Python
> objects, this is easy, but for builtin/extension objects, the bound methods
> constructed from both slot-wrapper and method-descriptor are of type
> method-wrapper. (The bug turned up when an obscure corner of our code that
> tried lxml, cElementTree, then ElementTree was run on a 2.x system without
> lxml installed, something we'd left out of the test matrix for that module.)
>
> When you print out the method, it's obvious what type it is. But how do
> you deal with it programmatically? Despite what the docs say,
> method-descriptor does not return true for either isbuiltin or isroutine.
> In fact, nothing in inspect is much help. And the type isn't in
> types. Printing out the method, it's dead obvious what it is.
>
> As it turned out, callable(x) and (hasattr(x, 'im_self') or hasattr(x,
> '__self__')) was good enough for my particular use case, at least in all
> Python interpreters/versions we supported (it even works in PyPy with
> CPyExt extension types and an early version of NumPyPy). But what would a
> good general solution be?
>
> I think dynamically constructing the type from int().__add__ and then
> verifying that all extension types in all interpreters and versions I care
> about pass isinstance(that) unless they look like pure-Python bound methods
> might be the best way. And even if I did resort to using the string
> representation, I'd probably use the repr rather than the str, and I'd
> probably look at type(x) rather than x itself. But still, I can imagine
> someone deciding "whatever I do is going to be hacky, because there's no
> non-hacky way to get this information, so I might as well go with the
> simplest hack that's worked on every CPython and PyPy version from 2.2 to
> today" and use "... or str(x).startswith('<method-wrapper')".
>
>
> On Wed, Feb 25, 2015 at 12:12 AM, Serhiy Storchaka <storchaka at gmail.com>
> wrote:
>
>> This idea has already been mentioned in passing, but sank deep into the
>> threads of the discussion, so I am raising it again.
>>
>> Currently the reprs of classes and functions look like this:
>>
>> >>> int
>> <class 'int'>
>> >>> int.from_bytes
>> <built-in method from_bytes of type object at 0x826cf60>
>> >>> open
>> <built-in function open>
>> >>> import collections
>> >>> collections.Counter
>> <class 'collections.Counter'>
>> >>> collections.Counter.fromkeys
>> <bound method Counter.fromkeys of <class 'collections.Counter'>>
>> >>> collections.namedtuple
>> <function namedtuple at 0xb6fc4adc>
>>
>> What if we change the default reprs of classes and functions to just the
>> fully qualified name __module__ + '.' + __qualname__ (or just __qualname__
>> if __module__ is builtins)? This would look neater. And such reprs are
>> evaluable.
>>
>> _______________________________________________
>> Python-ideas mailing list
>> Python-ideas at python.org
>> https://mail.python.org/mailman/listinfo/python-ideas
>> Code of Conduct: http://python.org/psf/codeofconduct/
>>
>
>
>
> --
> --Guido van Rossum (python.org/~guido)
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>
>


-- 
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150225/c54275c6/attachment-0001.html>

From g.brandl at gmx.net  Thu Feb 26 06:54:57 2015
From: g.brandl at gmx.net (Georg Brandl)
Date: Thu, 26 Feb 2015 06:54:57 +0100
Subject: [Python-ideas] Reprs of classes and functions
In-Reply-To: <mck04k$kvd$1@ger.gmane.org>
References: <mck04k$kvd$1@ger.gmane.org>
Message-ID: <mcmcfh$afm$1@ger.gmane.org>

On 02/25/2015 09:12 AM, Serhiy Storchaka wrote:
> This idea has already been mentioned in passing, but sank deep into the 
> threads of the discussion, so I am raising it again.
> 
> Currently the reprs of classes and functions look like this:
> 
>  >>> int
> <class 'int'>
>  >>> int.from_bytes
> <built-in method from_bytes of type object at 0x826cf60>
>  >>> open
> <built-in function open>
>  >>> import collections
>  >>> collections.Counter
> <class 'collections.Counter'>
>  >>> collections.Counter.fromkeys
> <bound method Counter.fromkeys of <class 'collections.Counter'>>
>  >>> collections.namedtuple
> <function namedtuple at 0xb6fc4adc>
> 
> What if we change the default reprs of classes and functions to just the 
> fully qualified name __module__ + '.' + __qualname__ (or just __qualname__ 
> if __module__ is builtins)? This would look neater.

Like others, I would be -1 because of the increased potential for confusion
(many people have not grasped the significance of repr() and its quoting
of strings), and the loss of information (class/function/...).

It would be nice to have the repr()s consistent, i.e. the name always or
never in quotes, and the memory address always or never included.  But that's
probably not worth breaking compatibility.

Changing str(class) is different, since there's enough potential confusion
in this representation anyway.

> And such reprs are evaluable.

Only sometimes, unless you change the repr to
"__import__('" + __module__ + "', None, None, ['*'])." + __qualname__

:)
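(Smiley aside, that version really does evaluate, provided the module name is quoted inside the generated string; a quick sketch, with an illustrative function name:)

```python
import collections

def evaluable_repr(obj):
    # Build the fully evaluable repr above; the module name has to be
    # quoted inside the generated string for eval() to accept it.
    return ("__import__('" + obj.__module__ + "', None, None, ['*'])."
            + obj.__qualname__)

r = evaluable_repr(collections.Counter)
print(r)                               # __import__('collections', None, None, ['*']).Counter
print(eval(r) is collections.Counter)  # True
```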

Georg


From brett at python.org  Thu Feb 26 16:35:00 2015
From: brett at python.org (Brett Cannon)
Date: Thu, 26 Feb 2015 15:35:00 +0000
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
References: <mcjuuk$2pt$1@ger.gmane.org>
 <CAMpsgwYc0_xWW3k_55FG4=v4OdkRWqwcMU-B1PEz0AHDjK3aPQ@mail.gmail.com>
Message-ID: <CAP1=2W5JfuKJi2Oi3F5f=ieZCJe=B8FQczYhi9iWKkvAFjv_eA@mail.gmail.com>

On Wed, Feb 25, 2015 at 5:40 PM Victor Stinner <victor.stinner at gmail.com>
wrote:

> Hi,
>
> Python has more and more options to emit warnings or enable more checks.
> Maybe we need one global "strict" option to simply enable all checks and
> warnings at once?
>
> For example, there are deprecation warnings, but also resource warnings.
> Both should be enabled in the strict mode. Another example is the -bb
> command line option (bytes warning). It should also be enabled. Another
> example: -tt (warnings about inconsistent indentation).
>
> It can be a command line option, an environment variable and a function to
> enable it.
>
> Perl has "use strict;".
>
> For the interactive interpreter, the strict code may be enabled by
> default, but only if it's easy to disable it.
>

Regardless of these REPL discussions, having something like a --strict flag
to switch on every single warning that you can switch on to its highest
level would be handy to have.

-Brett


>
> Victor
>
> Le mercredi 25 février 2015, Serhiy Storchaka <storchaka at gmail.com> a
> écrit :
>
> What do you think about turning deprecation warnings on by default in the
>> interactive interpreter?
>>
>> Deprecation warnings are silent by default because they just annoy an end
>> user who runs an old Python program with a new Python. The end user usually
>> is not a programmer and can't do anything about the warnings. But in the
>> interactive interpreter the user is the author of the executed code and can
>> avoid using deprecated functions if warned.
>>
>> _______________________________________________
>> Python-ideas mailing list
>> Python-ideas at python.org
>> https://mail.python.org/mailman/listinfo/python-ideas
>> Code of Conduct: http://python.org/psf/codeofconduct/
>>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150226/24970e96/attachment.html>

From solipsis at pitrou.net  Thu Feb 26 16:48:17 2015
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 26 Feb 2015 16:48:17 +0100
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
References: <mcjuuk$2pt$1@ger.gmane.org>
 <CAMpsgwYc0_xWW3k_55FG4=v4OdkRWqwcMU-B1PEz0AHDjK3aPQ@mail.gmail.com>
 <CAP1=2W5JfuKJi2Oi3F5f=ieZCJe=B8FQczYhi9iWKkvAFjv_eA@mail.gmail.com>
Message-ID: <20150226164817.32c8e49b@fsol>

On Thu, 26 Feb 2015 15:35:00 +0000
Brett Cannon <brett at python.org> wrote:

> On Wed, Feb 25, 2015 at 5:40 PM Victor Stinner <victor.stinner at gmail.com>
> wrote:
> 
> > Hi,
> >
> > Python has more and more options to emit warnings or enable more checks.
> > Maybe we need one global "strict" option to simply enable all checks and
> > warnings at once?
> >
> > For example, there are deprecation warnings, but also resource warnings.
> > Both should be enabled in the strict mode. Another example is the -bb
> > command line option (bytes warning). It should also be enabled. Another
> > example: -tt (warnings about inconsistent indentation).
> >
> > It can be a command line option, an environment variable and a function to
> > enable it.
> >
> > Perl has "use strict;".
> >
> > For the interactive interpreter, the strict code may be enabled by
> > default, but only if it's easy to disable it.
> >
> 
> Regardless of this REPL discussions, having something like a --strict flag
> to switch on every single warning that you can switch on to its highest
> level would be handy to have.

Isn't that "-Walways"? (IIRC)

Regards

Antoine.



From random832 at fastmail.us  Thu Feb 26 17:15:17 2015
From: random832 at fastmail.us (random832 at fastmail.us)
Date: Thu, 26 Feb 2015 11:15:17 -0500
Subject: [Python-ideas] Show deprecation warnings in the interactive
 interpreter
In-Reply-To: <mclqen$vej$1@ger.gmane.org>
References: <mcjuuk$2pt$1@ger.gmane.org> <85wq365qqx.fsf@benfinney.id.au>
 <mclfqc$pkd$1@ger.gmane.org>
 <1424906844.272258.232513577.6EA7BD96@webmail.messagingengine.com>
 <mclqen$vej$1@ger.gmane.org>
Message-ID: <1424967317.1138049.232834789.45A1EFD9@webmail.messagingengine.com>

On Wed, Feb 25, 2015, at 19:47, Terry Reedy wrote:
> People often experiment with files in editors when writing statements 
> longer than a line or two or when writing multiple statements.  Leaving 
> that aside, people also experiment at *simulated* interactive prompts in 
> guis, such as Idle, that run on Python in normal batch mode.  In either 
> case, changing interactive mode python will have no effect and no
> benefit.

Well, obviously another change would have to be made to Idle.

Your argument seems to boil down to "no-one uses the prompt", so why not
just get rid of it?

From p.f.moore at gmail.com  Thu Feb 26 17:20:59 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Thu, 26 Feb 2015 16:20:59 +0000
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
In-Reply-To: <1424967317.1138049.232834789.45A1EFD9@webmail.messagingengine.com>
References: <mcjuuk$2pt$1@ger.gmane.org> <85wq365qqx.fsf@benfinney.id.au>
 <mclfqc$pkd$1@ger.gmane.org>
 <1424906844.272258.232513577.6EA7BD96@webmail.messagingengine.com>
 <mclqen$vej$1@ger.gmane.org>
 <1424967317.1138049.232834789.45A1EFD9@webmail.messagingengine.com>
Message-ID: <CACac1F9xFJ6E5Qi3kbvUkQ8eUU6yHhpfnvYLpziZGciXeqrGdg@mail.gmail.com>

On 26 February 2015 at 16:15,  <random832 at fastmail.us> wrote:
> On Wed, Feb 25, 2015, at 19:47, Terry Reedy wrote:
>> People often experiment with files in editors when writing statements
>> longer than a line or two or when writing multiple statements.  Leaving
>> that aside, people also experiment at *simulated* interactive prompts in
>> guis, such as Idle, that run on Python in normal batch mode.  In either
>> case, changing interactive mode python will have no effect and no
>> benefit.
>
> Well obviously another change would have to be made to Idle.
>
> Your argument seems to boil down to "no-one uses the prompt", so why not
> just get rid of it?

I use the prompt pretty much all the time (and yes, that's on Windows
- I imagine a lot of Unix users use the prompt :-))
Paul

From tjreedy at udel.edu  Thu Feb 26 18:32:29 2015
From: tjreedy at udel.edu (Terry Reedy)
Date: Thu, 26 Feb 2015 12:32:29 -0500
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
In-Reply-To: <1424967317.1138049.232834789.45A1EFD9@webmail.messagingengine.com>
References: <mcjuuk$2pt$1@ger.gmane.org> <85wq365qqx.fsf@benfinney.id.au>
 <mclfqc$pkd$1@ger.gmane.org>
 <1424906844.272258.232513577.6EA7BD96@webmail.messagingengine.com>
 <mclqen$vej$1@ger.gmane.org>
 <1424967317.1138049.232834789.45A1EFD9@webmail.messagingengine.com>
Message-ID: <mcnlbl$fmr$1@ger.gmane.org>

On 2/26/2015 11:15 AM, random832 at fastmail.us wrote:

> Your argument seems to boil down to "no-one uses the prompt", so why not
> just get rid of it?

Mangling what people write beyond recognition does not help improve python.

-- 
Terry Jan Reedy


From njs at pobox.com  Thu Feb 26 19:54:02 2015
From: njs at pobox.com (Nathaniel Smith)
Date: Thu, 26 Feb 2015 10:54:02 -0800
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
In-Reply-To: <mcnlbl$fmr$1@ger.gmane.org>
References: <mcjuuk$2pt$1@ger.gmane.org> <85wq365qqx.fsf@benfinney.id.au>
 <mclfqc$pkd$1@ger.gmane.org>
 <1424906844.272258.232513577.6EA7BD96@webmail.messagingengine.com>
 <mclqen$vej$1@ger.gmane.org>
 <1424967317.1138049.232834789.45A1EFD9@webmail.messagingengine.com>
 <mcnlbl$fmr$1@ger.gmane.org>
Message-ID: <CAPJVwBmH3eZ3paOaymQHA0O7xhcxWe0iPJ+Ts5gQR-mckmQpaQ@mail.gmail.com>

On Feb 26, 2015 9:33 AM, "Terry Reedy" <tjreedy at udel.edu> wrote:
>
> On 2/26/2015 11:15 AM, random832 at fastmail.us wrote:
>
>> Your argument seems to boil down to "no-one uses the prompt", so why not
>> just get rid of it?
>
>
> Mangling what people write beyond recognition does not help improve
python.

I understand it's frustrating when people write snarky one-liners picking
on you, but having read all your comments here (and having tried and then
given up on replying to your last reply to me): if this is not an accurate
summary of what you're trying to argue, then I honestly have no idea what
you are trying to argue.

-n
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150226/1ea4be51/attachment-0001.html>

From stephen at xemacs.org  Fri Feb 27 03:19:55 2015
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Fri, 27 Feb 2015 11:19:55 +0900
Subject: [Python-ideas] Show deprecation warnings in the
	interactive	interpreter
In-Reply-To: <CAPJVwBmH3eZ3paOaymQHA0O7xhcxWe0iPJ+Ts5gQR-mckmQpaQ@mail.gmail.com>
References: <mcjuuk$2pt$1@ger.gmane.org> <85wq365qqx.fsf@benfinney.id.au>
 <mclfqc$pkd$1@ger.gmane.org>
 <1424906844.272258.232513577.6EA7BD96@webmail.messagingengine.com>
 <mclqen$vej$1@ger.gmane.org>
 <1424967317.1138049.232834789.45A1EFD9@webmail.messagingengine.com>
 <mcnlbl$fmr$1@ger.gmane.org>
 <CAPJVwBmH3eZ3paOaymQHA0O7xhcxWe0iPJ+Ts5gQR-mckmQpaQ@mail.gmail.com>
Message-ID: <87h9u8ccz8.fsf@uwakimon.sk.tsukuba.ac.jp>

Nathaniel Smith writes:
 > On Feb 26, 2015 9:33 AM, "Terry Reedy" <tjreedy at udel.edu> wrote:
 > > On 2/26/2015 11:15 AM, random832 at fastmail.us wrote:

 > >> Your argument seems to boil down to "no-one uses the prompt", so why not
 > >> just get rid of it?

 > > [Hardly.]

 > [I]f this is not an accurate summary of what you're trying to
 > argue, then I honestly have no idea what you are trying to argue.

I'm surprised at "no idea".  He's been clear throughout that because
there are *many* alternative interactive interfaces besides the
CPython interpreter itself, to be truly effective the warnings need to
be propagated to those, and it's not automatic because most of them
don't use the interactive interpreter.  IIUC, at least in IDLE it's
already possible to get them with some fiddling.  Since it's debatable
how important this is to interactive prompt users (especially those on
POSIX systems), he thinks that to be effective it requires much more
to change than just the CPython interpreter, and since it may annoy
more users than would be helped, he doesn't seem to think the change
is justified.

Personally, I tend toward the view that the proposed warning display
is a good idea even if of limited benefit, that it's not likely to
bother too many users, and that it can be extended as desired to the
alternative interactive UIs.  But that doesn't mean I don't understand
his point of view.

From njs at pobox.com  Fri Feb 27 03:43:53 2015
From: njs at pobox.com (Nathaniel Smith)
Date: Thu, 26 Feb 2015 18:43:53 -0800
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
In-Reply-To: <87h9u8ccz8.fsf@uwakimon.sk.tsukuba.ac.jp>
References: <mcjuuk$2pt$1@ger.gmane.org> <85wq365qqx.fsf@benfinney.id.au>
 <mclfqc$pkd$1@ger.gmane.org>
 <1424906844.272258.232513577.6EA7BD96@webmail.messagingengine.com>
 <mclqen$vej$1@ger.gmane.org>
 <1424967317.1138049.232834789.45A1EFD9@webmail.messagingengine.com>
 <mcnlbl$fmr$1@ger.gmane.org>
 <CAPJVwBmH3eZ3paOaymQHA0O7xhcxWe0iPJ+Ts5gQR-mckmQpaQ@mail.gmail.com>
 <87h9u8ccz8.fsf@uwakimon.sk.tsukuba.ac.jp>
Message-ID: <CAPJVwB=NcGvYB77MyCOnzXO7s_LaOC=UNeZxjcG5YB7fbS0+hQ@mail.gmail.com>

On Feb 26, 2015 6:20 PM, "Stephen J. Turnbull" <stephen at xemacs.org> wrote:
>
> Nathaniel Smith writes:
>  > On Feb 26, 2015 9:33 AM, "Terry Reedy" <tjreedy at udel.edu> wrote:
>  > > On 2/26/2015 11:15 AM, random832 at fastmail.us wrote:
>
>  > >> Your argument seems to boil down to "no-one uses the prompt", so
why not
>  > >> just get rid of it?
>
>  > > [Hardly.]
>
>  > [I]f this is not an accurate summary of what you're trying to
>  > argue, then I honestly have no idea what you are trying to argue.
>
> I'm surprised at "no idea".  He's been clear throughout that because
> there are *many* alternative interactive interfaces besides the
> CPython interpreter itself, to be truly effective the warnings need to
> be propagated to those, and it's not automatic because most of them
> don't use the interactive interpreter.  IIUC, at least in IDLE it's
> already possible to get them with some fiddling.  Since it's debatable
> how important this is to interactive prompt users (especially those on
> POSIX systems), he thinks that to be effective it requires much more
> to change than just the CPython interpreter, and

Right, I think everyone can agree with all that.

> since it may annoy
> more users than would be helped, he doesn't seem to think the change
> is justified.

... But afaict Terry has not said one word of this; either I'm missing
something or you're doing the same thing as random832 and filling in the
gaps with guesses. Which is totally reasonable, natural language
understanding always requires filling in gaps like this, and I'm not
annoyed at anyone or anything
(maybe my last message came across as somewhat hostile? My apologies if
so). Really I just think Terry should realize that they haven't been as
clear as they think and elaborate, because I actually would like to know
what they're saying.

-n
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150226/e820e6fd/attachment.html>

From lkb.teichmann at gmail.com  Fri Feb 27 17:56:08 2015
From: lkb.teichmann at gmail.com (Martin Teichmann)
Date: Fri, 27 Feb 2015 17:56:08 +0100
Subject: [Python-ideas] An even simpler customization of class creation
Message-ID: <CAK9R32Q5NrohxG_Et_023pYrTLA8XJfqFq_6VKbO-6q61BhEnQ@mail.gmail.com>

Hello everybody,

recently, I started a discussion on how metaclasses could be
improved. This lead to a major discussion about PEP 422,
where I discussed some possible changes to PEP 422.
Nick proposed I write a competing PEP, so that the community
may decide.

Here it comes. I propose that a class may define a classmethod
called __init_subclass__ which initializes subclasses of this
class, as a lightweight version of metaclasses.

Compared to PEP 422, this proposal is much simpler to implement,
but I do believe that it also makes more sense. PEP 422
has a method called __autodecorate__ which acts on the class
currently being generated. This is normally not what one wants:
when writing a registering class, for example, one does not want
to register the registering class itself. This becomes most evident
if one wants to introduce such a decorator as a mixin: certainly one
does not want to tamper with the mixin class itself, but with the
classes it is mixed into!

A lot of the text in my new PEP has been stolen from PEP 422.
This is why I hesitated to name myself as the only author of my
new PEP. On the other hand, I guess the original authors don't
want to be bothered with my stuff, so I credited them in the
body of my new PEP.

Greetings

Martin

And here it comes, the draft for a new PEP:

PEP: 422a
Title: Simpler customisation of class creation
Version: $Revision$
Last-Modified: $Date$
Author: Martin Teichmann <lkb.teichmann at gmail.com>,
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 27-Feb-2015
Python-Version: 3.5
Post-History: 27-Feb-2015
Replaces: 422


Abstract
========

Currently, customising class creation requires the use of a custom metaclass.
This custom metaclass then persists for the entire lifecycle of the class,
creating the potential for spurious metaclass conflicts.

This PEP proposes to instead support a wide range of customisation
scenarios through a new ``namespace`` parameter in the class header, and
a new ``__init_subclass__`` hook in the class body.

The new mechanism should be easier to understand and use than
implementing a custom metaclass, and thus should provide a gentler
introduction to the full power of Python's metaclass machinery.


Connection to other PEPs
========================

This is a competing proposal to PEP 422 by Nick Coghlan and Daniel Urban.
It shares most of the PEP text and proposed code, but differs
significantly in how it achieves its goals.

Background
==========

For an already created class ``cls``, the term "metaclass" has a clear
meaning: it is the value of ``type(cls)``.

*During* class creation, it has another meaning: it is also used to refer to
the metaclass hint that may be provided as part of the class definition.
While in many cases these two meanings end up referring to one and the same
object, there are two situations where that is not the case:

* If the metaclass hint refers to an instance of ``type``, then it is
  considered as a candidate metaclass along with the metaclasses of all of
  the parents of the class being defined. If a more appropriate metaclass is
  found amongst the candidates, then it will be used instead of the one
  given in the metaclass hint.
* Otherwise, an explicit metaclass hint is assumed to be a factory function
  and is called directly to create the class object. In this case, the final
  metaclass will be determined by the factory function definition. In the
  typical case (where the factory function just calls ``type``, or, in
  Python 3.3 or later, ``types.new_class``) the actual metaclass is then
  determined based on the parent classes.

It is notable that only the actual metaclass is inherited - a factory
function used as a metaclass hook sees only the class currently being
defined, and is not invoked for any subclasses.

In Python 3, the metaclass hint is provided using the ``metaclass=Meta``
keyword syntax in the class header. This allows the ``__prepare__`` method
on the metaclass to be used to create the ``locals()`` namespace used during
execution of the class body (for example, specifying the use of
``collections.OrderedDict`` instead of a regular ``dict``).
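
To make that mechanism concrete, here is a minimal, runnable sketch of a
Python 3 metaclass whose ``__prepare__`` returns an ``OrderedDict`` (the
names ``OrderedMeta`` and ``member_order`` are illustrative, not from the
PEP):

```python
import collections

class OrderedMeta(type):
    @classmethod
    def __prepare__(mcs, name, bases, **kwargs):
        # The class body is executed with this mapping as its
        # namespace, so assignment order is recorded.
        return collections.OrderedDict()

    def __new__(mcs, name, bases, ns, **kwargs):
        cls = super().__new__(mcs, name, bases, dict(ns))
        # Remember the definition order (dunders filtered out)
        # before the namespace is copied into a plain dict.
        cls.member_order = [k for k in ns if not k.startswith('__')]
        return cls

class Ordered(metaclass=OrderedMeta):
    a = 1
    b = 2
    c = 3

print(Ordered.member_order)  # ['a', 'b', 'c']
```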

In Python 2, there was no ``__prepare__`` method (that API was added for
Python 3 by PEP 3115). Instead, a class body could set the ``__metaclass__``
attribute, and the class creation process would extract that value from the
class namespace to use as the metaclass hint. There is `published code`_ that
makes use of this feature.

Another new feature in Python 3 is the zero-argument form of the ``super()``
builtin, introduced by PEP 3135. This feature uses an implicit ``__class__``
reference to the class being defined to replace the "by name" references
required in Python 2. Just as code invoked during execution of a Python 2
metaclass could not call methods that referenced the class by name (as the
name had not yet been bound in the containing scope), similarly, Python 3
metaclasses cannot call methods that rely on the implicit ``__class__``
reference (as it is not populated until after the metaclass has returned
control to the class creation machinery).

Finally, when a class uses a custom metaclass, it can pose additional
challenges to the use of multiple inheritance, as a new class cannot
inherit from parent classes with unrelated metaclasses. This means that
it is impossible to add a metaclass to an already published class: such
an addition is a backwards incompatible change due to the risk of metaclass
conflicts.


Proposal
========

This PEP proposes that a new mechanism to customise class creation be
added to Python 3.5 that meets the following criteria:

1. Integrates nicely with class inheritance structures (including mixins and
   multiple inheritance),
2. Integrates nicely with the implicit ``__class__`` reference and
   zero-argument ``super()`` syntax introduced by PEP 3135,
3. Can be added to an existing base class without a significant risk of
   introducing backwards compatibility problems, and
4. Restores the ability for class namespaces to have some influence on the
   class creation process (above and beyond populating the namespace itself),
   but potentially without the full flexibility of the Python 2 style
   ``__metaclass__`` hook.

Those goals can be achieved by adding two functionalities:

1. An ``__init_subclass__`` hook that initializes all subclasses of a
   given class, and
2. A new keyword parameter ``namespace`` to the class creation statement,
   which provides the initial namespace for the class body.

As an example, the first proposal looks as follows::

   class SpamBase:
       # this is implicitly a @classmethod
       def __init_subclass__(cls, ns, **kwargs):
           # This is invoked after a subclass is created, but before
           # explicit decorators are called.
           # The usual super() mechanisms are used to correctly support
           # multiple inheritance.
           # ns is the class's namespace
           # **kwargs are the keyword arguments to the subclass's
           # class creation statement
           super().__init_subclass__(ns, **kwargs)

   class Spam(SpamBase):
       pass
   # the new hook is called on Spam

To simplify the cooperative multiple inheritance case, ``object`` will gain
a default implementation of the hook that does nothing::

   class object:
       def __init_subclass__(cls, ns):
           pass

Note that this default implementation accepts no keyword arguments,
meaning that all more specialized overriding methods have to process
all keyword arguments themselves.

This general proposal is not a new idea (it was first suggested for
inclusion in the language definition `more than 10 years ago`_, and a
similar mechanism has long been supported by `Zope's ExtensionClass`_),
but the situation has changed sufficiently in recent years that
the idea is worth reconsidering for inclusion.

The second part of the proposal is to have a ``namespace`` keyword
argument to the class declaration statement. If present, its value
will be called without arguments to initialize a subclass's
namespace, very much like a metaclass ``__prepare__`` method would
do.

In addition, the introduction of the metaclass ``__prepare__`` method
in PEP 3115 allows a further enhancement that was not possible in
Python 2: this PEP also proposes that ``type.__prepare__`` be updated
to accept a factory function as a ``namespace`` keyword-only argument.
If present, the value provided as the ``namespace`` argument will be
called without arguments to create the result of ``type.__prepare__``
instead of using a freshly created dictionary instance. For example,
the following will use an ordered dictionary as the class namespace::

   class OrderedBase(namespace=collections.OrderedDict):
        pass

   class Ordered(OrderedBase):
        # cls.__dict__ is still a read-only proxy to the class namespace,
        # but the underlying storage is an OrderedDict instance
        pass


.. note::

    This PEP, along with the existing ability to use ``__prepare__`` to share a
    single namespace amongst multiple class objects, highlights a possible
    issue with the attribute lookup caching: when the underlying mapping is
    updated by other means, the attribute lookup cache is not invalidated
    correctly (this is a key part of the reason class ``__dict__`` attributes
    produce a read-only view of the underlying storage).

    Since the optimisation provided by that cache is highly desirable,
    the use of a preexisting namespace as the class namespace may need to
    be declared as officially unsupported (since the observed behaviour is
    rather strange when the caches get out of sync).


Key Benefits
============


Easier use of custom namespaces for a class
-------------------------------------------

Currently, to use a different type (such as ``collections.OrderedDict``) for
a class namespace, or to use a pre-populated namespace, it is necessary to
write and use a custom metaclass. With this PEP, using a custom namespace
becomes as simple as specifying an appropriate factory function in the
class header.


Easier inheritance of definition time behaviour
-----------------------------------------------

Understanding Python's metaclasses requires a deep understanding of
the type system and the class construction process. This is legitimately
seen as challenging, due to the need to keep multiple moving parts (the code,
the metaclass hint, the actual metaclass, the class object, instances of the
class object) clearly distinct in your mind. Even when you know the rules,
it's still easy to make a mistake if you're not being extremely careful.

Understanding the proposed implicit class initialization hook only requires
ordinary method inheritance, which isn't quite as daunting a task. The new
hook provides a more gradual path towards understanding all of the phases
involved in the class definition process.


Reduced chance of metaclass conflicts
-------------------------------------

One of the big issues that makes library authors reluctant to use metaclasses
(even when they would be appropriate) is the risk of metaclass conflicts.
These occur whenever two unrelated metaclasses are used by the desired
parents of a class definition. This risk also makes it very difficult to
*add* a metaclass to a class that has previously been published without one.

By contrast, adding an ``__init_subclass__`` method to an existing type poses
a similar level of risk to adding an ``__init__`` method: technically, there
is a risk of breaking poorly implemented subclasses, but when that occurs,
it is recognised as a bug in the subclass rather than the library author
breaching backwards compatibility guarantees.


Integrates cleanly with PEP 3135
---------------------------------

Given that the method is called on already existing classes, the new
hook will be able to freely invoke class methods that rely on the
implicit ``__class__`` reference introduced by PEP 3135, including
methods that use the zero argument form of ``super()``.


Replaces many use cases for dynamic setting of ``__metaclass__``
----------------------------------------------------------------

For use cases that don't involve completely replacing the defined
class, Python 2 code that dynamically set ``__metaclass__`` can now
dynamically set ``__init_subclass__`` instead. For more advanced use
cases, introduction of an explicit metaclass (possibly made available
as a required base class) will still be necessary in order to support
Python 3.


A path of introduction into Python
==================================

Most of the benefits of this PEP can already be implemented using
a simple metaclass. For the ``__init_subclass__`` hook this works
all the way down to Python 2.7, while the ``namespace`` keyword needs Python 3.0
to work. Such a class has been `uploaded to PyPI`_.
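
For readers who want to experiment, the following is a self-contained
sketch of such an emulating metaclass. It is my own assumption of how the
emulation could look, not the actual code of the PyPI package; the hook is
spelled ``__subclass_init__`` here to mark it as an emulation rather than
the proposed built-in name:

```python
class SubclassMeta(type):
    def __new__(mcs, name, bases, ns, **kwargs):
        # Strip class keywords before handing off to type.__new__,
        # which would reject unknown ones.
        return super().__new__(mcs, name, bases, ns)

    def __init__(cls, name, bases, ns, **kwargs):
        super().__init__(name, bases, ns)
        # Look the hook up on the *parents* only, so the class that
        # defines it is not itself passed to it.
        hook = getattr(super(cls, cls), '__subclass_init__', None)
        if hook is not None:
            hook(ns, **kwargs)

class SubclassInit(metaclass=SubclassMeta):
    @classmethod
    def __subclass_init__(cls, ns, **kwargs):
        pass  # end of the cooperative chain

# Quick demonstration of the intended semantics:
class Base(SubclassInit):
    seen = []

    @classmethod
    def __subclass_init__(cls, ns, **kwargs):
        super().__subclass_init__(ns, **kwargs)
        Base.seen.append(cls.__name__)

class Child(Base):
    pass

print(Base.seen)  # ['Child'] - Base is not passed to its own hook
```

Note how ``super(cls, cls)`` skips the newly created class in the MRO, which
is exactly the opt-out behaviour the proposal argues for.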

The only drawback of such a metaclass is the aforementioned problem
with metaclasses and multiple inheritance: two classes using such a
metaclass can only be combined if they use exactly the same
metaclass. This calls for the inclusion of such a class in the
standard library; let's call it ``SubclassMeta``, with a base class
using it called ``SubclassInit``. Once all users use this standard
library metaclass, classes from different packages can easily be
combined.

Even so, such classes cannot easily be combined with classes using
other metaclasses. Authors of metaclasses should bear that in mind
and inherit from the standard metaclass if it seems useful for users
of their metaclass to add more functionality. Ultimately, if the
need to combine with other metaclasses is strong enough, the
proposed functionality may be introduced into Python's ``type``.

These arguments suggest the following procedure for including the
proposed functionality in Python:

1. The metaclass implementing this proposal is put onto PyPI, so that
   it can be used and scrutinized.
2. Once the code is properly mature, it can be added to the Python
   standard library. There should be a new module called
   ``metaclass`` which collects tools for metaclass authors, as well
   as documentation of best practices for writing metaclasses.
3. If the need to combine this metaclass with other metaclasses is
   strong enough, it may be included in Python itself.


New Ways of Using Classes
=========================

This proposal enables many use cases, like the following. In the
examples, we still inherit from the ``SubclassInit`` base class. This
would become unnecessary once this PEP is included in Python directly.

Subclass registration
---------------------

Especially when writing a plugin system, one often wants to register
new subclasses of a plugin base class. This can be done as follows::

   class PluginBase(SubclassInit):
       subclasses = []

       def __init_subclass__(cls, ns, **kwargs):
           super().__init_subclass__(ns, **kwargs)
           cls.subclasses.append(cls)

One should note that this also works nicely as a mixin class.
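
As a runnable illustration of the mixin claim, here is a self-contained
sketch using an assumed one-off metaclass (``RegisterMeta`` and
``_register_subclass`` are my names, not the PEP's machinery) to emulate
the proposed hook:

```python
class RegisterMeta(type):
    # One-off emulation: invoke a registration hook found on the
    # *parents* of each newly created class, so the class defining
    # the hook is itself left out.
    def __init__(cls, name, bases, ns):
        super().__init__(name, bases, ns)
        hook = getattr(super(cls, cls), '_register_subclass', None)
        if hook is not None:
            hook()

class PluginMixin(metaclass=RegisterMeta):
    subclasses = []

    @classmethod
    def _register_subclass(cls):
        PluginMixin.subclasses.append(cls)

class Widget:
    pass

# Mixing PluginMixin into an unrelated hierarchy registers the new
# class, but never the mixin itself.
class WidgetPlugin(Widget, PluginMixin):
    pass

print(PluginMixin.subclasses)  # only WidgetPlugin; the mixin is absent
```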

Trait descriptors
-----------------

There are many designs of Python descriptors in the wild which, for
example, check the boundaries of values. Often those "traits" need some
support from a metaclass to work. This is how it would look with this
PEP::

   class Trait:
       def __get__(self, instance, owner):
           return instance.__dict__[self.key]

       def __set__(self, instance, value):
           instance.__dict__[self.key] = value

   class Int(Trait):
       def __set__(self, instance, value):
           # some boundary check code here
           super().__set__(instance, value)

   class HasTraits(SubclassInit):
       def __init_subclass__(cls, ns, **kwargs):
           super().__init_subclass__(ns, **kwargs)
           for k, v in ns.items():
               if isinstance(v, Trait):
                   v.key = k

The new ``namespace`` keyword in the class header enables a number of
interesting options for controlling the way a class is initialised,
including some aspects of the object models of both JavaScript and Ruby.


Order preserving classes
------------------------

::

    class OrderedClassBase(namespace=collections.OrderedDict):
        pass

    class OrderedClass(OrderedClassBase):
        a = 1
        b = 2
        c = 3


Prepopulated namespaces
-----------------------

::

    seed_data = dict(a=1, b=2, c=3)
    class PrepopulatedClass(namespace=seed_data.copy):
        pass


Cloning a prototype class
-------------------------

::

    class NewClass(namespace=Prototype.__dict__.copy):
        pass


Rejected Design Options
=======================


Calling the hook on the class itself
------------------------------------

Adding an ``__autodecorate__`` hook that would be called on the class
itself was the proposed idea of PEP 422.  Most examples work the same
way or even better if the hook is called only on subclasses. In
general, it is much easier to opt in, by explicitly calling the hook
on the class in which it is defined, than to opt out, that is, to
prevent the hook from being called on the very class that defines
it.

This becomes most evident if the class in question is designed as a
mixin: it is very unlikely that the code of the mixin is to be
executed for the mixin class itself, as it is not supposed to be a
complete class on its own.

The original proposal also made major changes in the class
initialization process, rendering it impossible to back-port the
proposal to older python versions.


Other variants of calling the hook
----------------------------------

Other names for the hook were proposed, namely ``__decorate__`` and
``__autodecorate__``. This proposal opts for ``__init_subclass__``, as
it is very close to the ``__init__`` method, just for the subclass,
while it is not very close to decorators, since it does not return
the class.


Requiring an explicit decorator on ``__init_subclass__``
--------------------------------------------------------

One could require the explicit use of ``@classmethod`` on the
``__init_subclass__`` method. It was made implicit since there's no
sensible interpretation for leaving it out, and that case would need
to be detected anyway in order to give a useful error message.

This decision was reinforced after noticing that the user experience of
defining ``__prepare__`` and forgetting the ``@classmethod`` method
decorator is singularly incomprehensible (particularly since PEP 3115
documents it as an ordinary method, and the current documentation doesn't
explicitly say anything one way or the other).


Passing in the namespace directly rather than a factory function
----------------------------------------------------------------

At one point, PEP 422 proposed that the class namespace be passed
directly as a keyword argument, rather than passing a factory function.
However, this encourages an unsupported behaviour (that is, passing the
same namespace to multiple classes, or retaining direct write access
to a mapping used as a class namespace), so the API was switched to
the factory function version.


Possible Extensions
===================

Some extensions to this PEP are imaginable, which are postponed to a
later PEP:

* A ``__new_subclass__`` method could be defined which acts like a
  ``__new__`` for classes. This would be very close to
  ``__autodecorate__`` in PEP 422.
* ``__subclasshook__`` could be made a classmethod in a class instead
  of a method in the metaclass.

References
==========

.. _published code:
   http://mail.python.org/pipermail/python-dev/2012-June/119878.html

.. _more than 10 years ago:
   http://mail.python.org/pipermail/python-dev/2001-November/018651.html

.. _Zope's ExtensionClass:
   http://docs.zope.org/zope_secrets/extensionclass.html

.. _uploaded to PyPI:
   https://pypi.python.org/pypi/metaclass

Copyright
=========

This document has been placed in the public domain.

From tjreedy at udel.edu  Fri Feb 27 22:12:01 2015
From: tjreedy at udel.edu (Terry Reedy)
Date: Fri, 27 Feb 2015 16:12:01 -0500
Subject: [Python-ideas] Show deprecation warnings in the interactive
	interpreter
In-Reply-To: <CAPJVwB=NcGvYB77MyCOnzXO7s_LaOC=UNeZxjcG5YB7fbS0+hQ@mail.gmail.com>
References: <mcjuuk$2pt$1@ger.gmane.org> <85wq365qqx.fsf@benfinney.id.au>
 <mclfqc$pkd$1@ger.gmane.org>
 <1424906844.272258.232513577.6EA7BD96@webmail.messagingengine.com>
 <mclqen$vej$1@ger.gmane.org>
 <1424967317.1138049.232834789.45A1EFD9@webmail.messagingengine.com>
 <mcnlbl$fmr$1@ger.gmane.org>
 <CAPJVwBmH3eZ3paOaymQHA0O7xhcxWe0iPJ+Ts5gQR-mckmQpaQ@mail.gmail.com>
 <87h9u8ccz8.fsf@uwakimon.sk.tsukuba.ac.jp>
 <CAPJVwB=NcGvYB77MyCOnzXO7s_LaOC=UNeZxjcG5YB7fbS0+hQ@mail.gmail.com>
Message-ID: <mcqmj9$91f$1@ger.gmane.org>

On 2/26/2015 9:43 PM, Nathaniel Smith wrote:
> On Feb 26, 2015 6:20 PM, "Stephen J. Turnbull"
>  > Nathaniel Smith writes:

>  >  >  I honestly have no idea what [Terry is] trying to argue.
>  >
>  > I'm surprised at "no idea".  He's been clear throughout that because
>  > there are *many* alternative interactive interfaces besides the
>  > CPython interpreter itself, to be truly effective the warnings need to
>  > be propagated to those, and it's not automatic because most of them
>  > don't use the interactive interpreter.

Nice summary, Stephen.  Thanks.  Add the fact that interactive 
development takes place in editable text as well as at a '>>> ' prompt. 
  In Idle, adding a new statement to the current session at '>>> ' and 
hitting return and adding a new statement to a current file and hitting 
F5 are similar actions. (And they are handled nearly the same by Idle.) 
There are pluses and minuses both ways and I use both, often
interleaved.  I also noted that I occasionally use the real interactive 
interpreter to work on Idle.

Guido said that it is OK to help some but not all, and that he would 
leave it to IDE maintainers to extend the idea.  Using Nick's hints, I 
worked out two ways to apply the idea to Idle, one trivial, one not. 
The trivial fix would apply to all code, whether submitted by '\n' or 
F5.  The other fix could apply to both, probably more easily than not.

> Right, I think everyone can agree with all that.
>
>  > since it may annoy
>  > more users than would be helped, he doesn't seem to think the change
>  > is justified.
>
> ... But afaict Terry has not said one word of this;

Nathaniel, you are right here.  I like to understand the practical 
implications of an idea like this before making a strong judgement.

-- 
Terry Jan Reedy


From ncoghlan at gmail.com  Sat Feb 28 18:10:21 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 1 Mar 2015 03:10:21 +1000
Subject: [Python-ideas] An even simpler customization of class creation
In-Reply-To: <CAK9R32Q5NrohxG_Et_023pYrTLA8XJfqFq_6VKbO-6q61BhEnQ@mail.gmail.com>
References: <CAK9R32Q5NrohxG_Et_023pYrTLA8XJfqFq_6VKbO-6q61BhEnQ@mail.gmail.com>
Message-ID: <CADiSq7eOgAbR1TWDaEzxRgSoqX6=Ztzs62YPLCuV-Y5uya1isQ@mail.gmail.com>

On 28 February 2015 at 02:56, Martin Teichmann <lkb.teichmann at gmail.com> wrote:
> Hello everybody,
>
> recently, I started a discussion on how metaclasses could be
> improved. This lead to a major discussion about PEP 422,
> where I discussed some possible changes to PEP 422.
> Nick proposed I write a competing PEP, so that the community
> may decide.
>
> Here it comes. I propose that a class may define a classmethod
> called __init_subclass__ which initialized subclasses of this
> class, as a lightweight version of metaclasses.
>
> Compared to PEP 422, this proposal is much simpler to implement,
> but I do believe that it also makes more sense. PEP 422
> has a method called __autodecorate__ which acts on the class
> currently being generated. This is normally not what one wants:
> when writing a registering class, for example, one does not want
> to register the registering class itself. Most evident it become if
> one wants to introduce such a decorator as a mixin: certainly one
> does not want to tamper with the mixin class, but with the class
> it is mixed in!

Aye, I like this design. It just wasn't until I saw it written out
fully that it finally clicked why restricting the hook to subclasses
lets both zero-argument super() and explicit namespace access to the
defining class work. Getting the latter to work as well is a definite
strength of your approach over mine.

> A lot of the text in my new PEP has been stolen from PEP 422.
> This is why I hesitated writing myself as the only author of my
> new PEP. On the other hand, I guess the original authors don't
> want to be bothered with my stuff, so I credited them in the
> body of my new PEP.

New PEP numbers can be a good idea for significant redesigns anyway,
otherwise people may end up having to qualify which version they're
talking about while deciding between two competing approaches.

In this case, unless someone comes up with an objection I've missed, I
think your __init_subclass__ design is just plain better than my
__autodecorate__ idea, and as you say, if we find a pressing need for
it, we could always add "__new_subclass__" later to get comparable
functionality. Assuming we don't hear any such objections, I'll
withdraw PEP 422 and we can focus on getting PEP 487 ready for
submission to python-dev.

That leaves my one concern being the "namespace" keyword parameter.
I'd started reconsidering that for PEP 422, and I think the argument
holds at least as strongly for your improved design:
https://www.python.org/dev/peps/pep-0422/#is-the-namespace-concept-worth-the-extra-complexity

Since speed usually isn't a critical concern for class definitions,
it's likely better to just make them ordered by default, which can be
its own PEP, rather than being part of this one.

Regards,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From yawar.amin at gmail.com  Sat Feb 28 21:15:26 2015
From: yawar.amin at gmail.com (Yawar Amin)
Date: Sat, 28 Feb 2015 15:15:26 -0500
Subject: [Python-ideas] Comparable exceptions
In-Reply-To: <CANkHFr_RiT+08sD96JSFY0o9OJA2qLTV=3EMHYFq2--LpA=7-g@mail.gmail.com>
References: <CANkHFr95jcgQDDeNxrmyiu9guAQ=iZ2jfe=gardHv0qc=Um9hQ@mail.gmail.com>
 <CAO41-mNUMPjrzDDd5xgvyp_Gh25qzezY3Q2eTwMporPVsntNPA@mail.gmail.com>
 <CANkHFr_RiT+08sD96JSFY0o9OJA2qLTV=3EMHYFq2--LpA=7-g@mail.gmail.com>
Message-ID: <54F221DE.6010509@gmail.com>

Hi,

On 2015-02-25 08:03, Ionel Cristian Mărieș wrote:
> Yes, it does, however my assertion looks similar to this:
>
>     assert result == [1,2, (3, 4, {"bubu": OSError('foobar')})]

How about reifying the error? I.e., instead of storing the OSError
object, you wrap up the error info ('foobar') in a custom Result
(e.g.) object that you implement your own comparison for? Then, instead
of allowing your function to raise an exception, you catch the
exception and wrap its data in a Result object?
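
A minimal sketch of that idea (the ``Failure``/``capture`` names are mine,
not from the thread): wrap the raised exception in a small value object
with structural equality, so it can sit inside nested data and compare
with ``==``:

```python
class Failure:
    """Reified exception: compares by exception type and args."""
    def __init__(self, exc):
        self.type = type(exc)
        self.args = exc.args

    def __eq__(self, other):
        return (isinstance(other, Failure)
                and self.type is other.type
                and self.args == other.args)

    def __repr__(self):
        return 'Failure(%s%r)' % (self.type.__name__, self.args)

def capture(func, *args, **kwargs):
    # Run func, converting a raised exception into a Failure value.
    try:
        return func(*args, **kwargs)
    except Exception as exc:
        return Failure(exc)

def risky():
    raise OSError('foobar')

result = capture(risky)
print(result == Failure(OSError('foobar')))  # True
```

Because ``Failure`` defines ordinary value equality, it also compares
correctly when nested inside lists, tuples, and dicts, which is exactly
the assertion shape quoted above.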

Regards,

Yawar


-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 834 bytes
Desc: OpenPGP digital signature
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20150228/efed0afd/attachment.sig>