From jackdied at gmail.com  Tue Jun  1 01:31:06 2010
From: jackdied at gmail.com (Jack Diederich)
Date: Mon, 31 May 2010 19:31:06 -0400
Subject: [Python-ideas] An identity dict
In-Reply-To: <hu0tq6$ai7$1@dough.gmane.org>
References: <loom.20100530T052013-34@post.gmane.org>
	<htta0r$9l5$1@dough.gmane.org>
	<loom.20100530T162050-351@post.gmane.org>
	<httueo$3de$1@dough.gmane.org>
	<79AAB0F9-3775-40A4-9408-2A8286FC6EDB@gmail.com>
	<hu0tq6$ai7$1@dough.gmane.org>
Message-ID: <AANLkTimyTecY3kapdkE3TbabVGp9wRt5Zh6ZQewP7Mpt@mail.gmail.com>

On Mon, May 31, 2010 at 2:05 PM, Terry Reedy <tjreedy at udel.edu> wrote:
> current vote: -.3
> I am also not yet convinced, but perhaps could be, that either type, with or
> without generalization, should be in the stdlib. Instances of a user class
> without custom equality are already compared by identity. The use cases for
> keying immutables by identity are pretty sparse. That pretty much leaves
> mutables with custom equality (by value rather than identity).

I'm -1 on the idea without a strong use case.  I vaguely recall
implementing one of these before but I think I was using it as a hacky
weakrefdict.  Looking in my libmisc.py for dict-alikes I see an
OrderedDict (obsoleted), a ForgivingDict (obsoleted by defaultdict), a
ProxyDict, and a DecorateDict.  The ProxyDict can push/pop dicts and
does lookups across all of them, most recent first, and performs sets
in the most recent.  The DecorateDict calls a function on the value
before returning it.  Django has classes with almost the exact same
code (not contributed by me).

Django:
http://code.djangoproject.com/svn/django/trunk/django/utils/datastructures.py
Me:
http://bazaar.launchpad.net/~odbrazen/leanlyn/trunk/annotate/head:/libmisc.py
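For readers who don't want to chase the links, the ProxyDict behavior described above can be sketched in a few lines (illustrative names only; this is not the actual libmisc.py or Django code). Conceptually it is a stack of mappings searched most-recent-first:

```python
class ProxyDict:
    """Sketch of a layered dict: lookups search the most recently
    pushed dict first; writes always go to the most recent one."""

    def __init__(self, *dicts):
        # Start with the given dicts, or a single empty one.
        self._dicts = list(dicts) or [{}]

    def push(self, d=None):
        """Push a new (or given) dict on top of the stack."""
        self._dicts.append({} if d is None else d)

    def pop(self):
        """Remove and return the topmost dict."""
        return self._dicts.pop()

    def __getitem__(self, key):
        # Search from most recent to oldest.
        for d in reversed(self._dicts):
            if key in d:
                return d[key]
        raise KeyError(key)

    def __setitem__(self, key, value):
        # Sets only affect the most recent dict.
        self._dicts[-1][key] = value

    def __contains__(self, key):
        return any(key in d for d in self._dicts)
```

Popping the top layer restores whatever the lower layers held, which is what makes the push/pop scoping useful.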

-Jack


From benjamin at python.org  Tue Jun  1 02:44:41 2010
From: benjamin at python.org (Benjamin Peterson)
Date: Tue, 1 Jun 2010 00:44:41 +0000 (UTC)
Subject: [Python-ideas] An identity dict
References: <loom.20100530T052013-34@post.gmane.org>	<htta0r$9l5$1@dough.gmane.org>
	<loom.20100530T162050-351@post.gmane.org>
	<httueo$3de$1@dough.gmane.org>
	<79AAB0F9-3775-40A4-9408-2A8286FC6EDB@gmail.com>
Message-ID: <loom.20100601T023325-567@post.gmane.org>

Raymond Hettinger <raymond.hettinger at ...> writes:
> Also, I haven't seen much of a discussion of use cases.

Here's a selection of use cases from PyPy's source (You can search for
"identity_dict" to see its use):

In an algorithm for breaking cycles in graphs:
http://codespeak.net/svn/pypy/trunk/pypy/tool/algo/graphlib.py

Keeping track of all the allocated objects in a model of a low level runtime:
http://codespeak.net/svn/pypy/trunk/pypy/rpython/lltypesystem/lltype.py

Tracing the source of a certain kind of type as our type checker annotates
RPython: http://codespeak.net/svn/pypy/trunk/pypy/annotation/bookkeeper.py

Traversing the blocks of a function's graph:
http://codespeak.net/svn/pypy/trunk/pypy/objspace/flow/model.py

Essentially these are places where defined equality should not matter.


I could also use it here:
http://code.activestate.com/recipes/577242-calling-c-level-finalizers-without-__del__/



From benjamin at python.org  Tue Jun  1 02:45:45 2010
From: benjamin at python.org (Benjamin Peterson)
Date: Tue, 1 Jun 2010 00:45:45 +0000 (UTC)
Subject: [Python-ideas] An identity dict
References: <loom.20100530T052013-34@post.gmane.org>
	<htta0r$9l5$1@dough.gmane.org>
	<loom.20100530T162050-351@post.gmane.org>
	<4C030321.4050803@canterbury.ac.nz>
	<89E5DB78-B304-4A9F-B140-96888B2FCCC7@gmail.com>
Message-ID: <loom.20100601T024511-751@post.gmane.org>

Raymond Hettinger <raymond.hettinger at ...> writes:
> Also, there hasn't been much discussion of implementation,
> but unless you're willing to copy and paste most of the
> code in dictobject.c, you're going to end-up with something
> much slower than d[id(obj)]=value.

It can be implemented simply in Python:
http://codespeak.net/svn/pypy/trunk/pypy/lib/identity_dict.py
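The technique can be sketched in a few lines (an illustration of the approach, not the actual PyPy code): use id(key) as the real dict key, and store the key object alongside the value so it stays alive (keeping its id from being reused) and can be recovered during iteration.

```python
class IdentityDict:
    """Sketch: maps keys by identity, ignoring their __hash__/__eq__.
    Stores (key, value) pairs under id(key); holding a reference to the
    key prevents its id from being recycled while it is in the dict."""

    def __init__(self):
        self._data = {}  # id(key) -> (key, value)

    def __setitem__(self, key, value):
        self._data[id(key)] = (key, value)

    def __getitem__(self, key):
        return self._data[id(key)][1]

    def __delitem__(self, key):
        del self._data[id(key)]

    def __contains__(self, key):
        return id(key) in self._data

    def __len__(self):
        return len(self._data)

    def __iter__(self):
        # Yield the original key objects, not their ids.
        for key, _ in self._data.values():
            yield key
```

Two objects that compare equal still occupy separate slots, which is the whole point.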






From raymond.hettinger at gmail.com  Tue Jun  1 03:23:18 2010
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Mon, 31 May 2010 18:23:18 -0700
Subject: [Python-ideas] An identity dict
In-Reply-To: <loom.20100601T024511-751@post.gmane.org>
References: <loom.20100530T052013-34@post.gmane.org>
	<htta0r$9l5$1@dough.gmane.org>
	<loom.20100530T162050-351@post.gmane.org>
	<4C030321.4050803@canterbury.ac.nz>
	<89E5DB78-B304-4A9F-B140-96888B2FCCC7@gmail.com>
	<loom.20100601T024511-751@post.gmane.org>
Message-ID: <A13D6A8C-C5EE-4AC7-A419-85CEC08F981A@gmail.com>


On May 31, 2010, at 5:45 PM, Benjamin Peterson wrote:

> Raymond Hettinger <raymond.hettinger at ...> writes:
>> Also, there hasn't been much discussion of implementation,
>> but unless you're willing to copy and paste most of the
>> code in dictobject.c, you're going to end-up with something
>> much slower than d[id(obj)]=value.
> 
> It can be implemented simply in Python:
> http://codespeak.net/svn/pypy/trunk/pypy/lib/identity_dict.py

That code is pretty much what I expected.
In CPython, it is dramatically slower than
using a regular dictionary with d[id(obj)]=value.
In PyPy, it makes sense because the code
gets optimized as if it were hand coded in C.
IOW, identity_dict.py doesn't make much sense
for other implementations.

> Here's a selection of use cases from PyPy's source (You can search for
> "identity_dict" to see its use):
> 
> In an algorithm for breaking cycles in graphs:
> http://codespeak.net/svn/pypy/trunk/pypy/tool/algo/graphlib.py

This is code that doesn't require or benefit from using an identity dictionary.
Regular dicts work just fine here.  And since identity implies equality
for regular CPython dicts, you already get excellent performance
(i.e. the __eq__ methods never get called when the object identities
already match).
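The shortcut Raymond describes is easy to observe in CPython (the counter class here is purely illustrative):

```python
class EqCounter:
    """Counts how many times __eq__ is invoked during dict operations."""
    calls = 0

    def __eq__(self, other):
        EqCounter.calls += 1
        return self is other

    def __hash__(self):
        return 0  # worst case: every instance lands in the same bucket

a = EqCounter()
d = {a: 'value'}
_ = d[a]  # same object: the identity check short-circuits __eq__
print(EqCounter.calls)  # 0 in CPython
```

Only when a different object with the same hash is probed does __eq__ actually run.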


> Keeping track of all the allocated objects in a model of a low level runtime:
> http://codespeak.net/svn/pypy/trunk/pypy/rpython/lltypesystem/lltype.py

This is a ton of code and I can't easily tell what it is doing or comment on it.


> Tracing the source of a certain kind of type as our type checker annotates
> RPython: http://codespeak.net/svn/pypy/trunk/pypy/annotation/bookkeeper.py

Looks to be another case where a regular dict works just fine.


> Traversing the blocks of a function's graph:
> http://codespeak.net/svn/pypy/trunk/pypy/objspace/flow/model.py

This code also works fine with a regular dictionary or a regular Python set.
If you used the identity_dict.py code mentioned above, it would just
slow down the code.  This isn't really even a dictionary use case;
a set would be a better choice.


> Essentially these are places where defined equality should not matter.

Essentially, these are cases where an identity dictionary isn't
necessary and would in fact be worse performance-wise
in every implementation except for PyPy, which can compile
the pure Python code for identity_dict.py.

Since instances have a default hash equal to their id, and since
identity implies equality for dictionary keys, we already have
a dictionary that handles these cases.  You don't even
have to type d[id(k)]=value; it suffices to write d[k]=value.

Sorry, but I think this idea is a total waste.  Perhaps post it as
a recipe, but it doesn't make sense to try to inject it into the
standard library.


Raymond



From benjamin at python.org  Tue Jun  1 04:31:39 2010
From: benjamin at python.org (Benjamin Peterson)
Date: Tue, 1 Jun 2010 02:31:39 +0000 (UTC)
Subject: [Python-ideas] An identity dict
References: <loom.20100530T052013-34@post.gmane.org>
	<htta0r$9l5$1@dough.gmane.org>
Message-ID: <loom.20100601T043013-773@post.gmane.org>

Lie Ryan <lie.1296 at ...> writes:
> that their id() is expensive is an implementation detail, and the
> developers of PyPy should solve that instead of adding a crutch to the
> stdlib.

The stdlib isn't just about CPython. We already have optimized primitives for
CPython, so I don't see why helping other implementations isn't a good cause.






From raymond.hettinger at gmail.com  Tue Jun  1 06:37:05 2010
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Mon, 31 May 2010 21:37:05 -0700
Subject: [Python-ideas] An identity dict
In-Reply-To: <loom.20100601T043013-773@post.gmane.org>
References: <loom.20100530T052013-34@post.gmane.org>
	<htta0r$9l5$1@dough.gmane.org>
	<loom.20100601T043013-773@post.gmane.org>
Message-ID: <7AC8DB63-DAD6-46EA-89B1-AA339E4D7B43@gmail.com>


On May 31, 2010, at 7:31 PM, Benjamin Peterson wrote:

> Lie Ryan <lie.1296 at ...> writes:
>> that their id() is expensive is an implementation detail, and the
>> developers of PyPy should solve that instead of adding a crutch to the
>> stdlib.
> 
> The stdlib isn't just about CPython. We already have optimized primitives for
> CPython, so I don't see why helping other implementations isn't a good cause.

Benjamin, could you elaborate on several points that are unclear:

* If id() is expensive in PyPy, then how are they helped by the code in 
http://codespeak.net/svn/pypy/trunk/pypy/lib/identity_dict.py
which uses id() for the gets and sets and contains?

* In the examples you posted (such as http://codespeak.net/svn/pypy/trunk/pypy/tool/algo/graphlib.py ),
it appears that PyPy already has an identity dict,  so how are they helped by adding one to the collections module?

* Most of the posted examples already work with regular dicts (which check identity before they check equality) -- don't the other implementations already implement regular dicts with identity-implies-equality in order to pass the test suite?  I would expect the following snippet to work under all versions and implementations of Python:

    >>> class A:
    ...     pass
    ...
    >>> a = A()
    >>> d = {a: 10}
    >>> assert d[a] == 10   # uses a's identity for lookup

* Is the proposal something needed for all implementations or is it just an optimization for a particular, non-CPython implementation?


Raymond




From jh at improva.dk  Tue Jun  1 09:52:44 2010
From: jh at improva.dk (Jacob Holm)
Date: Tue, 01 Jun 2010 09:52:44 +0200
Subject: [Python-ideas] An identity dict
In-Reply-To: <A13D6A8C-C5EE-4AC7-A419-85CEC08F981A@gmail.com>
References: <loom.20100530T052013-34@post.gmane.org>	<htta0r$9l5$1@dough.gmane.org>	<loom.20100530T162050-351@post.gmane.org>	<4C030321.4050803@canterbury.ac.nz>	<89E5DB78-B304-4A9F-B140-96888B2FCCC7@gmail.com>	<loom.20100601T024511-751@post.gmane.org>
	<A13D6A8C-C5EE-4AC7-A419-85CEC08F981A@gmail.com>
Message-ID: <4C04BC4C.2030103@improva.dk>

On 2010-06-01 03:23, Raymond Hettinger wrote:
> 
> On May 31, 2010, at 5:45 PM, Benjamin Peterson wrote:
> 
>> Essentially these are places where defined equality should not matter.

"should not matter" is the important part here.  It might have been
clearer to say "should be ignored" instead.  I think Raymond is
misunderstanding it.


> Essentially, these are cases where an identity dictionary isn't
> necessary and would in-fact be worse performance-wise
> in every implementation except for PyPy which can compile
> the pure python code for indentity_dict.py.

It is necessary, because the objects involved might define their own
__hash__ and __cmp__/__eq__, and these should *not* be used.


> Sorry, but I think this idea is a total waste.  Perhaps post it as
> a recipe, but it doesn't make sense to try to inject it into the
> standard library.

I don't think it is a total waste, but I have seen two ideas in this
thread that I find more generally useful.  One is
"collections.keyfuncdict", which could be trivially used as an
identitydict.  The other is a WeakIdentityDict, which is a WeakKeyDict
that uses only the identity of the keys for hashing/equality.  These two
are independent, one cannot be used to implement the other (unless
collections.keyfuncdict grows an option to not keep strong refs to the
keys, perhaps by providing the inverse keyfunc instead).

Anyway, +0.1 on identitydict and +1 on each of collections.keyfuncdict
and WeakIdentityDict.
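A rough sketch of what such a collections.keyfuncdict might look like (the name and interface are speculative, taken from the discussion above, not an existing API):

```python
class KeyFuncDict:
    """Sketch: a mapping that hashes and compares keys by keyfunc(key).
    The original key object is kept so it can be recovered on iteration."""

    def __init__(self, keyfunc):
        self._keyfunc = keyfunc
        self._data = {}  # keyfunc(key) -> (key, value)

    def __setitem__(self, key, value):
        self._data[self._keyfunc(key)] = (key, value)

    def __getitem__(self, key):
        return self._data[self._keyfunc(key)][1]

    def __contains__(self, key):
        return self._keyfunc(key) in self._data

    def __len__(self):
        return len(self._data)

    def __iter__(self):
        for key, _ in self._data.values():
            yield key
```

With keyfunc=id this behaves as an identity dict; with keyfunc=str.lower it is a case-insensitive string dict. Note that, as Jacob says, it keeps strong references to the keys, so it cannot substitute for a WeakIdentityDict.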

- Jacob


From ziade.tarek at gmail.com  Tue Jun  1 10:54:27 2010
From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=)
Date: Tue, 1 Jun 2010 10:54:27 +0200
Subject: [Python-ideas] stdlib upgrades
Message-ID: <AANLkTin_CD2aZ2xlgo4Uh_MPYQ7ajFMqOwt-L6gOR_nR@mail.gmail.com>

Hello,

That's not a new idea, but I'd like to throw it here again.

Some modules/packages in the stdlib are pretty isolated, which means
that they could be upgraded independently from the rest with no harm.
For example, the unittest package or the email package.

Here's an idea:

1 - add a version number in each package or module of the stdlib that
    is potentially upgradable

2 - create standalone releases of these modules/packages at PyPI, in a
    restricted area 'stdlib upgrades' that can be used only by core devs
    to upload new versions. Each release lists the precise Python
    versions it's compatible with.

3 - once distutils2 is back in the stdlib, provide a command-line
    interface to list upgradable packages, and make it possible to
    upgrade them

4 - an upgraded package lands in a new specific site-packages
    directory and is loaded *before* the one in Lib

Regards
Tarek

-- 
Tarek Ziadé | http://ziade.org


From dickinsm at gmail.com  Tue Jun  1 11:00:23 2010
From: dickinsm at gmail.com (Mark Dickinson)
Date: Tue, 1 Jun 2010 10:00:23 +0100
Subject: [Python-ideas] Date/time literals
In-Reply-To: <AE9FEEDF-4747-4A30-9E40-93195140B259@masklinn.net>
References: <AANLkTim-r9dS1KOYwPUfWsqVUYzTuJEKiuYWe2Z1HU4o@mail.gmail.com>
	<AANLkTikrRUh_PASlTRFQuUJCaDA98SRXkt5qQY7Rpft7@mail.gmail.com>
	<AE9FEEDF-4747-4A30-9E40-93195140B259@masklinn.net>
Message-ID: <AANLkTimpVpZqmZoLRjYlSmpYmueVCIljFrCx2qj6gylD@mail.gmail.com>

On Sun, May 30, 2010 at 9:28 AM, Masklinn <masklinn at masklinn.net> wrote:
>
> datetime does have a bunch of issues and limitations which I believe soon
> become harmful when doing serious date/calendaring work (which I don't
> claim to do, but I've seen colleagues in serious trouble due to both
> personal lack of knowledge in the field and issues with datetime itself):
> it only supports a Gregorian calendar for instance, it's horrendous
> in dealing with timezones, some methods are pretty much broken, the
> constructor refuses "24" as an hour value, and blows up on positive
> leap seconds).

Are there tracker issues open for all these problems?  If not, please
would you consider opening some?  The datetime module has recently
been getting a lot more attention than it used to, thanks largely to
the efforts of Alexander Belopolsky, so I think opening relevant
tracker issues would be worthwhile.

Some of the issues you mention look like easy fixes (e.g., allowing
positive leap seconds, allowing '24:00:00' as a valid time).  The API
problems for timezones look a little bit more serious.

What are the use-cases for non-Gregorian calendars, and why do you
think the datetime module should support them?  This seems like a
specialist need to me.

And which methods are 'pretty much broken'?

If you want to see progress on these issues, please do open some
bugtracker issues.  Or if open issues already exist, it might be worth
pinging them.

Mark


From guido at python.org  Tue Jun  1 16:08:50 2010
From: guido at python.org (Guido van Rossum)
Date: Tue, 1 Jun 2010 07:08:50 -0700
Subject: [Python-ideas] Date/time literals
In-Reply-To: <AANLkTimpVpZqmZoLRjYlSmpYmueVCIljFrCx2qj6gylD@mail.gmail.com>
References: <AANLkTim-r9dS1KOYwPUfWsqVUYzTuJEKiuYWe2Z1HU4o@mail.gmail.com> 
	<AANLkTikrRUh_PASlTRFQuUJCaDA98SRXkt5qQY7Rpft7@mail.gmail.com> 
	<AE9FEEDF-4747-4A30-9E40-93195140B259@masklinn.net>
	<AANLkTimpVpZqmZoLRjYlSmpYmueVCIljFrCx2qj6gylD@mail.gmail.com>
Message-ID: <AANLkTimTSNdmNajL4VjbqZiA_g3cj2H2NHEZBALW14nj@mail.gmail.com>

On Tue, Jun 1, 2010 at 2:00 AM, Mark Dickinson <dickinsm at gmail.com> wrote:
> On Sun, May 30, 2010 at 9:28 AM, Masklinn <masklinn at masklinn.net> wrote:
>>
>> datetime does have a bunch of issues and limitations which I believe soon
>> become harmful when doing serious date/calendaring works (which I don't
>> claim to do, but I've seen colleagues in serious trouble due to both
>> personal lack of knowledge in the field and issues with datetime itself):
>> it only supports a Gregorian calendar for instance, it's horrendous
>> in dealing with timezones, some methods are pretty much broken, the
>> constructor refuses "24" as an hour value, and blows up on positive
>> leap seconds).
>
> Are there tracker issues open for all these problems?  If not, please
> would you consider opening some?  The datetime module has recently
> been getting a lot more attention than it used to, thanks largely to
> the efforts of Alexander Belopolsky, so I think opening relevant
> tracker issues would be worthwhile.
>
> Some of the issues you mention look like easy fixes (e.g., allowing
> positive leap seconds, allowing '24:00:00' as a valid time).

Whoa, the datetime module was explicitly designed not to support leap
seconds. This matches the POSIX standard for timestamps, which,
although commonly explained as "seconds since 1/1/1970 UTC" doesn't
count leap seconds either (it would make the conversions between
timestamps and date/time objects horribly complicated since leap
seconds are not determined by an algorithm). This is all intentional,
since leap seconds are designed to be ignorable by most people except
a few communities like astronomers, who have their own clocks.

> The API problems for timezones look a little bit more serious.

Isn't the main problem that no timezone implementations are provided
by the standard library? There is a reason for that too (although we
should at least have UTC in the stdlib).

> What are the use-cases for non-Gregorian calendars, and why do you
> think the datetime module should support them?  This seems like a
> specialist need to me.

I believe the main use case is compatibility with Java, which does
support other calendars. Not a big motivation for me. :-)

> And which methods are 'pretty much broken'?
>
> If you want to see progress on these issues, please do open some
> bugtracker issues.  Or if open issues already exist, it might be worth
> pinging them.

In general I would hesitate about attempts to "fix" "problems" with
the datetime module that were carefully considered API properties when
the design was first made.

The only problems that I currently take seriously are issues with
dates before 1900, which IIRC stem from reliance on C stdlib functions
for manipulating time structs.

-- 
--Guido van Rossum (python.org/~guido)


From dickinsm at gmail.com  Tue Jun  1 16:41:10 2010
From: dickinsm at gmail.com (Mark Dickinson)
Date: Tue, 1 Jun 2010 15:41:10 +0100
Subject: [Python-ideas] Date/time literals
In-Reply-To: <AANLkTimTSNdmNajL4VjbqZiA_g3cj2H2NHEZBALW14nj@mail.gmail.com>
References: <AANLkTim-r9dS1KOYwPUfWsqVUYzTuJEKiuYWe2Z1HU4o@mail.gmail.com>
	<AANLkTikrRUh_PASlTRFQuUJCaDA98SRXkt5qQY7Rpft7@mail.gmail.com>
	<AE9FEEDF-4747-4A30-9E40-93195140B259@masklinn.net>
	<AANLkTimpVpZqmZoLRjYlSmpYmueVCIljFrCx2qj6gylD@mail.gmail.com>
	<AANLkTimTSNdmNajL4VjbqZiA_g3cj2H2NHEZBALW14nj@mail.gmail.com>
Message-ID: <AANLkTilCg-orxw-wHjkpC3LZXjZs76ptvci49WRWaanc@mail.gmail.com>

On Tue, Jun 1, 2010 at 3:08 PM, Guido van Rossum <guido at python.org> wrote:
> On Tue, Jun 1, 2010 at 2:00 AM, Mark Dickinson <dickinsm at gmail.com> wrote:
>> Some of the issues you mention look like easy fixes (e.g., allowing
>> positive leap seconds, allowing '24:00:00' as a valid time).
>
> Whoa, the datetime module was explicitly designed not to support leap
> seconds. This matches the POSIX standard for timestamps, which,
> although commonly explained as "seconds since 1/1/1970 UTC" doesn't
> count leap seconds either (it would make the conversions between
> timestamps and date/time objects horribly complicated since leap
> seconds are not determined by an algorithm). This is all intentional,
> since leap seconds are designed to be ignorable by most people except
> a few communities like astronomers, who have their own clocks.

Yes, I understand these issues: UTC is not POSIX time.  By 'support
for leap seconds', all I meant (and all I was assuming Masklinn meant)
was that it would be helpful for e.g., datetime.datetime(1985, 6, 30,
23, 59, 60) to be accepted, rather than producing a ValueError as it
currently does:

>>> datetime.datetime(1985, 6, 30, 23, 59, 60)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: second must be in 0..59

As per the POSIX standard (IIUC), that would be immediately converted
to datetime.datetime(1985, 7, 1, 0, 0, 0) internally.  So the datetime
object itself wouldn't support leap seconds, and would continue to use
POSIX time;  only the constructor would support leap seconds.

Similar comments apply to accepting a time of 24:00:00 (and converting
it internally to 00:00:00 on the following day).
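The constructor-level normalization described above could be prototyped with a small wrapper (a hypothetical helper to illustrate the semantics, not a proposed stdlib API):

```python
import datetime

def make_datetime(year, month, day, hour=0, minute=0, second=0):
    """Sketch: accept a positive leap second (second == 60) or the
    end-of-day time 24:00:00 and fold the overflow into the following
    minute/day, the way POSIX mktime() normalizes a struct tm."""
    extra = datetime.timedelta(0)
    if second == 60:  # positive leap second
        second = 59
        extra += datetime.timedelta(seconds=1)
    if hour == 24 and minute == 0 and second == 0:  # end-of-day time
        hour = 0
        extra += datetime.timedelta(days=1)
    return datetime.datetime(year, month, day, hour, minute, second) + extra
```

The resulting datetime object itself never carries a 60th second or a 24th hour; only the construction step accepts them.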

Mark


From jnoller at gmail.com  Tue Jun  1 16:46:48 2010
From: jnoller at gmail.com (Jesse Noller)
Date: Tue, 1 Jun 2010 10:46:48 -0400
Subject: [Python-ideas] stdlib upgrades
In-Reply-To: <AANLkTin_CD2aZ2xlgo4Uh_MPYQ7ajFMqOwt-L6gOR_nR@mail.gmail.com>
References: <AANLkTin_CD2aZ2xlgo4Uh_MPYQ7ajFMqOwt-L6gOR_nR@mail.gmail.com>
Message-ID: <AANLkTikbgeXYippkWCP8O3ULPnAd9HoOnCSfYIwWdla-@mail.gmail.com>

On Tue, Jun 1, 2010 at 4:54 AM, Tarek Ziadé <ziade.tarek at gmail.com> wrote:
> Hello,
>
> That's not a new idea, but I'd like to throw it here again.
>
> Some modules/packages in the stdlib are pretty isolated, which means
> that they could be upgraded with no
> harm, independently from the rest. For example the unittest package,
> or the email package.
>
> Here's an idea:
>
> 1 - add a version number in each package or module of the stdlib that
> is potentially upgradable
>
> 2 - create standalone releases of these modules/packages at PyPI, in a
> restricted area 'stdlib upgrades'
>      that can be used only by core devs to upload new versions. Each
> release lists the precise
>      Python versions it's compatible with.
>
> 3 - once distutils2 is back in the stdlib, provide a command line
> interface to list upgradable packages, and make
>      it possible to upgrade them
>
> 4 - an upgraded package lands in a new specific site-packages
> directory and is loaded *before* the one in Lib
>
> Regards
> Tarek


I dislike this more than I thought I would - I would rather have the
stdlib broken out from core, with more frequent releases than Python as
a whole, than allow piecemeal "blessed" upgrades.
Allowing piecemeal upgrades of the stdlib means you have to say
something akin to:

"I support Python 2.6, with the upgraded unittest (2.6.1.3), socket
(2.6.1.2) and multiprocessing modules"

And so on. Sure, API compatibility should be "fine" - but we all know
that there are exceptions to the rule all the time, and that alone is
enough to put the nix on allowing arbitrary upgrades of individual
modules within the standard lib. For package authors, and users, the
simple "I support 2.6" statement is key. For corporations with strict
upgrade checks and verifications, the same applies.

jesse


From alexander.belopolsky at gmail.com  Tue Jun  1 17:07:51 2010
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Tue, 1 Jun 2010 11:07:51 -0400
Subject: [Python-ideas] Date/time literals
In-Reply-To: <AANLkTimTSNdmNajL4VjbqZiA_g3cj2H2NHEZBALW14nj@mail.gmail.com>
References: <AANLkTim-r9dS1KOYwPUfWsqVUYzTuJEKiuYWe2Z1HU4o@mail.gmail.com>
	<AANLkTikrRUh_PASlTRFQuUJCaDA98SRXkt5qQY7Rpft7@mail.gmail.com>
	<AE9FEEDF-4747-4A30-9E40-93195140B259@masklinn.net>
	<AANLkTimpVpZqmZoLRjYlSmpYmueVCIljFrCx2qj6gylD@mail.gmail.com>
	<AANLkTimTSNdmNajL4VjbqZiA_g3cj2H2NHEZBALW14nj@mail.gmail.com>
Message-ID: <AANLkTilJ8ObNow9umR5MyOfp0HEWO-tVWbDLONSGiwmH@mail.gmail.com>

On Tue, Jun 1, 2010 at 10:08 AM, Guido van Rossum <guido at python.org> wrote:
..
>> Some of the issues you mention look like easy fixes (e.g., allowing
>> positive leap seconds, allowing '24:00:00' as a valid time).
>
> Whoa, the datetime module was explicitly designed not to support leap
> seconds. This matches the POSIX standard for timestamps, which,
> although commonly explained as "seconds since 1/1/1970 UTC" doesn't
> count leap seconds either (it would make the conversions between
> timestamps and date/time objects horribly complicated since leap
> seconds are not determined by an algorithm). This is all intentional,
> since leap seconds are designed to be ignorable by most people except
> a few communities like astronomers, who have their own clocks.
>

The POSIX standard was heavily influenced by the desire to preserve
last century's existing practices.  It's notable that even the leap
year rule was initially specified incorrectly and only fixed in the
2001 version.

Here is the post that I find intriguing:
http://www.mail-archive.com/leapsecs at rom.usno.navy.mil/msg00109.html

An excerpt:

"""
In addition these "glued to the table" cards, there were a number
of unfortunate attitudes:

    "Don't confuse people with UTC.  Everyone uses GMT and knows
    what it means".

    "Lets not complicate things by worrying about the fact that
    the year 2100 is not a leap year."

    "You mean the year 2000 is, but 2100 is not a leap year?"

    "Everyone knows there are only 60 seconds in a minute."

    "I'm lucky if my system's clock is accurate to the minute, so
     I could care less about sometime as small as a leap second".

    "It takes hours, sometime days, for my EMail message to
     reach most people.  Why should I worry about something as
     small as a second?"

    "What matters to me is just that POSIX systems produce the
     same ctime(3) string (i.e., Wed Jun 30 21:49:08 1993\n") when
     given the same time(2) time-stamp."

    "SI?  TAI?  UT1?  I'm having trouble with using UTC instead
     of good old GMT!".
"""

Systems that are aware of leap seconds are not that uncommon.  BSD
derivatives including Mac OS X have time2posix() and posix2time()
functions.  NTP distributes leap seconds notifications.  Any system
that takes time from a GPS source needs to make a leap second
translation.

I think what Mark meant by "easy fixes" was not leap-second-aware
timestamp-to-datetime translations (and back) or datetime arithmetic,
but instead just the ability to store 23:59:60 in a time/datetime
object.  This would allow leap-second-aware applications to use
standard objects to store time and to implement arithmetic as a
correction to the standard datetime arithmetic.  This is much easier
than reimplementing the entire datetime module from scratch.
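The "arithmetic as a correction" idea might look like this sketch, using a truncated, purely illustrative table of leap-second insertion points (a real application would load the full table from the IERS bulletins or an NTP leap-seconds file):

```python
import datetime

# Illustrative, incomplete table: UTC datetimes at which a positive
# leap second had just been inserted (these three are real entries,
# but the full table is much longer and grows over time).
LEAP_SECONDS = [
    datetime.datetime(1985, 7, 1),
    datetime.datetime(1988, 1, 1),
    datetime.datetime(1990, 1, 1),
]

def elapsed_seconds(t0, t1):
    """Seconds actually elapsed between two UTC datetimes: the naive
    POSIX-style difference plus one second per intervening leap second."""
    naive = (t1 - t0).total_seconds()
    leaps = sum(1 for ls in LEAP_SECONDS if t0 < ls <= t1)
    return naive + leaps
```

The standard datetime objects and timedelta subtraction do all the heavy lifting; the correction is a separate, table-driven add-on.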


From alexander.belopolsky at gmail.com  Tue Jun  1 17:23:55 2010
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Tue, 1 Jun 2010 11:23:55 -0400
Subject: [Python-ideas] Date/time literals
In-Reply-To: <AANLkTilCg-orxw-wHjkpC3LZXjZs76ptvci49WRWaanc@mail.gmail.com>
References: <AANLkTim-r9dS1KOYwPUfWsqVUYzTuJEKiuYWe2Z1HU4o@mail.gmail.com>
	<AANLkTikrRUh_PASlTRFQuUJCaDA98SRXkt5qQY7Rpft7@mail.gmail.com>
	<AE9FEEDF-4747-4A30-9E40-93195140B259@masklinn.net>
	<AANLkTimpVpZqmZoLRjYlSmpYmueVCIljFrCx2qj6gylD@mail.gmail.com>
	<AANLkTimTSNdmNajL4VjbqZiA_g3cj2H2NHEZBALW14nj@mail.gmail.com>
	<AANLkTilCg-orxw-wHjkpC3LZXjZs76ptvci49WRWaanc@mail.gmail.com>
Message-ID: <AANLkTikMlxL-JNM8WOO43xanPjNICN91iRZOFDrOKyJo@mail.gmail.com>

On Tue, Jun 1, 2010 at 10:41 AM, Mark Dickinson <dickinsm at gmail.com> wrote:
..
> As per the POSIX standard (IIUC), [datetime(1985, 6, 30, 23, 59, 60)] would be
> immediately converted
> to datetime.datetime(1985, 7, 1, 0, 0, 0) internally.  So the datetime
> object itself wouldn't support leap seconds, and would continue to use
> POSIX time; only the constructor would support leap seconds.
>

It is my understanding that POSIX mandates that the mktime() function
normalizes the tm structure and therefore converts (1985, 6, 30, 23,
59, 60, ...) to (1985, 7, 1, 0, 0, 0, ...).  It is not quite accurate
to say that the tm structure is converted "immediately".  It is perfectly
legal to pass around non-normalized tm structures and to have, for example,
a utc2gps() function that would produce different values for Y-M-D
23:59:60 and Y-M-[D+1] 00:00:00.

I would prefer a similar behavior for datetime constructor:

>>> datetime(1985, 6, 30, 23, 59, 60).second
60
>>> datetime(1985, 6, 30, 23, 59, 60).timetuple()
(1985, 6, 30, 23, 59, 60, ...)

but
>>> datetime(1985, 6, 30, 23, 59, 60) - datetime(1985, 7, 1, 0, 0, 0)
datetime.timedelta(0)


From guido at python.org  Tue Jun  1 18:07:36 2010
From: guido at python.org (Guido van Rossum)
Date: Tue, 1 Jun 2010 09:07:36 -0700
Subject: [Python-ideas] Date/time literals
In-Reply-To: <AANLkTilCg-orxw-wHjkpC3LZXjZs76ptvci49WRWaanc@mail.gmail.com>
References: <AANLkTim-r9dS1KOYwPUfWsqVUYzTuJEKiuYWe2Z1HU4o@mail.gmail.com> 
	<AANLkTikrRUh_PASlTRFQuUJCaDA98SRXkt5qQY7Rpft7@mail.gmail.com> 
	<AE9FEEDF-4747-4A30-9E40-93195140B259@masklinn.net>
	<AANLkTimpVpZqmZoLRjYlSmpYmueVCIljFrCx2qj6gylD@mail.gmail.com> 
	<AANLkTimTSNdmNajL4VjbqZiA_g3cj2H2NHEZBALW14nj@mail.gmail.com> 
	<AANLkTilCg-orxw-wHjkpC3LZXjZs76ptvci49WRWaanc@mail.gmail.com>
Message-ID: <AANLkTilv-hVCPKPpkZvJKRkBVeWnxHW2_Csodonet4f4@mail.gmail.com>

On Tue, Jun 1, 2010 at 7:41 AM, Mark Dickinson <dickinsm at gmail.com> wrote:
> On Tue, Jun 1, 2010 at 3:08 PM, Guido van Rossum <guido at python.org> wrote:
>> On Tue, Jun 1, 2010 at 2:00 AM, Mark Dickinson <dickinsm at gmail.com> wrote:
>>> Some of the issues you mention look like easy fixes (e.g., allowing
>>> positive leap seconds, allowing '24:00:00' as a valid time).
>>
>> Whoa, the datetime module was explicitly designed not to support leap
>> seconds. This matches the POSIX standard for timestamps, which,
>> although commonly explained as "seconds since 1/1/1970 UTC" doesn't
>> count leap seconds either (it would make the conversions between
>> timestamps and date/time objects horribly complicated since leap
>> seconds are not determined by an algorithm). This is all intentional,
>> since leap seconds are designed to be ignorable by most people except
>> a few communities like astronomers, who have their own clocks.
>
> Yes, I understand these issues: UTC is not POSIX time.  By 'support
> for leap seconds', all I meant (and all I was assuming Masklinn meant)
> was that it would be helpful for e.g., datetime.datetime(1985, 6, 30,
> 23, 59, 60) to be accepted, rather than producing a ValueError as it
> currently does:
>
>>>> datetime.datetime(1985, 6, 30, 23, 59, 60)
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> ValueError: second must be in 0..59
>
> As per the POSIX standard (IIUC), that would be immediately converted
> to datetime.datetime(1985, 7, 1, 0, 0, 0) internally.  So the datetime
> object itself wouldn't support leap seconds, and would continue to use
> POSIX time; only the constructor would support leap seconds.
>
> Similar comments apply to accepting a time of 24:00:00 (and converting
> it internally to 00:00:00 on the following day).

What's the use case for these relaxations in argument range checking?
I'd say they are more confusing, since they might lead one to suspect
that leap seconds are in fact supported.

-- 
--Guido van Rossum (python.org/~guido)


From guido at python.org  Tue Jun  1 18:17:53 2010
From: guido at python.org (Guido van Rossum)
Date: Tue, 1 Jun 2010 09:17:53 -0700
Subject: [Python-ideas] Date/time literals
In-Reply-To: <AANLkTikMlxL-JNM8WOO43xanPjNICN91iRZOFDrOKyJo@mail.gmail.com>
References: <AANLkTim-r9dS1KOYwPUfWsqVUYzTuJEKiuYWe2Z1HU4o@mail.gmail.com> 
	<AANLkTikrRUh_PASlTRFQuUJCaDA98SRXkt5qQY7Rpft7@mail.gmail.com> 
	<AE9FEEDF-4747-4A30-9E40-93195140B259@masklinn.net>
	<AANLkTimpVpZqmZoLRjYlSmpYmueVCIljFrCx2qj6gylD@mail.gmail.com> 
	<AANLkTimTSNdmNajL4VjbqZiA_g3cj2H2NHEZBALW14nj@mail.gmail.com> 
	<AANLkTilCg-orxw-wHjkpC3LZXjZs76ptvci49WRWaanc@mail.gmail.com> 
	<AANLkTikMlxL-JNM8WOO43xanPjNICN91iRZOFDrOKyJo@mail.gmail.com>
Message-ID: <AANLkTikNcigdM2bPmxZFQs8lPB7yotEhLOPqzyzAFknA@mail.gmail.com>

On Tue, Jun 1, 2010 at 8:23 AM, Alexander Belopolsky
<alexander.belopolsky at gmail.com> wrote:
> On Tue, Jun 1, 2010 at 10:41 AM, Mark Dickinson <dickinsm at gmail.com> wrote:
> ..
>> As per the POSIX standard (IIUC), [datetime(1985, 6, 30, 23, 59, 60)] would be
>> immediately converted
>> to datetime.datetime(1985, 7, 1, 0, 0, 0) internally.  So the datetime
>> object itself wouldn't support leap seconds, and would continue to use
>> POSIX time; only the constructor would support leap seconds.
>>
>
> It is my understanding that POSIX mandates that mktime() function
> normalizes the tm structure and therefore converts (1985, 6, 30, 23,
> 59, 60, ...) to (1985, 7, 1, 0, 0, 0, ...).  It is not quite accurate
> to say that the tm structure is converted "immediately".  It is perfectly
> legal to pass around non-normalized tm structures and have for example a
> utc2gps() function that would produce different values for Y-M-D
> 23:59:60 and Y-M-[D+1] 00:00:00.
>
> I would prefer a similar behavior for datetime constructor:
>
>>>> datetime(1985, 6, 30, 23, 59, 60).second
> 60
>>>> datetime(1985, 6, 30, 23, 59, 60).timetuple()
> (1985, 6, 30, 23, 59, 60, ...)
>
> but
>>>> datetime(1985, 6, 30, 23, 59, 60) - datetime(1985, 7, 1, 0, 0, 0)
> datetime.timedelta(0)

I expect this will cause a lot of subtle issues. E.g. What should
comparison of an unnormalized datetime value to an equivalent
normalized datetime value yield? How far will you go? Is
datetime.datetime(2010, 6, 1, 36, 0, 0) a way of spelling
datetime.datetime(2010, 6, 2, 12, 0, 0)? How do you force
normalization? Won't it break apps if the .seconds attribute can be
out of range or if normalization calls need to be inserted?

The datetime module was written with "commercial" and everyday use in
mind. In such use, there is no need to carry leap seconds around.

-- 
--Guido van Rossum (python.org/~guido)


From dickinsm at gmail.com  Tue Jun  1 18:28:32 2010
From: dickinsm at gmail.com (Mark Dickinson)
Date: Tue, 1 Jun 2010 17:28:32 +0100
Subject: [Python-ideas] Date/time literals
In-Reply-To: <AANLkTilv-hVCPKPpkZvJKRkBVeWnxHW2_Csodonet4f4@mail.gmail.com>
References: <AANLkTim-r9dS1KOYwPUfWsqVUYzTuJEKiuYWe2Z1HU4o@mail.gmail.com>
	<AANLkTikrRUh_PASlTRFQuUJCaDA98SRXkt5qQY7Rpft7@mail.gmail.com>
	<AE9FEEDF-4747-4A30-9E40-93195140B259@masklinn.net>
	<AANLkTimpVpZqmZoLRjYlSmpYmueVCIljFrCx2qj6gylD@mail.gmail.com>
	<AANLkTimTSNdmNajL4VjbqZiA_g3cj2H2NHEZBALW14nj@mail.gmail.com>
	<AANLkTilCg-orxw-wHjkpC3LZXjZs76ptvci49WRWaanc@mail.gmail.com>
	<AANLkTilv-hVCPKPpkZvJKRkBVeWnxHW2_Csodonet4f4@mail.gmail.com>
Message-ID: <AANLkTikLZ-UnYtRywbrzppMbtHPumxVqjLv-30KIwY4G@mail.gmail.com>

On Tue, Jun 1, 2010 at 5:07 PM, Guido van Rossum <guido at python.org> wrote:
> What's the use case for these relaxations in argument range checking?
> I'd say they are more confusing, since they might lead one to suspect
> that leap seconds are in fact supported.

For the first, it would prevent tuples corresponding to valid UTC
times (or local times) causing an exception in the datetime
constructor.  I don't have any specific use-cases, but it's not hard
to imagine passing a tuple from some external UTC-supporting source to
datetime.datetime.
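To make that concrete, here is a sketch of the adapter such a UTC-supporting source needs today (the helper name is made up; this is not a proposal for stdlib API):

```python
from datetime import datetime, timedelta

def datetime_from_utc_tuple(y, mo, d, h, mi, s):
    """Build a datetime from a UTC tuple, folding a positive leap
    second (second == 60) into the first second of the next minute,
    which is what POSIX normalization would produce anyway."""
    if s == 60:
        # Construct the preceding second, then step forward one second.
        return datetime(y, mo, d, h, mi, 59) + timedelta(seconds=1)
    return datetime(y, mo, d, h, mi, s)

# datetime_from_utc_tuple(1985, 6, 30, 23, 59, 60)
# -> datetime(1985, 7, 1, 0, 0)
```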

The second relaxation (allowing 24:00:00) comes from ISO 8601, but I
don't really know how widespread its use is.  I admit I don't find
this one particularly convincing;  perhaps Masklinn can expand on why
it's useful.

Mark


From guido at python.org  Tue Jun  1 18:29:18 2010
From: guido at python.org (Guido van Rossum)
Date: Tue, 1 Jun 2010 09:29:18 -0700
Subject: [Python-ideas] Date/time literals
In-Reply-To: <AANLkTilJ8ObNow9umR5MyOfp0HEWO-tVWbDLONSGiwmH@mail.gmail.com>
References: <AANLkTim-r9dS1KOYwPUfWsqVUYzTuJEKiuYWe2Z1HU4o@mail.gmail.com> 
	<AANLkTikrRUh_PASlTRFQuUJCaDA98SRXkt5qQY7Rpft7@mail.gmail.com> 
	<AE9FEEDF-4747-4A30-9E40-93195140B259@masklinn.net>
	<AANLkTimpVpZqmZoLRjYlSmpYmueVCIljFrCx2qj6gylD@mail.gmail.com> 
	<AANLkTimTSNdmNajL4VjbqZiA_g3cj2H2NHEZBALW14nj@mail.gmail.com> 
	<AANLkTilJ8ObNow9umR5MyOfp0HEWO-tVWbDLONSGiwmH@mail.gmail.com>
Message-ID: <AANLkTilQdEqX9lgKkBk5DnSLNrvxNVa7_es3Uy06n8zM@mail.gmail.com>

On Tue, Jun 1, 2010 at 8:07 AM, Alexander Belopolsky
<alexander.belopolsky at gmail.com> wrote:
> On Tue, Jun 1, 2010 at 10:08 AM, Guido van Rossum <guido at python.org> wrote:
> ..
>>> Some of the issues you mention look like easy fixes (e.g., allowing
>>> positive leap seconds, allowing '24:00:00' as a valid time).
>>
>> Whoa, the datetime module was explicitly designed not to support leap
>> seconds. This matches the POSIX standard for timestamps, which,
>> although commonly explained as "seconds since 1/1/1970 UTC" doesn't
>> count leap seconds either (it would make the conversions between
>> timestamps and date/time objects horribly complicated since leap
>> seconds are not determined by an algorithm). This is all intentional,
>> since leap seconds are designed to be ignorable by most people except
>> a few communities like astronomers, who have their own clocks.
>>
>
> The POSIX standard was heavily influenced by the desire to preserve
> last century's existing practices.

I don't expect this century's practices will change much. Show me a
labor contract with a provision to pay for work during leap seconds
and I might change my mind.

> It's notable that even the leap
> year rule was initially specified incorrectly and only fixed in the
> 2001 version.

I don't see how that's an argument for supporting leap seconds. The
change here is really about the expectation of the lifetime of
software systems, not unlike what caused Y2K.

> Here is the post that I find intriguing:
> http://www.mail-archive.com/leapsecs at rom.usno.navy.mil/msg00109.html

A rant by someone with a grudge.

> An excerpt:
>
> """
> In addition these "glued to the table" cards, there were a number
> of unfortunate attitudes:
>
>     "Don't confuse people with UTC.  Everyone uses GMT and knows
>     what it means".
>
>     "Lets not complicate things by worrying about the fact that
>     the year 2100 is not a leap year."
>
>     "You mean the year 2000 is, but 2100 is not a leap year?"
>
>     "Everyone knows there are only 60 seconds in a minute."
>
>     "I'm lucky if my system's clock is accurate to the minute, so
>      I could care less about something as small as a leap second".
>
>     "It takes hours, sometimes days, for my EMail message to
>      reach most people.  Why should I worry about something as
>      small as a second?"
>
>     "What matters to me is just that POSIX systems produce the
>      same ctime(3) string (i.e., Wed Jun 30 21:49:08 1993\n") when
>      given the same time(2) time-stamp."
>
>     "SI?  TAI?  UT1?  I'm having trouble with using UTC instead
>      of good old GMT!".
> """

He throws ripe and green comments together in a way that makes it
sound as if not knowing the Gregorian leap year rule is of the same
magnitude as not caring about leap seconds or TAI.

> Systems that are aware of leap seconds are not that uncommon.  BSD
> derivatives including Mac OS X have time2posix() and posix2time()
> functions.  NTP distributes leap second notifications.  Any system
> that takes time from a GPS source needs to make a leap second
> translation.

That's a solved problem though, isn't it? The accounting for leap
seconds properly belongs in the layers closest to NTP / GPS.

All other software running in the typical computer (even Google's
servers) uses interfaces that use POSIX timestamps (albeit often with
fractions of a second supported) or something logically equivalent.

> I think what Mark meant by "easy fixes" was not leap second aware
> timestamp to datetime and back translations or datetime arithmetic,
> but instead just the ability to store 23:59:60 in a time/datetime
> object.  This would allow leap second aware applications to use
> standard objects to store time and implement arithmetic as a
> correction to the standard datetime arithmetic.  This is much easier
> than reimplementing the entire datetime module from scratch.
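For what it's worth, the "correction" part is a small amount of code once the objects can be stored at all.  A sketch, over normalized datetimes (the table excerpt and function name are mine):

```python
from datetime import datetime, timedelta

# Excerpt of the leap-second table: UTC instants at which a positive
# leap second had just been inserted.  A real application would carry
# the full published list.
LEAP_INSTANTS = [datetime(1985, 7, 1), datetime(1988, 1, 1)]

def leap_aware_delta(t1, t2):
    """t1 - t2 in elapsed (SI) seconds: the naive datetime difference
    plus one second for every leap second inserted between the two."""
    extra = sum(1 for ls in LEAP_INSTANTS if t2 < ls <= t1)
    extra -= sum(1 for ls in LEAP_INSTANTS if t1 < ls <= t2)
    return (t1 - t2) + timedelta(seconds=extra)
```

For example, the naive difference between 1985-07-01 00:00:00 and 1985-06-30 23:59:59 is one second, but two SI seconds actually elapsed because of the inserted 23:59:60.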

Tell us about the use case.

Note that if you're talking about times in the future (a very useful
use case for the datetime module) you *can't* account for leap seconds
since it is not known (far) ahead when they will be.

-- 
--Guido van Rossum (python.org/~guido)


From guido at python.org  Tue Jun  1 18:36:42 2010
From: guido at python.org (Guido van Rossum)
Date: Tue, 1 Jun 2010 09:36:42 -0700
Subject: [Python-ideas] Date/time literals
In-Reply-To: <AANLkTikLZ-UnYtRywbrzppMbtHPumxVqjLv-30KIwY4G@mail.gmail.com>
References: <AANLkTim-r9dS1KOYwPUfWsqVUYzTuJEKiuYWe2Z1HU4o@mail.gmail.com> 
	<AANLkTikrRUh_PASlTRFQuUJCaDA98SRXkt5qQY7Rpft7@mail.gmail.com> 
	<AE9FEEDF-4747-4A30-9E40-93195140B259@masklinn.net>
	<AANLkTimpVpZqmZoLRjYlSmpYmueVCIljFrCx2qj6gylD@mail.gmail.com> 
	<AANLkTimTSNdmNajL4VjbqZiA_g3cj2H2NHEZBALW14nj@mail.gmail.com> 
	<AANLkTilCg-orxw-wHjkpC3LZXjZs76ptvci49WRWaanc@mail.gmail.com> 
	<AANLkTilv-hVCPKPpkZvJKRkBVeWnxHW2_Csodonet4f4@mail.gmail.com> 
	<AANLkTikLZ-UnYtRywbrzppMbtHPumxVqjLv-30KIwY4G@mail.gmail.com>
Message-ID: <AANLkTimrElq63LgFaDyRGqyKVwN8SrU-eSLONSSOy_KR@mail.gmail.com>

On Tue, Jun 1, 2010 at 9:28 AM, Mark Dickinson <dickinsm at gmail.com> wrote:
> On Tue, Jun 1, 2010 at 5:07 PM, Guido van Rossum <guido at python.org> wrote:
>> What's the use case for these relaxations in argument range checking?
>> I'd say they are more confusing, since they might lead one to suspect
>> that leap seconds are in fact supported.
>
> For the first, it would prevent tuples corresponding to valid UTC
> times (or local times) causing an exception in the datetime
> constructor.  I don't have any specific use-cases, but it's not hard
> to imagine passing a tuple from some external UTC-supporting source to
> datetime.datetime.

Imagined use cases are just that.

> The second relaxation (allowing 24:00:00) comes from ISO 8601, but I
> don't really know how widespread its use is.  I admit I don't find
> this one particularly convincing; perhaps Masklinn can expand on why
> it's useful.

This I can understand, but more for output than for input. It is
useful to specify the end time of an event (e.g. a party) ending at
midnight as ending at 24:00 on a given date rather than at 00:00 on
the next day, since that might confuse humans.

-- 
--Guido van Rossum (python.org/~guido)


From janssen at parc.com  Tue Jun  1 19:03:44 2010
From: janssen at parc.com (Bill Janssen)
Date: Tue, 1 Jun 2010 10:03:44 PDT
Subject: [Python-ideas] An identity dict
In-Reply-To: <20100530132047.39a5875a@o>
References: <loom.20100530T052013-34@post.gmane.org>
	<20100530132047.39a5875a@o>
Message-ID: <50046.1275411824@parc.com>

Denis, if you're going to post to python-ideas, would you mind taking
that biohazard symbol out of your user name?  My Emacs-based mail reader
thrashes for quite a while trying to find a glyph for it before it gives
up and renders it as a hollow rectangle.

I'd normally just add you to my kill-file, but I hate to give up on
python-ideas people that fast.  I'm sure you're not really a biohazard :-).

Bill


From alexander.belopolsky at gmail.com  Tue Jun  1 19:28:21 2010
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Tue, 1 Jun 2010 13:28:21 -0400
Subject: [Python-ideas] Date/time literals
In-Reply-To: <AANLkTimrElq63LgFaDyRGqyKVwN8SrU-eSLONSSOy_KR@mail.gmail.com>
References: <AANLkTim-r9dS1KOYwPUfWsqVUYzTuJEKiuYWe2Z1HU4o@mail.gmail.com>
	<AANLkTikrRUh_PASlTRFQuUJCaDA98SRXkt5qQY7Rpft7@mail.gmail.com>
	<AE9FEEDF-4747-4A30-9E40-93195140B259@masklinn.net>
	<AANLkTimpVpZqmZoLRjYlSmpYmueVCIljFrCx2qj6gylD@mail.gmail.com>
	<AANLkTimTSNdmNajL4VjbqZiA_g3cj2H2NHEZBALW14nj@mail.gmail.com>
	<AANLkTilCg-orxw-wHjkpC3LZXjZs76ptvci49WRWaanc@mail.gmail.com>
	<AANLkTilv-hVCPKPpkZvJKRkBVeWnxHW2_Csodonet4f4@mail.gmail.com>
	<AANLkTikLZ-UnYtRywbrzppMbtHPumxVqjLv-30KIwY4G@mail.gmail.com>
	<AANLkTimrElq63LgFaDyRGqyKVwN8SrU-eSLONSSOy_KR@mail.gmail.com>
Message-ID: <AANLkTilRp-VZ6gQir4tVbXNGcZ-jWBHPBikkTDTyQSps@mail.gmail.com>

On Tue, Jun 1, 2010 at 12:36 PM, Guido van Rossum <guido at python.org> wrote:
> On Tue, Jun 1, 2010 at 9:28 AM, Mark Dickinson <dickinsm at gmail.com> wrote:
..
>> For the first, it would prevent tuples corresponding to valid UTC
>> times (or local times) causing an exception in the datetime
>> constructor.  I don't have any specific use-cases, but it's not hard
>> to imagine passing a tuple from some external UTC-supporting source to
>> datetime.datetime.
>
> Imagined use cases are just that.

Developers writing generic libraries have to deal with imagined use
cases all the time.  If I write an rfc3339 timestamp parser, I cannot
ignore the fact that XXXX-12-31T23:59:60Z is a valid timestamp.  If I
do, I cannot claim that my parser implements rfc3339.  An application
that uses Python datetime objects to represent time may crash parsing
logs produced in December 2008 on systems that keep time in UTC.

If all my application does is to read timestamps from some source,
store them in the database and display them on a later date, I don't
want to worry that it will crash when presented with 23:59:60.
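For the record, tolerating the leap second in such a parser takes only a couple of lines.  A sketch (the function name is mine, and it handles only the 'Z' suffix form):

```python
import re
from datetime import datetime, timedelta

def parse_rfc3339_utc(s):
    """Parse a 'YYYY-MM-DDTHH:MM:SSZ' timestamp, tolerating a leap
    second by folding second 60 into the start of the next minute."""
    m = re.match(r'(\d{4})-(\d\d)-(\d\d)T(\d\d):(\d\d):(\d\d)Z$', s)
    if m is None:
        raise ValueError('not an RFC 3339 UTC timestamp: %r' % s)
    y, mo, d, h, mi, sec = map(int, m.groups())
    if sec == 60:
        # 23:59:60 becomes 00:00:00 of the next day, POSIX-style.
        return datetime(y, mo, d, h, mi, 59) + timedelta(seconds=1)
    return datetime(y, mo, d, h, mi, sec)

# parse_rfc3339_utc('2008-12-31T23:59:60Z') -> datetime(2009, 1, 1, 0, 0)
```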

Of course, allowing leap seconds in the time/datetime constructor may
be a way to delay detection of a bug.  An application may accept
XXXX-12-31T23:59:60Z, but later rely on the fact that dt1-dt2 ==
timedelta(0) implies dt1 == dt2.  Such issues, if they exist, can be
addressed by the application without replacing the datetime object as a
means of storing timestamps.  On the other hand the current
restriction in the constructor makes datetime fundamentally
incompatible with a number of standards.

PS: I believe systems capable of producing 23:59:60 in timestamps are
already more common than systems that don't use the IEEE standard for
floating point values.  Nevertheless, CPython contains a lot of code
designed to deal with imagined deviations from IEEE 754.


From cool-rr at cool-rr.com  Tue Jun  1 19:36:10 2010
From: cool-rr at cool-rr.com (cool-RR)
Date: Tue, 1 Jun 2010 19:36:10 +0200
Subject: [Python-ideas] Having unbound methods refer to the classes
	they're defined on
Message-ID: <AANLkTin5VQV07UvGwBSYgYYxf8kpnLfAQ1HrfpG8Ezes@mail.gmail.com>

Hello,

I would like to raise an issue here that I've been discussing at
python-porting.

(And I'd like to preface by saying that I'm not intimately familiar with
Python's innards, so if I make any mistakes please correct me.)

In Python 2.x there was an "unbound method" type. An unbound method would
have an attribute `.im_class` that would refer to the class on which the
method was defined. This allowed users to use the `copy_reg` module to
pickle unbound methods by name. (In a similar way to how functions and
classes are pickled by default.)

In Python 3.x unbound methods are plain functions. There is no way of
knowing on which class they are defined, so it's impossible to
pickle them. It is even impossible to tell `copyreg` to use a custom
reducer:
http://stackoverflow.com/questions/2932742/python-using-copyreg-to-define-reducers-for-types-that-already-have-reducers

(To the people who wonder why anyone would want to pickle unbound methods: I
know that it sounds like a weird thing to do. Keep in mind that sometimes
your objects need to get pickled. For example if you're using the
multiprocessing module, and you pass into it an object that somehow refers
to an unbound method, then that method has to be picklable.)
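To illustrate the difference (run under Python 3; the class here is made up):

```python
class Greeter:
    def hello(self):
        return 'hello'

# In Python 3 the "unbound method" is just the plain function object,
# so the 2.x attribute pointing back at the defining class is gone:
f = Greeter.hello
assert not hasattr(f, 'im_class')

# The function still knows its own name, but not its class -- and the
# class is exactly what a pickle-by-name scheme would need:
assert f.__name__ == 'hello'
```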

The idea is: Let's give unbound methods an attribute that will refer to the
class on which they were defined.

What do you think?


Ram.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20100601/7a1e517b/attachment.html>

From python at mrabarnett.plus.com  Tue Jun  1 19:44:23 2010
From: python at mrabarnett.plus.com (MRAB)
Date: Tue, 01 Jun 2010 18:44:23 +0100
Subject: [Python-ideas] Date/time literals
In-Reply-To: <AANLkTikMlxL-JNM8WOO43xanPjNICN91iRZOFDrOKyJo@mail.gmail.com>
References: <AANLkTim-r9dS1KOYwPUfWsqVUYzTuJEKiuYWe2Z1HU4o@mail.gmail.com>	<AANLkTikrRUh_PASlTRFQuUJCaDA98SRXkt5qQY7Rpft7@mail.gmail.com>	<AE9FEEDF-4747-4A30-9E40-93195140B259@masklinn.net>	<AANLkTimpVpZqmZoLRjYlSmpYmueVCIljFrCx2qj6gylD@mail.gmail.com>	<AANLkTimTSNdmNajL4VjbqZiA_g3cj2H2NHEZBALW14nj@mail.gmail.com>	<AANLkTilCg-orxw-wHjkpC3LZXjZs76ptvci49WRWaanc@mail.gmail.com>
	<AANLkTikMlxL-JNM8WOO43xanPjNICN91iRZOFDrOKyJo@mail.gmail.com>
Message-ID: <4C0546F7.8040404@mrabarnett.plus.com>

Alexander Belopolsky wrote:
> On Tue, Jun 1, 2010 at 10:41 AM, Mark Dickinson <dickinsm at gmail.com> wrote:
> ..
>> As per the POSIX standard (IIUC), [datetime(1985, 6, 30, 23, 59, 60)] would be
>> immediately converted
>> to datetime.datetime(1985, 7, 1, 0, 0, 0) internally.  So the datetime
>> object itself wouldn't support leap seconds, and would continue to use
>> POSIX time;  only the constructor would support leap seconds.
>>
> 
> It is my understanding that POSIX mandates that mktime() function
> normalizes the tm structure and therefore converts (1985, 6, 30, 23,
> 59, 60, ...) to (1985, 7, 1, 0, 0, 0, ...).  It is not quite accurate
> to say that tm structure is converted "immediately".  It is perfectly
> legal to pass around non-normalized tm structures and have for example
> utc2gps() function that would produce different values for Y-M-D
> 23:59:60 and Y-M-[D+1] 00:00:00.
> 
> I would prefer a similar behavior for datetime constructor:
> 
>>>> datetime(1985, 6, 30, 23, 59, 60).second
> 60
>>>> datetime(1985, 6, 30, 23, 59, 60).timetuple()
> (1985, 6, 30, 23, 59, 60, ...)
> 
> but
>>>> datetime(1985, 6, 30, 23, 59, 60) - datetime(1985, 7, 1, 0, 0, 0)
> datetime.timedelta(0)
> 
Actually, that's wrong because there was a leap second. The clock went:

     1985-06-30 23:59:59
     1985-06-30 23:59:60
     1985-07-01 00:00:00

The following year, however, it went:

     1986-06-30 23:59:59
     1986-07-01 00:00:00


From marcos.bonci at gmail.com  Tue Jun  1 19:47:12 2010
From: marcos.bonci at gmail.com (Marcos Bonci)
Date: Tue, 1 Jun 2010 14:47:12 -0300
Subject: [Python-ideas] Date/time literals
In-Reply-To: <AANLkTikNcigdM2bPmxZFQs8lPB7yotEhLOPqzyzAFknA@mail.gmail.com>
References: <AANLkTim-r9dS1KOYwPUfWsqVUYzTuJEKiuYWe2Z1HU4o@mail.gmail.com> 
	<AANLkTikrRUh_PASlTRFQuUJCaDA98SRXkt5qQY7Rpft7@mail.gmail.com> 
	<AE9FEEDF-4747-4A30-9E40-93195140B259@masklinn.net>
	<AANLkTimpVpZqmZoLRjYlSmpYmueVCIljFrCx2qj6gylD@mail.gmail.com> 
	<AANLkTimTSNdmNajL4VjbqZiA_g3cj2H2NHEZBALW14nj@mail.gmail.com> 
	<AANLkTilCg-orxw-wHjkpC3LZXjZs76ptvci49WRWaanc@mail.gmail.com> 
	<AANLkTikMlxL-JNM8WOO43xanPjNICN91iRZOFDrOKyJo@mail.gmail.com> 
	<AANLkTikNcigdM2bPmxZFQs8lPB7yotEhLOPqzyzAFknA@mail.gmail.com>
Message-ID: <AANLkTilqhoryrfpDustYMdcTy29hnXnQ_Ow_Ng1xhlNj@mail.gmail.com>

On 1 June 2010 13:17, Guido van Rossum <guido at python.org> wrote:

> On Tue, Jun 1, 2010 at 8:23 AM, Alexander Belopolsky
> <alexander.belopolsky at gmail.com> wrote:
> > On Tue, Jun 1, 2010 at 10:41 AM, Mark Dickinson <dickinsm at gmail.com>
> wrote:
> > ..
> >> As per the POSIX standard (IIUC), [datetime(1985, 6, 30, 23, 59, 60)]
> would be
> >> immediately converted
> >> to datetime.datetime(1985, 7, 1, 0, 0, 0) internally.  So the datetime
> >> object itself wouldn't support leap seconds, and would continue to use
> >> POSIX time;  only the constructor would support leap seconds.
> >>
> >
> > It is my understanding that POSIX mandates that mktime() function
> > normalizes the tm structure and therefore converts (1985, 6, 30, 23,
> > 59, 60, ...) to (1985, 7, 1, 0, 0, 0, ...).  It is not quite accurate
> > to say that tm structure is converted "immediately".  It is perfectly
> > legal to pass around non-normalized tm structures and have for example
> > utc2gps() function that would produce different values for Y-M-D
> > 23:59:60 and Y-M-[D+1] 00:00:00.
> >
> > I would prefer a similar behavior for datetime constructor:
> >
> >>>> datetime(1985, 6, 30, 23, 59, 60).second
> > 60
> >>>> datetime(1985, 6, 30, 23, 59, 60).timetuple()
> > (1985, 6, 30, 23, 59, 60, ...)
> >
> > but
> >>>> datetime(1985, 6, 30, 23, 59, 60) - datetime(1985, 7, 1, 0, 0, 0)
> > datetime.timedelta(0)
>
> I expect this will cause a lot of subtle issues. E.g. What should
> comparison of an unnormalized datetime value to an equivalent
> normalized datetime value yield? How far will you go? Is
> datetime.datetime(2010, 6, 1, 36, 0, 0) a way of spelling
> datetime.datetime(2010, 6, 2, 12, 0, 0)? How do you force
> normalization? Won't it break apps if the .seconds attribute can be
> out of range or if normalization calls need to be inserted?
>
> The datetime module was written with "commercial" and everyday use in
> mind. In such use, there is no need to carry leap seconds around.
>
>
If this is really a design choice, then I guess my suggestions about
date+time literals and a unique/"official" date+time interpretation as
number really aren't good ideas. (Although I can't see why a
precise/scientific approach wouldn't be better than a commercial
one, as commercial applications often rely on precise standards.)

But I still don't understand why datetime.datetime.toordinal returns
an int that truncates time information. Is this deliberate?



>  --
> --Guido van Rossum (python.org/~guido)
>


-- Marcos --
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20100601/73afd6c8/attachment.html>

From alexander.belopolsky at gmail.com  Tue Jun  1 20:10:17 2010
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Tue, 1 Jun 2010 14:10:17 -0400
Subject: [Python-ideas] Date/time literals
In-Reply-To: <AANLkTikNcigdM2bPmxZFQs8lPB7yotEhLOPqzyzAFknA@mail.gmail.com>
References: <AANLkTim-r9dS1KOYwPUfWsqVUYzTuJEKiuYWe2Z1HU4o@mail.gmail.com>
	<AANLkTikrRUh_PASlTRFQuUJCaDA98SRXkt5qQY7Rpft7@mail.gmail.com>
	<AE9FEEDF-4747-4A30-9E40-93195140B259@masklinn.net>
	<AANLkTimpVpZqmZoLRjYlSmpYmueVCIljFrCx2qj6gylD@mail.gmail.com>
	<AANLkTimTSNdmNajL4VjbqZiA_g3cj2H2NHEZBALW14nj@mail.gmail.com>
	<AANLkTilCg-orxw-wHjkpC3LZXjZs76ptvci49WRWaanc@mail.gmail.com>
	<AANLkTikMlxL-JNM8WOO43xanPjNICN91iRZOFDrOKyJo@mail.gmail.com>
	<AANLkTikNcigdM2bPmxZFQs8lPB7yotEhLOPqzyzAFknA@mail.gmail.com>
Message-ID: <AANLkTimHmZOjXIKVeH4laoydrMYMEtolRmsBCy9SMkai@mail.gmail.com>

On Tue, Jun 1, 2010 at 12:17 PM, Guido van Rossum <guido at python.org> wrote:
..
> I expect this will cause a lot of subtle issues.

I will try to answer those.

> E.g. What should
> comparison of an unnormalized datetime value to an equivalent
> normalized datetime value yield?

I am not proposing supporting arbitrary unnormalized datetime values,
only to allow seconds (0 - 60).  I am not proposing any notion of
"equivalent datetime" objects either.  POSIX will require that for  t1
= datetime(1985, 6, 30, 23, 59, 60) and t2 = datetime(1985, 7, 1, 0, 0,
0) time.mktime(t1.timetuple()) == time.mktime(t2.timetuple()), but
this does not mean that t1 and t2 should compare equal.

A more subtle issue is what difference t1 - t2 should produce.  I
think it can be defined as the difference in corresponding POSIX times.

> How far will you go? Is
> datetime.datetime(2010, 6, 1, 36, 0, 0) a way of spelling
> datetime.datetime(2010, 6, 2, 12, 0, 0)?

I would not go any further than extending seconds to the 0-60 range,
which is common to many modern standards.

> How do you force
> normalization?

Normalization is never forced.  A round trip through POSIX timestamp
will naturally produce normalized datetime objects.
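Concretely, the round trip can look like this (a sketch with local-time semantics via mktime; the helper name is mine):

```python
import time
from datetime import datetime

def normalize(y, mo, d, h, mi, s):
    """Normalize a possibly-out-of-range local time tuple by a round
    trip through a POSIX timestamp; mktime() is specified to fold
    second 60 into the next minute."""
    # Complete the 9-field struct tm; isdst=-1 lets the library decide.
    stamp = time.mktime((y, mo, d, h, mi, s, 0, 0, -1))
    return datetime.fromtimestamp(stamp)
```

So normalize(1985, 6, 30, 23, 59, 60) and normalize(1985, 7, 1, 0, 0, 0) produce the same (normalized) datetime object.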

> Won't it break apps if the .seconds attribute can be
> out of range or if normalization calls need to be inserted?
>

Many standards require that the seconds range be 0-60.  Applications
that obtain time from timetuples should already be prepared to handle
this range to be POSIX compliant.  Note that I do not propose changing
internal sources of datetime objects such as datetime.now() to return
dt.seconds == 60.  Therefore all extended range times will originate
outside of the datetime library.  Current applications should already
validate such sources before passing them to the datetime library.  Of
course an application that relies on the constructor throwing an
exception for validation and then asserts that seconds < 60 will break,
but this can be addressed by a proper deprecation schedule.  Maybe even
starting with enabling the extended seconds range with a from
__future__ import.


From debatem1 at gmail.com  Tue Jun  1 20:12:00 2010
From: debatem1 at gmail.com (geremy condra)
Date: Tue, 1 Jun 2010 11:12:00 -0700
Subject: [Python-ideas] stdlib upgrades
In-Reply-To: <AANLkTin_CD2aZ2xlgo4Uh_MPYQ7ajFMqOwt-L6gOR_nR@mail.gmail.com>
References: <AANLkTin_CD2aZ2xlgo4Uh_MPYQ7ajFMqOwt-L6gOR_nR@mail.gmail.com>
Message-ID: <AANLkTin5asM-pWaDzh_ntQO1fWigUXIx7FidMBbR3I8i@mail.gmail.com>

On Tue, Jun 1, 2010 at 1:54 AM, Tarek Ziadé <ziade.tarek at gmail.com> wrote:
> Hello,
>
> That's not a new idea, but I'd like to throw it here again.
>
> Some modules/packages in the stdlib are pretty isolated, which means
> that they could be upgraded with no
> harm, independently from the rest. For example the unittest package,
> or the email package.

What advantage do you see in this relative to, say, breaking off the
stdlib or introducing a sumo addon?

> Here's an idea:
>
> 1 - add a version number in each package or module of the stdlib that
> is potentially upgradable

As in, append it to the module name, or add an interface to modules
to query their version?

> 2 - create standalone releases of these modules/packages at PyPI, in a
> restricted area 'stdlib upgrades'
>     that can be used only by core devs to upload new versions. Each
> release lists the precise
>     Python versions it's compatible with.
>
> 3 - once distutils2 is back in the stdlib, provide a command line
> interface to list upgradable packages, and make
>     it possible to upgrade them

+1 on this for all packages, not just stdlib

> 4 - an upgraded package lands in a new specific site-packages
> directory and is loaded *before* the one in Lib
>
> Regards
> Tarek

Geremy Condra


From tim.peters at gmail.com  Tue Jun  1 20:12:10 2010
From: tim.peters at gmail.com (Tim Peters)
Date: Tue, 1 Jun 2010 14:12:10 -0400
Subject: [Python-ideas] Date/time literals
In-Reply-To: <AANLkTilqhoryrfpDustYMdcTy29hnXnQ_Ow_Ng1xhlNj@mail.gmail.com>
References: <AANLkTim-r9dS1KOYwPUfWsqVUYzTuJEKiuYWe2Z1HU4o@mail.gmail.com> 
	<AANLkTikrRUh_PASlTRFQuUJCaDA98SRXkt5qQY7Rpft7@mail.gmail.com> 
	<AE9FEEDF-4747-4A30-9E40-93195140B259@masklinn.net>
	<AANLkTimpVpZqmZoLRjYlSmpYmueVCIljFrCx2qj6gylD@mail.gmail.com> 
	<AANLkTimTSNdmNajL4VjbqZiA_g3cj2H2NHEZBALW14nj@mail.gmail.com> 
	<AANLkTilCg-orxw-wHjkpC3LZXjZs76ptvci49WRWaanc@mail.gmail.com> 
	<AANLkTikMlxL-JNM8WOO43xanPjNICN91iRZOFDrOKyJo@mail.gmail.com> 
	<AANLkTikNcigdM2bPmxZFQs8lPB7yotEhLOPqzyzAFknA@mail.gmail.com> 
	<AANLkTilqhoryrfpDustYMdcTy29hnXnQ_Ow_Ng1xhlNj@mail.gmail.com>
Message-ID: <AANLkTinVqPa72JHLyWAMgGEj3ANZTLBIOxxmFz62tYSE@mail.gmail.com>

[Marcos Bonci]
> ...
> But I still don't understand why datetime.datetime.toordinal returns
> an int that truncates time information. Is this deliberate?

That it does exactly what it's documented to do is a clue about that ;-)

As the module docs say, the notion of "ordinal" was deliberately
defined in this way:

    This matches the definition of the "proleptic Gregorian" calendar in
    Dershowitz and Reingold's book "Calendrical Calculations", where
    it's the base calendar for all computations. See the book for
    algorithms for converting between proleptic Gregorian ordinals and
    many other calendar systems.

That's the primary use case we had in mind for date & datetime
ordinals.  Indeed, the meaning of "ordinal" is "an integer indicating
position in a sequence".
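A quick illustration of what that contract actually is:

```python
from datetime import date, datetime

# The ordinal counts days in the proleptic Gregorian calendar;
# day 1 is January 1 of year 1.
assert date(1, 1, 1).toordinal() == 1

# toordinal() on a datetime reports the same day number as its date()
# part -- the time of day has no place in a day count:
dt = datetime(2010, 6, 1, 18, 30)
assert dt.toordinal() == dt.date().toordinal() == date(2010, 6, 1).toordinal()

# The round trip recovers the date (and only the date):
assert date.fromordinal(dt.toordinal()) == date(2010, 6, 1)
```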


From ianb at colorstudy.com  Tue Jun  1 20:13:16 2010
From: ianb at colorstudy.com (Ian Bicking)
Date: Tue, 1 Jun 2010 13:13:16 -0500
Subject: [Python-ideas] stdlib upgrades
Message-ID: <AANLkTin3FPrbml8TjGvwKr3EQw1LpspcuqtB3aXxq9M-@mail.gmail.com>

Threading will probably break here as I wasn't on the list for the first
email...

My concern with the standard library is that there's a couple things going
on:

1. The standard library represents "accepted" functionality, kind of best
practice, kind of just conventional.  Everyone (roughly) knows what you are
talking about when you use things from the standard library.
2. The standard library has some firm backward compatibility guarantees.  It
also has some firm stability guarantees, especially within releases (though
in practice, nearly for eternity).
3. The standard library is kind of collectively owned; it's not up to the
whims of one person, and can't be abandoned.
4. The standard library is one big chunk of functionality, upgraded all
under one version number, and specifically works together (though in
practice cross-module refactorings are uncommon).

There's positive things about these features, but 4 really drives me nuts,
and I think is a strong disincentive to putting stuff into the standard
library.  For packaging I think 4 actively damages maintainability.

Packaging is at the intersection of several systems:

* Python versions
* Forward and backward compatibility with distributed libraries
* System policies (e.g., Debian has changed things around a lot in the last
few years)
* A whole other ecosystem of libraries outside of Python (e.g., binding to C
libraries)
* Various developer toolkits, some Python specific (e.g., Cython) some not
(gcc)

I don't think it's practical to think that we can determine some scope of
packaging where it will be stable in the long term, all these things are
changing and many are changing without any particular concern for how it
affects Python (i.e., packaging must be reactive).  And frankly we clearly
do not have packaging figured out; we're still circling in on something...
and I think the circling will be more like a Strange Attractor than a sink
drain.

The issues exist for other libraries that aren't packaging-related, of
course, it's just worse for packaging.  argparse for instance is not
"done"... it has bugs that won't be fixed before release, and functionality
that it should reasonably include.  But there's no path for it to get
better.  Will it have new and better features in Python 3.3?  Who seriously
wants to write code that is only compatible with Python 3.3+ just because of
some feature in argparse?  Instead everyone will work around argparse as it
currently exists.  In the process they'll probably use undocumented APIs,
further calcifying the library and making future improvements disruptive.

It's not very specific to argparse; I think ElementTree has similar issues.
The json library is fairly unique in that it has a scope that can be
"done".  I don't know what to say about wsgiref... it's completely
irrelevant in Python 3 because it was upgraded along the Python schedule
despite being unready to be released (this is relatively harmless as I don't
think anyone is using wsgiref in Python 3).

So, this is the tension I see.  I think aspects of the standard library
process and its guarantees are useful, but the current process means
releasing code that isn't ready or not releasing code that should be
released, and neither is good practice and both compromise those
guarantees.  Lots of moving versions can indeed be difficult to manage...
though it can be made a lot easier with good practices.  Though even then
distutils2 (and pip) does not even fit into that... they both enter into the
workflow before you start working with libraries and versions, making them
somewhat unique (though also giving them some more flexibility as they are
not so strongly tied to the Python runtime, which is where stability
requirements are most needed).

-- 
Ian Bicking  |  http://blog.ianbicking.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20100601/53b784df/attachment.html>

From janssen at parc.com  Tue Jun  1 20:32:34 2010
From: janssen at parc.com (Bill Janssen)
Date: Tue, 1 Jun 2010 11:32:34 PDT
Subject: [Python-ideas] lack of time zone support
In-Reply-To: <AANLkTim-r9dS1KOYwPUfWsqVUYzTuJEKiuYWe2Z1HU4o@mail.gmail.com>
References: <AANLkTim-r9dS1KOYwPUfWsqVUYzTuJEKiuYWe2Z1HU4o@mail.gmail.com>
Message-ID: <51245.1275417154@parc.com>

To me, the single most irritating problem with the Python support for
date/time is the lack of support for time-zone understanding.  This
breaks down into two major issues, %z and lack of a standard time-zone
table.

First, let's say I have to parse a Medusa log file, which contains time
stamps in the form "DD/Mon/YYYY:HH:MM:SS [+|-]HHMM", e.g.
"31/May/2010:07:10:04 -0800".  What I'd like to write is

  tm = time.mktime(time.strptime(timestamp, "%d/%b/%Y:%H:%M:%S %z"))

which is what I'd do if I was writing in C.  But no!  The Python
_strptime module doesn't support "%z".  So instead, I have to pick the
timestamp apart and do things separately and remember that "-0800" isn't
octal, and also isn't the same as -800, and remember whether to add or
subtract it.  This seems insane.  So, IMO, support for %z should be
added to Lib/_strptime.py.  We need a patch.
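A minimal sketch of the manual workaround described above (the helper name `parse_medusa_timestamp` is invented for illustration; it assumes a C-locale month name and a well-formed `[+|-]HHMM` offset):

```python
import calendar
from datetime import datetime, timedelta

def parse_medusa_timestamp(stamp):
    """Parse '31/May/2010:07:10:04 -0800' without strptime %z support.

    Splits off the numeric UTC offset and applies it by hand, returning
    seconds since the epoch (UTC).
    """
    body, offset = stamp.rsplit(" ", 1)
    dt = datetime.strptime(body, "%d/%b/%Y:%H:%M:%S")
    # "-0800" is neither octal nor the integer -800: hours and minutes
    # must be split out before doing any arithmetic.
    sign = -1 if offset.startswith("-") else 1
    hours, minutes = int(offset[1:3]), int(offset[3:5])
    # Subtracting the offset converts the local timestamp to UTC.
    utc = dt - sign * timedelta(hours=hours, minutes=minutes)
    return calendar.timegm(utc.timetuple())
```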

Secondly, we really need concrete subclasses of tzinfo, and some sort of
mapping.  Lots of people have spent lots of time trying to figure out
this cryptic hint in datetime: "The datetime module does not supply any
concrete subclasses of tzinfo."  I'm not sure whether pytz is the best
idea, or what I use, the "zoneinfo" module from python-dateutil.  With
that, I still have to add the Windows timezone names, using the table at
http://unicode.org/repos/cldr/trunk/common/supplemental/windowsZones.xml,
because the code in python-dateutil only works with Windows timezone
names when running on Windows.
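For the second point, a fixed-offset tzinfo subclass takes only a few lines; this sketch (the `FixedOffset` class name is illustrative, not an existing stdlib API) shows roughly what such a concrete implementation looks like:

```python
from datetime import datetime, timedelta, tzinfo

class FixedOffset(tzinfo):
    """Concrete tzinfo for a fixed offset east of UTC, given in minutes."""
    def __init__(self, minutes, name=None):
        self._offset = timedelta(minutes=minutes)
        if name is None:
            # Build a default name like "UTC-08:00" or "UTC+05:30".
            sign = "-" if minutes < 0 else "+"
            hours, rem = divmod(abs(minutes), 60)
            name = "UTC%s%02d:%02d" % (sign, hours, rem)
        self._name = name
    def utcoffset(self, dt):
        return self._offset
    def tzname(self, dt):
        return self._name
    def dst(self, dt):
        return timedelta(0)  # a fixed offset never observes DST

UTC = FixedOffset(0, "UTC")
pst = FixedOffset(-8 * 60)  # the -0800 zone from the log example
```

With that, `datetime(2010, 5, 31, 7, 10, 4, tzinfo=pst).astimezone(UTC)` yields the corresponding UTC time.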

Bill


From alexander.belopolsky at gmail.com  Tue Jun  1 20:36:06 2010
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Tue, 1 Jun 2010 14:36:06 -0400
Subject: [Python-ideas] Date/time literals
In-Reply-To: <4C0546F7.8040404@mrabarnett.plus.com>
References: <AANLkTim-r9dS1KOYwPUfWsqVUYzTuJEKiuYWe2Z1HU4o@mail.gmail.com>
	<AANLkTikrRUh_PASlTRFQuUJCaDA98SRXkt5qQY7Rpft7@mail.gmail.com>
	<AE9FEEDF-4747-4A30-9E40-93195140B259@masklinn.net>
	<AANLkTimpVpZqmZoLRjYlSmpYmueVCIljFrCx2qj6gylD@mail.gmail.com>
	<AANLkTimTSNdmNajL4VjbqZiA_g3cj2H2NHEZBALW14nj@mail.gmail.com>
	<AANLkTilCg-orxw-wHjkpC3LZXjZs76ptvci49WRWaanc@mail.gmail.com>
	<AANLkTikMlxL-JNM8WOO43xanPjNICN91iRZOFDrOKyJo@mail.gmail.com>
	<4C0546F7.8040404@mrabarnett.plus.com>
Message-ID: <AANLkTimWemhZWaEBx-09a5n_ld4Xf6s_QdkZ6L5pVCS7@mail.gmail.com>

On Tue, Jun 1, 2010 at 1:44 PM, MRAB <python at mrabarnett.plus.com> wrote:
..
>> but
>>>>>
>>>>> datetime(1985, 6, 30, 23, 59, 60) - datetime(1985, 7, 1, 0, 0, 0)
>>
>> datetime.timedelta(0)
>>
> Actually, that's wrong because there was a leap second. The clock went:
>
>    1985-06-30 23:59:59
>    1985-06-30 23:59:60
>    1985-07-01 00:00:00
>
> The following year, however, it went:
>
>    1986-06-30 23:59:59
>    1986-07-01 00:00:00

It is only wrong if you expect datetime difference to reflect the
actual duration between the corresponding UTC events.  The datetime
library does not do it even for dates.

For example, on my system

>>> date(1752, 9, 14) - date(1752, 9, 2)
datetime.timedelta(12)

even though the calendar application on the same machine shows that
September 14 was the day after September 2 in 1752.

$ cal 9 1752
   September 1752
Su Mo Tu We Th Fr Sa
       1  2 14 15 16
17 18 19 20 21 22 23
24 25 26 27 28 29 30

This was a deliberate design choice to implement proleptic calendar
rather than a historically more accurate variant.  Similarly I see
nothing wrong with datetime difference not capturing leap seconds.  An
application interested in leap seconds effects, however should still
be able to use the basic datetime object and define its own duration
functions.


From alexander.belopolsky at gmail.com  Tue Jun  1 20:41:40 2010
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Tue, 1 Jun 2010 14:41:40 -0400
Subject: [Python-ideas] lack of time zone support
In-Reply-To: <51245.1275417154@parc.com>
References: <AANLkTim-r9dS1KOYwPUfWsqVUYzTuJEKiuYWe2Z1HU4o@mail.gmail.com>
	<51245.1275417154@parc.com>
Message-ID: <AANLkTim6Qckveptm2UhsDLVsfqpiQWtb6qjoCETWrXpK@mail.gmail.com>

On Tue, Jun 1, 2010 at 2:32 PM, Bill Janssen <janssen at parc.com> wrote:
> To me, the single most irritating problem with the Python support for
> date/time is the lack of support for time-zone understanding.

There are two related issues on the tracker:

http://bugs.python.org/issue5094 "datetime lacks concrete tzinfo impl. for UTC"
http://bugs.python.org/issue6641 "strptime doesn't support %z format ?"


From benjamin at python.org  Tue Jun  1 23:17:25 2010
From: benjamin at python.org (Benjamin Peterson)
Date: Tue, 1 Jun 2010 21:17:25 +0000 (UTC)
Subject: [Python-ideas] An identity dict
References: <loom.20100530T052013-34@post.gmane.org>
	<htta0r$9l5$1@dough.gmane.org>
	<loom.20100601T043013-773@post.gmane.org>
	<7AC8DB63-DAD6-46EA-89B1-AA339E4D7B43@gmail.com>
Message-ID: <loom.20100601T230120-288@post.gmane.org>

Raymond Hettinger <raymond.hettinger at ...> writes:
> Benjamin, could you elaborate of several points that are unclear:
> 
> * If id() is expensive in PyPy, then how are they helped by the code in
> http://codespeak.net/svn/pypy/trunk/pypy/lib/identity_dict.py
> which uses id() for the gets and sets and contains?

At the top of that file, it imports from the special module __pypy__ which
contains an optimized version of the dict.

> 
> * In the examples you posted (such
as http://codespeak.net/svn/pypy/trunk/pypy/tool/algo/graphlib.py),
> it appears that PyPy already has an identity dict, so how are they helped by
adding one to the collections module?

My purpose with those examples was to prove it as a generally useful utility.

> 
> * Most of the posted examples already work with regular dicts (which check
identity before they check equality) -- don't the other implementations already
implement regular dicts which need to have identity-implied-equality in order to
pass the test suite?  I would expect the following snippet to work under all
versions and implementations of Python:
> 
> 
>     >>> class A:
>     ...     pass
>     >>> a = A()
>     >>> d = {a: 10}
>     >>> assert d[a] == 10   # uses a's identity for lookup

Yes, but that would be different if you have two "a"s with __eq__ defined to be
equal and you want to hash them separately.
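A toy illustration of that case (both class names are invented for the example): with a regular dict the two equal-but-distinct instances collapse into one entry, while an id()-keyed mapping keeps them apart:

```python
class Always(object):
    """All instances compare equal and hash alike."""
    def __eq__(self, other):
        return isinstance(other, Always)
    def __hash__(self):
        return 0

class IdentityDict(object):
    """Keys on object identity: stores id(key) -> (key, value).

    Keeping the key alive inside the tuple guarantees its id() is not
    recycled for a different object.
    """
    def __init__(self):
        self._data = {}
    def __setitem__(self, key, value):
        self._data[id(key)] = (key, value)
    def __getitem__(self, key):
        return self._data[id(key)][1]
    def __len__(self):
        return len(self._data)

a, b = Always(), Always()
assert len({a: 1, b: 2}) == 1   # regular dict: the two keys collapse
d = IdentityDict()
d[a], d[b] = 1, 2
assert (len(d), d[a], d[b]) == (2, 1, 2)
```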

> 
> * Is the proposal something needed for all implementations or is it just an
optimization for a particular, non-CPython implementation?

My contention is that an identity dictionary or at least a dictionary with
custom hash and keys is a useful primitive that should be in the standard
library. However, I also see its advantage in avoiding bad performance of id()
based identity dicts in non-CPython implementations.

It is useful to let the implementation optimize it any time there is moving GC
as in Jython and IronPython where id also is expensive. (Basically a mapping has
to be maintained for all objects on which id is called.)





From ziade.tarek at gmail.com  Tue Jun  1 23:40:57 2010
From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=)
Date: Tue, 1 Jun 2010 23:40:57 +0200
Subject: [Python-ideas] stdlib upgrades
In-Reply-To: <AANLkTikbgeXYippkWCP8O3ULPnAd9HoOnCSfYIwWdla-@mail.gmail.com>
References: <AANLkTin_CD2aZ2xlgo4Uh_MPYQ7ajFMqOwt-L6gOR_nR@mail.gmail.com>
	<AANLkTikbgeXYippkWCP8O3ULPnAd9HoOnCSfYIwWdla-@mail.gmail.com>
Message-ID: <AANLkTik4LlDnn71hnGasmsZnszUguGq6WhNrGgn0pNUQ@mail.gmail.com>

On Tue, Jun 1, 2010 at 4:46 PM, Jesse Noller <jnoller at gmail.com> wrote:
[..]
> I dislike this more than I thought I would - I would rather have the
> stdlib broken out from core and have it have more releases than the
> whole of python then allowing for piecemeal "blessed" upgrades.
> Allowing piecemeal upgrades of the stdlib means you have to say
> something akin to:
>
> "I support Python 2.6, with the upgraded unittest (2.6.1.3), socket
> (2.6.1.2) and multiprocessing modules"
>
> And so on. Sure, API compatibility should be "fine" - but we all know
> that there are exceptions to the rule all the time, and that alone is
> enough to put the nix on allowing arbitrary upgrades of individual
> modules within the standard lib. For package authors, and users, the
> simple "I support 2.6" statement is key. For corporations with strict
> upgrade checks and verifications, the same applies.


What I expect would be for some projects to state :

  "I support Python 2.6, with the upgraded unittest (2.6.1.3), or Python 3.2"

Instead of:

   "I support Python 2.6, with unittest2 or Python 3.2 with its own unittest"

Because the latter makes more work for the project itself (and no
difference on the corporation/end-user side), since it has to deal with
two different unittest versions. Well, the same code, but under a
different namespace so that it can be installed on previous Python
versions alongside the stdlib one.

At some point, if a package or module in the stdlib evolves in a
backward-compatible way, it would be nice to be able to upgrade an
existing Python installation.

And this is going to be more and more true with the moratorium I guess: what
people are creating now for Python should work in a wider range of Pythons.

Now, releasing the stdlib on its own and shortening its cycle would also
resolve the problem we have. But then, while there will be fewer
combinations, the problems you have mentioned will remain the same.
Just replace in your example "I support Python 2.6, with the upgraded
unittest (2.6.1.3), socket (2.6.1.2) and multiprocessing modules" by
"I support Python 2.6, with the upgraded stdlib 2.6.1.2".

Regards,
Tarek
-- 
Tarek Ziadé | http://ziade.org


From ziade.tarek at gmail.com  Tue Jun  1 23:45:49 2010
From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=)
Date: Tue, 1 Jun 2010 23:45:49 +0200
Subject: [Python-ideas] stdlib upgrades
In-Reply-To: <AANLkTin5asM-pWaDzh_ntQO1fWigUXIx7FidMBbR3I8i@mail.gmail.com>
References: <AANLkTin_CD2aZ2xlgo4Uh_MPYQ7ajFMqOwt-L6gOR_nR@mail.gmail.com>
	<AANLkTin5asM-pWaDzh_ntQO1fWigUXIx7FidMBbR3I8i@mail.gmail.com>
Message-ID: <AANLkTim6d-TVK78Bg5XDf5Vuz4mD9GgioeMLPiZoW6Ut@mail.gmail.com>

On Tue, Jun 1, 2010 at 8:12 PM, geremy condra <debatem1 at gmail.com> wrote:
> On Tue, Jun 1, 2010 at 1:54 AM, Tarek Ziadé <ziade.tarek at gmail.com> wrote:
>> Hello,
>>
>> That's not a new idea, but I'd like to throw it here again.
>>
>> Some modules/packages in the stdlib are pretty isolated, which means
>> that they could be upgraded with no
>> harm, independently from the rest. For example the unittest package,
>> or the email package.
>
> What advantage do you see in this relative to, say, breaking off the
> stdlib or introducing a sumo addon?

Making it easier for package or module maintainers to take care of
doing those smaller releases.

>
>> Here's an idea:
>>
>> 1 - add a version number in each package or module of the stdlib that
>> is potentially upgradable
>
> As in, append it to the module name, or add an interface to modules
> to query their version?

probably by adding a __version__ in the package's __init__.py or in
the module itself.
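A sketch of how a project might then express its requirement at import time. The version-tuple comparison is an assumption about how such a `__version__` string would be consumed; stdlib modules do not carry one today, so the test below uses a stand-in module:

```python
import types

def meets(mod, required):
    """True if mod.__version__ (e.g. "2.6.1.3") is >= the required tuple."""
    version = getattr(mod, "__version__", "0")
    return tuple(int(part) for part in version.split(".")) >= required

# Stand-in for an upgraded stdlib package carrying a __version__.
fake_unittest = types.ModuleType("unittest")
fake_unittest.__version__ = "2.6.1.3"

assert meets(fake_unittest, (2, 6, 1, 2))
assert not meets(fake_unittest, (2, 6, 2))
```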

Regards
Tarek
-- 
Tarek Ziadé | http://ziade.org


From ziade.tarek at gmail.com  Tue Jun  1 23:53:20 2010
From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=)
Date: Tue, 1 Jun 2010 23:53:20 +0200
Subject: [Python-ideas] stdlib upgrades
In-Reply-To: <AANLkTin3FPrbml8TjGvwKr3EQw1LpspcuqtB3aXxq9M-@mail.gmail.com>
References: <AANLkTin3FPrbml8TjGvwKr3EQw1LpspcuqtB3aXxq9M-@mail.gmail.com>
Message-ID: <AANLkTilX5nCEL4EqVzHvCc9dXGaiiIR89BqJsozK19QI@mail.gmail.com>

On Tue, Jun 1, 2010 at 8:13 PM, Ian Bicking <ianb at colorstudy.com> wrote:
[..]
> 4. The standard library is one big chunk of functionality, upgraded all
> under one version number, and specifically works together (though in
> practice cross-module refactorings are uncommon).
>
> There's positive things about these features, but 4 really drives me nuts,
> and I think is a strong disincentive to putting stuff into the standard
> library.  For packaging I think 4 actively damages maintainability.
>
> Packaging is at the intersection of several systems:
>
> * Python versions
> * Forward and backward compatibility with distributed libraries
> * System policies (e.g., Debian has changed things around a lot in the last
> few years)
> * A whole other ecosystem of libraries outside of Python (e.g., binding to C
> libraries)
> * Various developer toolkits, some Python specific (e.g., Cython) some not
> (gcc)
>
> I don't think it's practical to think that we can determine some scope of
> packaging where it will be stable in the long term, all these things are
> changing and many are changing without any particular concern for how it
> affects Python (i.e., packaging must be reactive).  And frankly we clearly
> do not have packaging figured out, we're still circling in on something...
> and I think the circling will be more like a Strange Attractor than a sink
> drain.

Are you suggesting to have a third layer ?

* Python
* stdlib
* stdlib-extras (distutils2, pip, etc)

is that what some people called a "sumo" release of Python ?


Tarek
-- 
Tarek Ziadé | http://ziade.org


From qrczak at knm.org.pl  Wed Jun  2 00:05:31 2010
From: qrczak at knm.org.pl (Marcin 'Qrczak' Kowalczyk)
Date: Wed, 2 Jun 2010 00:05:31 +0200
Subject: [Python-ideas] An identity dict
In-Reply-To: <loom.20100601T230120-288@post.gmane.org>
References: <loom.20100530T052013-34@post.gmane.org>
	<htta0r$9l5$1@dough.gmane.org> 
	<loom.20100601T043013-773@post.gmane.org>
	<7AC8DB63-DAD6-46EA-89B1-AA339E4D7B43@gmail.com> 
	<loom.20100601T230120-288@post.gmane.org>
Message-ID: <AANLkTinJBOTYlBCPYQbXNb2-clDXcL47thCSDLSRNKBk@mail.gmail.com>

2010/6/1 Benjamin Peterson <benjamin at python.org>:

> My contention is that an identity dictionary or at least a dictionary with
> custom hash and keys is a useful primitive that should be in the standard
> library. However, I also see its advantage in avoiding bad performance of id()
> based identity dicts in non-CPython implementations.
>
> It is useful to let the implementation optimize it any time there is moving GC
> as in Jython and IronPython where id also is expensive. (Basically a mapping has
> to be maintained for all objects on which id is called.)

Here is how I designed this for my language:

You can request the ObjectId of the given object. If an ObjectId
corresponding to the given object is still alive, you always get it
back again, but it can be GC'ed and later created afresh. ObjectIds
are hashable and comparable (with an arbitrary ordering). Hash values
and the ordering are preserved when ObjectIds are kept alive, but they
may be different if ObjectIds are created afresh. An ObjectId contains
an integer index which is unique among ObjectIds being alive at the
same time.

You can make a dictionary with a specified key function. It is
internally backed by something equivalent to f(k) -> (k, v) dict. A
dictionary with ObjectId constructor as the key is an identity
dictionary; it works because it keeps both k and f(k) alive.

An advantage of this scheme is that with a moving GC the id mapping
must be maintained only for objects for which the program keeps their
ObjectIds alive.

A disadvantage is that the program must be careful to not use
ObjectIds in a manner which does not keep them alive yet expects
consistent hashing and ordering. In particular a key-mapped dict which
would store only (k, v) pairs and compute f(k) on the fly would not
work. Also ObjectIds cannot be used to generate printable unique
identifiers which would be valid without having to keep ObjectIds
alive, like in Python's default repr.
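The key-function dictionary described above can be sketched in Python as a mapping backed by f(k) -> (k, v); with f = id it behaves as an identity dict, and with f = str.lower as a case-insensitive one (the `KeyedDict` name is invented for the sketch):

```python
class KeyedDict(object):
    """Mapping with a key function f, internally backed by f(k) -> (k, v).

    Storing the original k next to v keeps both k and f(k) alive,
    which is the property the ObjectId scheme relies on.
    """
    def __init__(self, keyfunc):
        self._f = keyfunc
        self._data = {}
    def __setitem__(self, k, v):
        self._data[self._f(k)] = (k, v)
    def __getitem__(self, k):
        return self._data[self._f(k)][1]
    def __contains__(self, k):
        return self._f(k) in self._data

ci = KeyedDict(str.lower)      # case-insensitive dictionary
ci["Spam"] = 1
assert ci["SPAM"] == 1
ident = KeyedDict(id)          # identity dictionary
```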

-- 
Marcin Kowalczyk


From debatem1 at gmail.com  Wed Jun  2 00:06:15 2010
From: debatem1 at gmail.com (geremy condra)
Date: Tue, 1 Jun 2010 18:06:15 -0400
Subject: [Python-ideas] stdlib upgrades
In-Reply-To: <AANLkTilX5nCEL4EqVzHvCc9dXGaiiIR89BqJsozK19QI@mail.gmail.com>
References: <AANLkTin3FPrbml8TjGvwKr3EQw1LpspcuqtB3aXxq9M-@mail.gmail.com>
	<AANLkTilX5nCEL4EqVzHvCc9dXGaiiIR89BqJsozK19QI@mail.gmail.com>
Message-ID: <AANLkTimMm1lJYiKnwoxlwpUuOF7ZoibjFPzQZVM7EnaN@mail.gmail.com>

On Tue, Jun 1, 2010 at 5:53 PM, Tarek Ziadé <ziade.tarek at gmail.com> wrote:
> On Tue, Jun 1, 2010 at 8:13 PM, Ian Bicking <ianb at colorstudy.com> wrote:
> [..]
>> 4. The standard library is one big chunk of functionality, upgraded all
>> under one version number, and specifically works together (though in
>> practice cross-module refactorings are uncommon).
>>
>> There's positive things about these features, but 4 really drives me nuts,
>> and I think is a strong disincentive to putting stuff into the standard
>> library.  For packaging I think 4 actively damages maintainability.
>>
>> Packaging is at the intersection of several systems:
>>
>> * Python versions
>> * Forward and backward compatibility with distributed libraries
>> * System policies (e.g., Debian has changed things around a lot in the last
>> few years)
>> * A whole other ecosystem of libraries outside of Python (e.g., binding to C
>> libraries)
>> * Various developer toolkits, some Python specific (e.g., Cython) some not
>> (gcc)
>>
>> I don't think it's practical to think that we can determine some scope of
>> packaging where it will be stable in the long term, all these things are
>> changing and many are changing without any particular concern for how it
>> affects Python (i.e., packaging must be reactive).  And frankly we clearly
>> do not have packaging figured out, we're still circling in on something...
>> and I think the circling will be more like a Strange Attractor than a sink
>> drain.
>
> Are you suggesting to have a third layer ?
>
> * Python
> * stdlib
> * stdlib-extras (distutils2, pip, etc)
>
> is that what some people called a "sumo" release of Python ?
>
>
> Tarek

That's what I've been advocating.

Geremy Condra


From guido at python.org  Wed Jun  2 00:40:04 2010
From: guido at python.org (Guido van Rossum)
Date: Tue, 1 Jun 2010 15:40:04 -0700
Subject: [Python-ideas] Date/time literals
In-Reply-To: <AANLkTimHmZOjXIKVeH4laoydrMYMEtolRmsBCy9SMkai@mail.gmail.com>
References: <AANLkTim-r9dS1KOYwPUfWsqVUYzTuJEKiuYWe2Z1HU4o@mail.gmail.com> 
	<AANLkTikrRUh_PASlTRFQuUJCaDA98SRXkt5qQY7Rpft7@mail.gmail.com> 
	<AE9FEEDF-4747-4A30-9E40-93195140B259@masklinn.net>
	<AANLkTimpVpZqmZoLRjYlSmpYmueVCIljFrCx2qj6gylD@mail.gmail.com> 
	<AANLkTimTSNdmNajL4VjbqZiA_g3cj2H2NHEZBALW14nj@mail.gmail.com> 
	<AANLkTilCg-orxw-wHjkpC3LZXjZs76ptvci49WRWaanc@mail.gmail.com> 
	<AANLkTikMlxL-JNM8WOO43xanPjNICN91iRZOFDrOKyJo@mail.gmail.com> 
	<AANLkTikNcigdM2bPmxZFQs8lPB7yotEhLOPqzyzAFknA@mail.gmail.com> 
	<AANLkTimHmZOjXIKVeH4laoydrMYMEtolRmsBCy9SMkai@mail.gmail.com>
Message-ID: <AANLkTimTf-kzzgd2ll8yKrhEL-ytPtPtr7PX4pfsyvbP@mail.gmail.com>

On Tue, Jun 1, 2010 at 11:10 AM, Alexander Belopolsky
<alexander.belopolsky at gmail.com> wrote:
> On Tue, Jun 1, 2010 at 12:17 PM, Guido van Rossum <guido at python.org> wrote:
> ..
>> I expect this will cause a lot of subtle issues.
>
> I will try to answer to those.
>
>> E.g. What should
>> comparison of an unnormalized datetime value to an equivalent
>> normalized datetime value yield?
>
> I am not proposing supporting arbitrary unnormalized datetime values,
> only to allow seconds (0 - 60).  I am not proposing any notion of
> "equivalent datetime" objects either.  POSIX will require that for t1
> = datetime(1985, 6, 30, 23, 59, 60) and t2 = datetime(1985, 7, 1, 0, 0,
> 0) time.mktime(t1.timetuple()) == time.mktime(t2.timetuple()), but
> this does not mean that t1 and t2 should compare equal.
>
> It is a more subtle issue, what difference t1 - t2 should produce.  I
> think it can be defined as difference in corresponding POSIX times.
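For context, current CPython already exhibits the asymmetry under discussion: time.strptime accepts a leap second in a timetuple, while the datetime constructor rejects second=60:

```python
import time
from datetime import datetime

# A timetuple happily carries tm_sec == 60 (strptime's %S pattern
# allows 60-61 precisely because of leap seconds).
t = time.strptime("1985-06-30 23:59:60", "%Y-%m-%d %H:%M:%S")
assert t.tm_sec == 60

# The datetime constructor, however, insists on 0 <= second <= 59.
try:
    datetime(1985, 6, 30, 23, 59, 60)
    rejected = False
except ValueError:
    rejected = True
assert rejected
```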

But consistency within the API is out the window. Currently datetimes
are linearly ordered like numbers and if a difference is zero the two
values are the same.

I think it would be safer for your use case to either store the tuple
or the string representation, if you really need to represent a leap
second.

Also note that there will be no validation possible for future
datetimes (and for past dates it would require an up-to-date leap
second database).

>> How far will you go? Is
>> datetime.datetime(2010, 6, 1, 36, 0, 0) a way of spelling
>> datetime.datetime(2010, 6, 2, 12, 0 0) ?
>
> I would not go any further than extending seconds to 0-60 range which
> is common to many modern standards.

That's good.

>> How do you force
>> normalization?
>
> Normalization is never forced.  A round trip through POSIX timestamp
> will naturally produce normalized datetime objects.

Well code that for whatever reason wants normalized timestamps only
will have to know about this method to force normalization, so it
would be a backwards incompatibility (since currently one can assume
that *all* datetime objects are normalized).

>> Won't it break apps if the .seconds attribute can be
>> out of range or if normalization calls need to be inserted?
>
> Many standards require that seconds range be 0-60.  Applications that
> obtain time from timetuples should already be prepared to handle this
> range to be POSIX compliant.  Note that I do not propose changing
> internal sources of datetime objects such as datetime.now() to return
> dt.seconds == 60.  Therefore all extended range times will originate
> outside of the datetime library.  Current applications should already
> validate such sources before passing them to the datetime library.  Of
> course an application that relies on the constructor throwing an exception
> for validation and then asserts that seconds < 60 will break, but this
> can be addressed by a proper deprecation schedule.  Maybe even starting
> with enabling the extended seconds range with a from __future__ import.

I see nothing but trouble down this road.

Also: http://en.wikipedia.org/wiki/Leap_second#Proposal_to_abolish_leap_seconds

[and later]
On Tue, Jun 1, 2010 at 11:36 AM, Alexander Belopolsky
<alexander.belopolsky at gmail.com> wrote:
> On Tue, Jun 1, 2010 at 1:44 PM, MRAB <python at mrabarnett.plus.com> wrote:
> ..
>>> but
>>>>>>
>>>>>> datetime(1985, 6, 30, 23, 59, 60) - datetime(1985, 7, 1, 0, 0, 0)
>>>
>>> datetime.timedelta(0)
>>>
>> Actually, that's wrong because there was a leap second. The clock went:
>>
>>    1985-06-30 23:59:59
>>    1985-06-30 23:59:60
>>    1985-07-01 00:00:00
>>
>> The following year, however, it went:
>>
>>    1986-06-30 23:59:59
>>    1986-07-01 00:00:00
>
> It is only wrong if you expect datetime difference to reflect the
> actual duration between the corresponding UTC events.

What on earth do you mean by *actual duration*? Most datetimes are
derived from clocks that aren't accurate to a second even.

> The datetime
> library does not do it even for dates.
>
> For example, on my system
>
>>>> date(1752, 9, 14) - date(1752, 9, 2)
> datetime.timedelta(12)
>
> even though the calendar application on the same machine shows that
> September 14 was the day after September 2 in 1752.

And here you are mixing topics completely -- calendar reform is a
completely different topic from leap seconds.

> $ cal 9 1752
>   September 1752
> Su Mo Tu We Th Fr Sa
>       1  2 14 15 16
> 17 18 19 20 21 22 23
> 24 25 26 27 28 29 30
>
> This was a deliberate design choice to implement proleptic calendar
> rather than a historically more accurate variant.  Similarly I see
> nothing wrong with datetime difference not capturing leap seconds.  An
> application interested in leap seconds effects, however should still
> be able to use the basic datetime object and define its own duration
> functions.

You haven't proven this need at all, and your reference to calendar
reform (which by the way didn't happen in the same year or even
century everywhere) makes it weaker still.

I've put my foot down against leap seconds once before (when datetime
was introduced) and I will do it again.

-- 
--Guido van Rossum (python.org/~guido)


From jnoller at gmail.com  Wed Jun  2 02:06:21 2010
From: jnoller at gmail.com (Jesse Noller)
Date: Tue, 1 Jun 2010 20:06:21 -0400
Subject: [Python-ideas] stdlib upgrades
In-Reply-To: <AANLkTik4LlDnn71hnGasmsZnszUguGq6WhNrGgn0pNUQ@mail.gmail.com>
References: <AANLkTin_CD2aZ2xlgo4Uh_MPYQ7ajFMqOwt-L6gOR_nR@mail.gmail.com>
	<AANLkTikbgeXYippkWCP8O3ULPnAd9HoOnCSfYIwWdla-@mail.gmail.com>
	<AANLkTik4LlDnn71hnGasmsZnszUguGq6WhNrGgn0pNUQ@mail.gmail.com>
Message-ID: <AANLkTimRQhPnAi5vym22HJMODK_SCUQWoeIMTcv-I7EH@mail.gmail.com>

On Tue, Jun 1, 2010 at 5:40 PM, Tarek Ziadé <ziade.tarek at gmail.com> wrote:
> On Tue, Jun 1, 2010 at 4:46 PM, Jesse Noller <jnoller at gmail.com> wrote:
> [..]
>> I dislike this more than I thought I would - I would rather have the
>> stdlib broken out from core and have it have more releases than the
>> whole of python then allowing for piecemeal "blessed" upgrades.
>> Allowing piecemeal upgrades of the stdlib means you have to say
>> something akin to:
>>
>> "I support Python 2.6, with the upgraded unittest (2.6.1.3), socket
>> (2.6.1.2) and multiprocessing modules"
>>
>> And so on. Sure, API compatibility should be "fine" - but we all know
>> that there are exceptions to the rule all the time, and that alone is
>> enough to put the nix on allowing arbitrary upgrades of individual
>> modules within the standard lib. For package authors, and users, the
>> simple "I support 2.6" statement is key. For corporations with strict
>> upgrade checks and verifications, the same applies.
>
>
> What I expect would be for some projects to state :
>
> "I support Python 2.6, with the upgraded unittest (2.6.1.3), or Python 3.2"
>
> Instead of:
>
> "I support Python 2.6, with unittest2 or Python 3.2 with its own unittest"
>
>
> Because the latter makes more work for the project itself (and no
> difference on the corporation/end-user side), since it has to deal with
> two different unittest versions. Well, the same code, but under a
> different namespace so that it can be installed on previous Python
> versions alongside the stdlib one.

Either fight is a losing one. In the first, you're requiring that
someone *fundamentally alter* their standard library install to monkey
patch in something with the same name, which means it may or may not
break something else - which makes it a non zero risk, and therefore,
unacceptable to a lot of people.

The second requires that the user install an external package, which,
until we include something as a standard, is a fool's errand, only to
be taken on by the bravest of people (that might be hyperbole ;))

In all seriousness - the second you ask people to alter one tiny
sliver of the stdlib for the sake of your unique-snowflake project or
app, you've lost. The stdlib is a critical piece of Python - and its
relative integrity is assumed when people download it from the
python.org website. Asking them to download it from the site, and then
possibly install piecemeal upgrades seems like a bad idea.

Imagine a future where project dependencies look like this:

Python 2.7.1
Python 2.7.1 with upgraded unittest
Python 2.7.1 with upgraded unittest, socket, multiprocessing
Python 2.7.1 with upgraded unittest, socket, multiprocessing, httplib

And so on - sure, eventually (say, 6 months later) there might be a
2.7.2 with all of those changes rolled in, but that raises the question
- why release them individually when you know there's another release
coming shortly, and avoid the confusion?

> At some point, if a package or module in the stdlib evolves in a
> backward-compatible way, it would be nice to be able to upgrade an
> existing Python installation.

Yes, but wouldn't it also be nice to simply have a built in package
installation script, and a shorter (say, 6 month) release cycle for
patch releases which maintain the backwards compatibility guarantee?
This way, bug fixes can move more quickly.

We're really discussing the window from a release, to the next - which
could easily be shortened lacking language changes (stdlib bugfixes
only).

> And this is going to be more and more true with the moratorium I guess: what
> people are creating now for Python should work in a wider range of Pythons.
>
> Now, releasing the stdlib on its own and shortening its cycle would also
> resolve the problem we have. But then, while there will be fewer
> combinations, the problems you have mentioned will remain the same.
> Just replace in your example "I support Python 2.6, with the upgraded
> unittest (2.6.1.3), socket (2.6.1.2) and multiprocessing modules" by
> "I support Python 2.6, with the upgraded stdlib 2.6.1.2".

Then don't fragment it - just release everything more rapidly. And
then suffer from the fact that OS vendors don't pick up your releases
quickly, and so on and so forth.

All I'm trying to say, is that allowing piecemeal upgrades of stdlib
modules is a risky prospect - I know plenty of people (myself
included) who write code which is ONLY dependent on the standard
library intentionally, to save ourselves from the
packaging/installation/etc heartache.

This isn't just because of the pain of installing, or dependency
management issues - it's because the stdlib is known, stable and the
one requirement we can rely on, other than the language itself. The
stdlib doesn't require anyone to install from github, or bitbucket, or
figure out distribute vs. distutils2 - it's just there, despite all
its warts and dusty corners.

jesse


From brett at python.org  Wed Jun  2 02:50:57 2010
From: brett at python.org (Brett Cannon)
Date: Tue, 1 Jun 2010 17:50:57 -0700
Subject: [Python-ideas] lack of time zone support
In-Reply-To: <51245.1275417154@parc.com>
References: <AANLkTim-r9dS1KOYwPUfWsqVUYzTuJEKiuYWe2Z1HU4o@mail.gmail.com> 
	<51245.1275417154@parc.com>
Message-ID: <AANLkTilEf7dnxd6TqWhb0bhR3tNlZ5z6dUiLyoZam6aZ@mail.gmail.com>

On Tue, Jun 1, 2010 at 11:32, Bill Janssen <janssen at parc.com> wrote:
> To me, the single most irritating problem with the Python support for
> date/time is the lack of support for time-zone understanding.  This
> breaks down into two major issues, %z and lack of a standard time-zone
> table.
>
> First, let's say I have to parse a Medusa log file, which contains time
> stamps in the form "DD/Mon/YYYY:HH:MM:SS [+|-]HHMM", e.g.
> "31/May/2010:07:10:04 -0800".  What I'd like to write is
>
>   tm = time.mktime(time.strptime(timestamp, "%d/%b/%Y:%H:%M:%S %z"))
>
> which is what I'd do if I was writing in C.  But no!  The Python
> _strptime module doesn't support "%z".  So instead, I have to pick the
> timestamp apart and do things separately and remember that "-0800" isn't
> octal, and also isn't the same as -800, and remember whether to add or
> subtract it.  This seems insane.  So, IMO, support for %z should be
> added to Lib/_strptime.py.  We need a patch.
>
> Secondly, we really need concrete subclasses of tzinfo, and some sort of
> mapping.  Lots of people have spent lots of time trying to figure out
> this cryptic hint in datetime: "The datetime module does not supply any
> concrete subclasses of tzinfo."  I'm not sure whether pytz is the best
> idea, or what I use, the "zoneinfo" module from python-dateutil.  With
> that, I still have to add the Windows timezone names, using the table at
> http://unicode.org/repos/cldr/trunk/common/supplemental/windowsZones.xml,
> because the code in python-dateutil only works with Windows timezone
> names when running on Windows.

First of all, there will never be a timezone table in the stdlib,
period. This has been brought up before and is always shot down
because python-dev does not want to have to keep track of timezone
changes. pytz and other modules fit that bill fine.

Now if you want UTC, that's different. Alexander already linked to an
issue that is discussing that end. The current proposal is to provide
a generic class that creates fixed UTC-offset timezones, with an
instance for UTC set on the datetime module.

If you get that class in, you could then patch _strptime to support
the %z directive so as to return a timezone that had a set UTC-offset.
Not optimal, but it's something. Otherwise you would need to patch
_strptime to simply consume the number, which I don't think anyone
wants.
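[Editorial note: a hedged sketch of what such a fixed UTC-offset class might look like, together with the by-hand "[+|-]HHMM" arithmetic Bill describes; all names here are illustrative, not the actual patch under discussion.]

```python
from datetime import datetime, timedelta, tzinfo

class FixedOffset(tzinfo):
    """A concrete tzinfo with a fixed UTC offset (illustrative sketch)."""
    def __init__(self, minutes, name=None):
        self._offset = timedelta(minutes=minutes)
        self._name = name
    def utcoffset(self, dt):
        return self._offset
    def tzname(self, dt):
        return self._name
    def dst(self, dt):
        return timedelta(0)  # a fixed offset has no DST transitions

def parse_offset(text):
    """Parse '[+|-]HHMM' (e.g. '-0800') into minutes east of UTC,
    remembering that '-0800' is neither octal nor the same as -800."""
    sign = -1 if text[0] == '-' else 1
    return sign * (int(text[1:3]) * 60 + int(text[3:5]))

# The Medusa timestamp, assembled by hand until _strptime learns %z:
stamp = datetime(2010, 5, 31, 7, 10, 4,
                 tzinfo=FixedOffset(parse_offset("-0800")))
```

With a class like this in the stdlib, a %z-aware strptime would only need to feed the parsed offset into the constructor.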


From janssen at parc.com  Wed Jun  2 03:18:28 2010
From: janssen at parc.com (Bill Janssen)
Date: Tue, 1 Jun 2010 18:18:28 PDT
Subject: [Python-ideas] lack of time zone support
In-Reply-To: <AANLkTilEf7dnxd6TqWhb0bhR3tNlZ5z6dUiLyoZam6aZ@mail.gmail.com>
References: <AANLkTim-r9dS1KOYwPUfWsqVUYzTuJEKiuYWe2Z1HU4o@mail.gmail.com>
	<51245.1275417154@parc.com>
	<AANLkTilEf7dnxd6TqWhb0bhR3tNlZ5z6dUiLyoZam6aZ@mail.gmail.com>
Message-ID: <56465.1275441508@parc.com>

Brett Cannon <brett at python.org> wrote:

> First of all, there will never be a timezone table in the stdlib,
> period. This has been brought up before and is always shot down
> because python-dev does not want to have to keep track of timezone
> changes. pytz and other modules fit that bill fine.

Sure, sure.  Though I'm not sure that it has to be "in" the standard
library to be part of the standard library.  Past time for CPython to
start thinking about on-demand data, pulled dynamically from "the
cloud", with a static version for backup.  Just a thought...

> Now if you want UTC, that's different. Alexander already linked to an
> issue that is discussing that end. The current proposal is to provide
> a generic class that creates fixed UTC-offset timezones, with an
> instance for UTC set on the datetime module.

Yes, I've been following that.  Very promising.

> If you get that class in, you could then patch _strptime to support
> the %z directive so as to return a timezone that had a set UTC-offset.
> Not optimal, but it's something.

Yes, exactly.

> Otherwise you would need to patch _strptime to simply consume the
> number which I don't think anyone wants.

No.

Bill


From brett at python.org  Wed Jun  2 03:22:38 2010
From: brett at python.org (Brett Cannon)
Date: Tue, 1 Jun 2010 18:22:38 -0700
Subject: [Python-ideas] stdlib upgrades
In-Reply-To: <AANLkTin3FPrbml8TjGvwKr3EQw1LpspcuqtB3aXxq9M-@mail.gmail.com>
References: <AANLkTin3FPrbml8TjGvwKr3EQw1LpspcuqtB3aXxq9M-@mail.gmail.com>
Message-ID: <AANLkTikJ68fG0_om6Y6NYZ5SomnUrdkP_Vqpu8laeMPR@mail.gmail.com>

On Tue, Jun 1, 2010 at 11:13, Ian Bicking <ianb at colorstudy.com> wrote:
> Threading will probably break here as I wasn't on the list for the first
> email...
>
> My concern with the standard library is that there's a couple things going
> on:
>
> 1. The standard library represents "accepted" functionality, kind of best
> practice, kind of just conventional.  Everyone (roughly) knows what you are
> talking about when you use things from the standard library.
> 2. The standard library has some firm backward compatibility guarantees.  It
> also has some firm stability guarantees, especially within releases (though
> in practice, nearly for eternity).
> 3. The standard library is kind of collectively owned; it's not up to the
> whims of one person, and can't be abandoned.
> 4. The standard library is one big chunk of functionality, upgraded all
> under one version number, and specifically works together (though in
> practice cross-module refactorings are uncommon).
>
> There's positive things about these features, but 4 really drives me nuts,
> and I think is a strong disincentive to putting stuff into the standard
> library.  For packaging I think 4 actively damages maintainability.
>
> Packaging is at the intersection of several systems:
>
> * Python versions
> * Forward and backward compatibility with distributed libraries
> * System policies (e.g., Debian has changed things around a lot in the last
> few years)
> * A whole other ecosystem of libraries outside of Python (e.g., binding to C
> libraries)
> * Various developer toolkits, some Python specific (e.g., Cython) some not
> (gcc)
>
> I don't think it's practical to think that we can determine some scope of
> packaging where it will be stable in the long term, all these things are
> changing and many are changing without any particular concern for how it
> affects Python (i.e., packaging must be reactive).  And frankly we clearly
> do not have packaging figured out, we're still circling in on something...
> and I think the circling will be more like a Strange Attractor than a sink
> drain.
>
> The issues exist for other libraries that aren't packaging-related, of
> course, it's just worse for packaging.  argparse for instance is not
> "done"... it has bugs that won't be fixed before release, and functionality
> that it should reasonably include.  But there's no path for it to get
> better.  Will it have new and better features in Python 3.3?  Who seriously
> wants to write code that is only compatible with Python 3.3+ just because of
> some feature in argparse?  Instead everyone will work around argparse as it
> currently exists.  In the process they'll probably use undocumented APIs,
> further calcifying the library and making future improvements disruptive.
>
> It's not very specific to argparse, I think ElementTree has similar issues.
> The json library is fairly unique in that it has a scope that can be
> "done".? I don't know what to say about wsgiref... it's completely
> irrelevant in Python 3 because it was upgraded along the Python schedule
> despite being unready to be released (this is relatively harmless as I don't
> think anyone is using wsgiref in Python 3).
>
> So, this is the tension I see.  I think aspects of the standard library
> process and its guarantees are useful, but the current process means
> releasing code that isn't ready or not releasing code that should be
> released, and neither is good practice and both compromise those
> guarantees.  Lots of moving versions can indeed be difficult to manage...
> though it can be made a lot easier with good practices.  Though even then
> distutils2 (and pip) does not even fit into that... they both enter into the
> workflow before you start working with libraries and versions, making them
> somewhat unique (though also giving them some more flexibility as they are
> not so strongly tied to the Python runtime, which is where stability
> requirements are most needed).

I can only see two scenarios that might be considered acceptable to
address these issues.

One is that when new modules are accepted into the stdlib they are
flagged with a ExpermintalWarning so that people know that no
backwards-compatibility promises have been made yet. That gets the
module more exposure and gets python-dev real-world feedback to fix
issues before the module calcifies into a strong
backwards-compatibility guarantee. With that experience more informed decisions
can be made as to how to change things (e.g. the logging module's
default timestamp including microseconds which strptime cannot parse).

Otherwise we shift to an annual release schedule, but alternate Python
versions have a language moratorium. That would mean only new language
features every two years, but a new stdlib annually.

But one thing I can tell you is that having separate module releases
of what is in the stdlib under the same name or doing a separate
stdlib release will not happen. Python-dev as a whole does not like
this idea and I don't see that changing.


From brett at python.org  Wed Jun  2 03:24:01 2010
From: brett at python.org (Brett Cannon)
Date: Tue, 1 Jun 2010 18:24:01 -0700
Subject: [Python-ideas] lack of time zone support
In-Reply-To: <56465.1275441508@parc.com>
References: <AANLkTim-r9dS1KOYwPUfWsqVUYzTuJEKiuYWe2Z1HU4o@mail.gmail.com> 
	<51245.1275417154@parc.com>
	<AANLkTilEf7dnxd6TqWhb0bhR3tNlZ5z6dUiLyoZam6aZ@mail.gmail.com> 
	<56465.1275441508@parc.com>
Message-ID: <AANLkTimrsDxVpMBCORjUxJRdX_5onJb6oSihVAv0sQWt@mail.gmail.com>

On Tue, Jun 1, 2010 at 18:18, Bill Janssen <janssen at parc.com> wrote:
> Brett Cannon <brett at python.org> wrote:
>
>> First of all, there will never be a timezone table in the stdlib,
>> period. This has been brought up before and is always shot down
>> because python-dev does not want to have to keep track of timezone
>> changes. pytz and other modules fit that bill fine.
>
> Sure, sure.  Though I'm not sure that it has to be "in" the standard
> library to be part of the standard library.  Past time for CPython to
> start thinking about on-demand data, pulled dynamically from "the
> cloud", with a static version for backup.  Just a thought...
>
>> Now if you want UTC, that's different. Alexander already linked to an
>> issue that is discussing that end. The current proposal is to provide
>> a generic class that creates fixed UTC-offset timezones, with an
>> instance for UTC set on the datetime module.
>
> Yes, I've been following that.  Very promising.

Just need a patch. =)

>
>> If you get that class in, you could then patch _strptime to support
>> the %z directive so as to return a timezone that had a set UTC-offset.
>> Not optimal, but it's something.
>
> Yes, exactly.

Then that's fine. Get the fixed offset timezone in and then get a
patch for this and I don't see resistance.


From jnoller at gmail.com  Wed Jun  2 03:33:43 2010
From: jnoller at gmail.com (Jesse Noller)
Date: Tue, 1 Jun 2010 21:33:43 -0400
Subject: [Python-ideas] stdlib upgrades
In-Reply-To: <AANLkTikJ68fG0_om6Y6NYZ5SomnUrdkP_Vqpu8laeMPR@mail.gmail.com>
References: <AANLkTin3FPrbml8TjGvwKr3EQw1LpspcuqtB3aXxq9M-@mail.gmail.com>
	<AANLkTikJ68fG0_om6Y6NYZ5SomnUrdkP_Vqpu8laeMPR@mail.gmail.com>
Message-ID: <AANLkTinA8wczbAtHfl89ZIgsYqvPkuWe7I97jDlt7noV@mail.gmail.com>

On Tue, Jun 1, 2010 at 9:22 PM, Brett Cannon <brett at python.org> wrote:
> On Tue, Jun 1, 2010 at 11:13, Ian Bicking <ianb at colorstudy.com> wrote:
>> [snip -- Ian's message, quoted in full upthread]
>
> I can only see two scenarios that might be considered acceptable to
> address these issues.
>
> One is that when new modules are accepted into the stdlib they are
> flagged with a ExpermintalWarning so that people know that no
> backwards-compatibility promises have been made yet. That gets the
> module more exposure and gets python-dev real-world feedback to fix
> issues before the module calcifies into a strong
> backwards-compatibility guarantee. With that experience more informed decisions
> can be made as to how to change things (e.g. the logging module's
> default timestamp including microseconds which strptime cannot parse).
>
> Otherwise we shift to an annual release schedule, but alternate Python
> versions have a language moratorium. That would mean only new language
> features every two years, but a new stdlib annually.

I'm actually partial to this idea - the stdlib, by its very existence,
has to evolve more quickly than the language itself, and it should
fundamentally see more releases to stay up to date and reasonably
fresh.

jesse


From ianb at colorstudy.com  Wed Jun  2 04:29:10 2010
From: ianb at colorstudy.com (Ian Bicking)
Date: Tue, 1 Jun 2010 21:29:10 -0500
Subject: [Python-ideas] stdlib upgrades
In-Reply-To: <AANLkTikJ68fG0_om6Y6NYZ5SomnUrdkP_Vqpu8laeMPR@mail.gmail.com>
References: <AANLkTin3FPrbml8TjGvwKr3EQw1LpspcuqtB3aXxq9M-@mail.gmail.com> 
	<AANLkTikJ68fG0_om6Y6NYZ5SomnUrdkP_Vqpu8laeMPR@mail.gmail.com>
Message-ID: <AANLkTin4SaJvISnTKrsu-uc_CjFsev99RZpZEL5Dzn1S@mail.gmail.com>

On Tue, Jun 1, 2010 at 8:22 PM, Brett Cannon <brett at python.org> wrote:

> But one thing I can tell you is that having separate module releases
> of what is in the stdlib under the same name or doing a separate
> stdlib release will not happen. Python-dev as a whole does not like
> this idea and I don't see that changing.
>

I have no particular interest in changing the stdlib as it exists now.  It
is what it is, I don't care if there's extra stuff in it and I'm now long
settled into working around all bugs I encounter.  While I think there are
past situations that exemplify certain problems, I'm really just bringing
them up as examples.  But pip and distutils2 aren't settled into anything,
and I don't want us to retrace bad paths just because they are so well trod.

-- 
Ian Bicking  |  http://blog.ianbicking.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20100601/b78de9df/attachment.html>

From tjreedy at udel.edu  Wed Jun  2 05:01:23 2010
From: tjreedy at udel.edu (Terry Reedy)
Date: Tue, 01 Jun 2010 23:01:23 -0400
Subject: [Python-ideas] Having unbound methods refer to the classes
	their defined on
In-Reply-To: <AANLkTin5VQV07UvGwBSYgYYxf8kpnLfAQ1HrfpG8Ezes@mail.gmail.com>
References: <AANLkTin5VQV07UvGwBSYgYYxf8kpnLfAQ1HrfpG8Ezes@mail.gmail.com>
Message-ID: <hu4hi4$k4g$1@dough.gmane.org>

On 6/1/2010 1:36 PM, cool-RR wrote:

> In Python 2.x there was an "unbound method" type. An unbound method
> would have an attribute `.im_class` that would refer to the class on
> which the method was defined.

Actually, I believe it referred to the class through which the function 
was accessed. The object in the class __dict__ was still the function. 
In both 2.x and 3.x, a function can be an attribute of more than one 
class and might not have been defined 'on' any of them.

Right or wrong, I believe it was thought that adding the wrapper was 
more of a nuisance than a benefit.

I suppose you could propose that when a function is directly accessed via 
a class (as opposed to via an instance, when wrapping as a bound method 
is still done), an __access_class__ attribute could be added, but I do 
not know if that would even help you. Perhaps a custom metaclass could 
be written to do this now (I definitely do not know this for sure).

Terry Jan Reedy



From arnodel at googlemail.com  Wed Jun  2 08:36:54 2010
From: arnodel at googlemail.com (Gmail)
Date: Wed, 2 Jun 2010 07:36:54 +0100
Subject: [Python-ideas] Having unbound methods refer to the classes
	their defined on
In-Reply-To: <AANLkTin5VQV07UvGwBSYgYYxf8kpnLfAQ1HrfpG8Ezes@mail.gmail.com>
References: <AANLkTin5VQV07UvGwBSYgYYxf8kpnLfAQ1HrfpG8Ezes@mail.gmail.com>
Message-ID: <0B87A480-0392-4EFF-BBEC-8947D010844F@gmail.com>


On 1 Jun 2010, at 18:36, cool-RR wrote:

> Hello,
> 
> I would like to raise an issue here that I've been discussing at python-porting.
> 
> (And I'd like to preface by saying that I'm not intimately familiar with Python's innards, so if I make any mistakes please correct me.)
> 
> In Python 2.x there was an "unbound method" type. An unbound method would have an attribute `.im_class` that would refer to the class on which the method was defined. This allowed users to use the `copy_reg` module to pickle unbound methods by name. (In a similar way to how functions and classes are pickled by default.)
> 

Not exactly (python 2.6):

>>> class Foo(object):
...    def f(self): pass
... 
>>> Foo.f
<unbound method Foo.f>
>>> Foo.f.im_class
<class '__main__.Foo'>
>>> class Bar(Foo): pass
... 
>>> Bar.f
<unbound method Bar.f>
>>> Bar.f.im_class
<class '__main__.Bar'>


> In Python 3.x unbound methods are plain functions. There is no way of knowing on which class they are defined, so therefore it's impossible to pickle them. It is even impossible to tell `copyreg` to use a custom reducer:
> http://stackoverflow.com/questions/2932742/python-using-copyreg-to-define-reducers-for-types-that-already-have-reducers
> 
> (To the people who wonder why would anyone want to pickle unbound methods: I know that it sounds like a weird thing to do. Keep in mind that sometimes your objects need to get pickled. For example if you're using the multiprocessing module, and you pass into it an object that somehow refers to an unbound method, then that method has to be picklable.)
> 
> The idea is: Let's give unbound methods an attribute that will refer to the class on which they were defined.
> 
> What do you think?

Unbound methods in Python 2.X were objects that were created on class attribute access, not when the class was created, so what you are asking for is different from what Python 2.X provided.  Here is a very simplified way to mimic 2.X in 3.X via metaclasses (Python 3.2):

>>> class FooType(type):
...     def __getattribute__(self, attrname):
...         attr = super().__dict__[attrname]
...         if isinstance(attr, type(lambda:0)):
...             return ("unbound method", self, attr)
...         else:
...             return attr
... 
>>> class Foo(metaclass=FooType):
...     def f(self):pass
... 
>>> Foo.f
('unbound method', <class '__main__.Foo'>, <function f at 0x445fa8>)
>>> Foo().f()
>>>

What you maybe want instead is a metaclass that overrides type.__new__ or type.__init__ so that each function in the attributes of the class is wrapped in some kind of wrapper like this:

class DefinedIn:
    def __init__(self, f, classdef):
        self.classdef = classdef
        self.f = f
    def __call__(self, *args, **kwargs):
        return self.f(*args, **kwargs)
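[Editorial note: as written, a plain DefinedIn attribute would not bind `self` on instance access, since it is not a descriptor. A hedged sketch of one way to combine such a wrapper with a type.__new__ override, per the suggestion above -- all names here are made up for illustration:]

```python
import types

class DefinedIn:
    """Wrapper recording the class a function was defined on (sketch)."""
    def __init__(self, f, classdef):
        self.f = f
        self.classdef = classdef
    def __get__(self, obj, objtype=None):
        if obj is None:
            # Class-level access: hand back the wrapper so .classdef
            # stays reachable, e.g. Spam.greet.classdef.
            return self
        # Instance access: delegate to the function's own descriptor
        # protocol so a normal bound method comes back.
        return self.f.__get__(obj, objtype)

class RemembersClass(type):
    """Metaclass wrapping each plain function defined in the class body."""
    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        for attr, value in namespace.items():
            if isinstance(value, types.FunctionType):
                setattr(cls, attr, DefinedIn(value, cls))
        return cls

class Spam(metaclass=RemembersClass):
    def greet(self):
        return "hello"
```

With this, `Spam.greet.classdef is Spam`, while `Spam().greet()` still behaves like an ordinary bound method.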

-- 
Arnaud



-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20100602/e924fa71/attachment.html>

From solipsis at pitrou.net  Wed Jun  2 09:53:51 2010
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 2 Jun 2010 09:53:51 +0200
Subject: [Python-ideas] stdlib upgrades
References: <AANLkTin3FPrbml8TjGvwKr3EQw1LpspcuqtB3aXxq9M-@mail.gmail.com>
	<AANLkTikJ68fG0_om6Y6NYZ5SomnUrdkP_Vqpu8laeMPR@mail.gmail.com>
Message-ID: <20100602095351.60ba0f2c@pitrou.net>

On Tue, 1 Jun 2010 18:22:38 -0700
Brett Cannon <brett at python.org> wrote:
> 
> One is that when new modules are accepted into the stdlib they are
> flagged with a ExpermintalWarning

Are you advocating this specific spelling?

> Otherwise we shift to an annual release schedule, but alternate Python
> versions have a language moratorium. That would mean only new language
> features every two years, but a new stdlib annually.

I think this has already been shot down by Guido (I think I was the one
who asked last time :-)). Basically, even if you aren't adding new
language features, you are still compelling people to upgrade to a new
version with (very probably) slight compatibility annoyances.





From cool-rr at cool-rr.com  Wed Jun  2 10:59:45 2010
From: cool-rr at cool-rr.com (cool-RR)
Date: Wed, 2 Jun 2010 10:59:45 +0200
Subject: [Python-ideas] Having unbound methods refer to the classes
	their defined on
In-Reply-To: <0B87A480-0392-4EFF-BBEC-8947D010844F@gmail.com>
References: <AANLkTin5VQV07UvGwBSYgYYxf8kpnLfAQ1HrfpG8Ezes@mail.gmail.com>
	<0B87A480-0392-4EFF-BBEC-8947D010844F@gmail.com>
Message-ID: <AANLkTindWsV5_gzgWzsUCnf_izMAtzbnR9SvtemJnfqM@mail.gmail.com>

On Wed, Jun 2, 2010 at 8:36 AM, Gmail <arnodel at googlemail.com> wrote:

> [snip -- Arnaud's message, quoted in full upthread]
>
Thanks for the corrections and the metaclass, Arnaud. (And thanks to you
too, Terry.) I might use it in my project.

> so what you are asking for is different from what Python 2.X provided.

Yes, I have been imprecise. So I'll correct my idea: I want Python 3.x to
tell me the class from which the unbound method was accessed. (It can be
done either on creation or on access, whatever seems better to you.) So I
propose this as a modification of Python.


Ram.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20100602/f77796ef/attachment.html>

From mal at egenix.com  Wed Jun  2 12:13:00 2010
From: mal at egenix.com (M.-A. Lemburg)
Date: Wed, 02 Jun 2010 12:13:00 +0200
Subject: [Python-ideas] stdlib upgrades
In-Reply-To: <AANLkTik4LlDnn71hnGasmsZnszUguGq6WhNrGgn0pNUQ@mail.gmail.com>
References: <AANLkTin_CD2aZ2xlgo4Uh_MPYQ7ajFMqOwt-L6gOR_nR@mail.gmail.com>	<AANLkTikbgeXYippkWCP8O3ULPnAd9HoOnCSfYIwWdla-@mail.gmail.com>
	<AANLkTik4LlDnn71hnGasmsZnszUguGq6WhNrGgn0pNUQ@mail.gmail.com>
Message-ID: <4C062EAC.60302@egenix.com>

While I played with this idea a long time ago as well, I have
since found that it causes more trouble than it's worth.

Apart from requiring the user to maintain at least two different
versioned packages (Python and (part of) the stdlib), it also
causes problems if you use this Python installation for more
than one project: it's easily possible to have project A require
version 2 of a stdlib module and project B version 3 of that
same module.

If you then load both projects in an application, you end up
either with a broken project A or B (depending on whether you have
version 2 or 3 of that stdlib module installed), or you allow
loading multiple versions of the same module, in which case you
will likely break your application, since it will find multiple
class implementations (and objects) for the same instances.

Things like exception catching, pickling (and esp. unpickling),
security checks based on classes, interface adapters and even
simply isinstance() checks would then fail in various hard to
reproduce ways.

IMHO, we've so far done well by issuing new Python patch level
releases whenever there was a problem in the stdlib (and only
then).

Introducing new features by way of updates is left to
minor releases, which then require more testing by the
user.

This additional testing is what causes many corporates to
not follow the Python release cycle or skip a few minor
releases: the work involved often just doesn't warrant the
advantages of the added new features.

The situation won't get any better if we start releasing
partial or complete stdlib updates even more often.

If users really want bleeding edge, they can just use the
SVN version of the stdlib or cherry pick updates to module
or packages they care about from SVN.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 02 2010)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2010-07-19: EuroPython 2010, Birmingham, UK                46 days to go

::: Try our new mxODBC.Connect Python Database Interface for free ! ::::


   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
    D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
           Registered at Amtsgericht Duesseldorf: HRB 46611
               http://www.egenix.com/company/contact/


From dickinsm at gmail.com  Wed Jun  2 12:50:11 2010
From: dickinsm at gmail.com (Mark Dickinson)
Date: Wed, 2 Jun 2010 11:50:11 +0100
Subject: [Python-ideas] Date/time literals
In-Reply-To: <AANLkTimTf-kzzgd2ll8yKrhEL-ytPtPtr7PX4pfsyvbP@mail.gmail.com>
References: <AANLkTim-r9dS1KOYwPUfWsqVUYzTuJEKiuYWe2Z1HU4o@mail.gmail.com>
	<AANLkTikrRUh_PASlTRFQuUJCaDA98SRXkt5qQY7Rpft7@mail.gmail.com>
	<AE9FEEDF-4747-4A30-9E40-93195140B259@masklinn.net>
	<AANLkTimpVpZqmZoLRjYlSmpYmueVCIljFrCx2qj6gylD@mail.gmail.com>
	<AANLkTimTSNdmNajL4VjbqZiA_g3cj2H2NHEZBALW14nj@mail.gmail.com>
	<AANLkTilCg-orxw-wHjkpC3LZXjZs76ptvci49WRWaanc@mail.gmail.com>
	<AANLkTikMlxL-JNM8WOO43xanPjNICN91iRZOFDrOKyJo@mail.gmail.com>
	<AANLkTikNcigdM2bPmxZFQs8lPB7yotEhLOPqzyzAFknA@mail.gmail.com>
	<AANLkTimHmZOjXIKVeH4laoydrMYMEtolRmsBCy9SMkai@mail.gmail.com>
	<AANLkTimTf-kzzgd2ll8yKrhEL-ytPtPtr7PX4pfsyvbP@mail.gmail.com>
Message-ID: <AANLkTim7x-1AXQhdyFymXTW-R0ckOkqSUD3z8ntTkeOr@mail.gmail.com>

On Tue, Jun 1, 2010 at 11:40 PM, Guido van Rossum <guido at python.org> wrote:
> Also note that there will be no validation possible for future
> datetimes (and for past dates it would require an up-to-date leap
> second database).

It's even worse than that :(.  Complete validation would also require
timezone knowledge, because leap seconds happen at the same instant
the world over:  e.g., the leap second that occurred at 23:59:60 UTC
on 31st December 2008 occurred at 19:29:60 local time in Caracas.   So
for naive datetime objects validation is going to be difficult.  Given
that timezone offsets can be an arbitrary number of minutes, the only
reasonable options as far as I can see would be either *always* to
accept seconds in the range 0-60, or *always* restrict the range to
0-59, as now.  (Well, you could only accept seconds=60 for timestamps
within 24 hours of midnight on Jun 30 or Dec 31st, but that's fairly
horrible. :-)

I'm still not convinced that incorrectly accepting some invalid UTC
times is worse than incorrectly rejecting some (rare) valid UTC times,
but I'll let it drop for now.
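[Editorial note: for code that must ingest leap-second timestamps today, the pragmatic workaround is to clamp rather than validate, since datetime rejects seconds=60 outright. A minimal sketch, with a made-up function name and format:]

```python
from datetime import datetime, timedelta

def parse_utc(text, fmt="%Y-%m-%dT%H:%M:%S"):
    """Parse a UTC timestamp, tolerating a trailing second of 60 by
    clamping to :59 and adding one second.  This sidesteps validation
    entirely rather than consulting a leap-second table."""
    if text.endswith(":60"):
        clamped = datetime.strptime(text[:-3] + ":59", fmt)
        return clamped + timedelta(seconds=1)
    return datetime.strptime(text, fmt)
```

The December 2008 leap second thus folds into the first instant of 2009, which is usually what downstream arithmetic wants anyway.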

-- 
Mark


From dickinsm at gmail.com  Wed Jun  2 12:55:50 2010
From: dickinsm at gmail.com (Mark Dickinson)
Date: Wed, 2 Jun 2010 11:55:50 +0100
Subject: [Python-ideas] Date/time literals
In-Reply-To: <AANLkTim7x-1AXQhdyFymXTW-R0ckOkqSUD3z8ntTkeOr@mail.gmail.com>
References: <AANLkTim-r9dS1KOYwPUfWsqVUYzTuJEKiuYWe2Z1HU4o@mail.gmail.com>
	<AANLkTikrRUh_PASlTRFQuUJCaDA98SRXkt5qQY7Rpft7@mail.gmail.com>
	<AE9FEEDF-4747-4A30-9E40-93195140B259@masklinn.net>
	<AANLkTimpVpZqmZoLRjYlSmpYmueVCIljFrCx2qj6gylD@mail.gmail.com>
	<AANLkTimTSNdmNajL4VjbqZiA_g3cj2H2NHEZBALW14nj@mail.gmail.com>
	<AANLkTilCg-orxw-wHjkpC3LZXjZs76ptvci49WRWaanc@mail.gmail.com>
	<AANLkTikMlxL-JNM8WOO43xanPjNICN91iRZOFDrOKyJo@mail.gmail.com>
	<AANLkTikNcigdM2bPmxZFQs8lPB7yotEhLOPqzyzAFknA@mail.gmail.com>
	<AANLkTimHmZOjXIKVeH4laoydrMYMEtolRmsBCy9SMkai@mail.gmail.com>
	<AANLkTimTf-kzzgd2ll8yKrhEL-ytPtPtr7PX4pfsyvbP@mail.gmail.com>
	<AANLkTim7x-1AXQhdyFymXTW-R0ckOkqSUD3z8ntTkeOr@mail.gmail.com>
Message-ID: <AANLkTimBrg58vKXGhHWifojn0zYRSk_22uGzI1rM23yF@mail.gmail.com>

On Wed, Jun 2, 2010 at 11:50 AM, Mark Dickinson <dickinsm at gmail.com> wrote:
> within 24 hours of midnight on Jun 30 or Dec 31st, but that's fairly

To avoid ambiguity, that should read "within 24 hours of 24:00:00 on
Jun 30th or Dec 31st", of course. :)

--
Mark


From mal at egenix.com  Wed Jun  2 13:18:30 2010
From: mal at egenix.com (M.-A. Lemburg)
Date: Wed, 02 Jun 2010 13:18:30 +0200
Subject: [Python-ideas] Date/time literals
In-Reply-To: <AANLkTim7x-1AXQhdyFymXTW-R0ckOkqSUD3z8ntTkeOr@mail.gmail.com>
References: <AANLkTim-r9dS1KOYwPUfWsqVUYzTuJEKiuYWe2Z1HU4o@mail.gmail.com>	<AANLkTikrRUh_PASlTRFQuUJCaDA98SRXkt5qQY7Rpft7@mail.gmail.com>	<AE9FEEDF-4747-4A30-9E40-93195140B259@masklinn.net>	<AANLkTimpVpZqmZoLRjYlSmpYmueVCIljFrCx2qj6gylD@mail.gmail.com>	<AANLkTimTSNdmNajL4VjbqZiA_g3cj2H2NHEZBALW14nj@mail.gmail.com>	<AANLkTilCg-orxw-wHjkpC3LZXjZs76ptvci49WRWaanc@mail.gmail.com>	<AANLkTikMlxL-JNM8WOO43xanPjNICN91iRZOFDrOKyJo@mail.gmail.com>	<AANLkTikNcigdM2bPmxZFQs8lPB7yotEhLOPqzyzAFknA@mail.gmail.com>	<AANLkTimHmZOjXIKVeH4laoydrMYMEtolRmsBCy9SMkai@mail.gmail.com>	<AANLkTimTf-kzzgd2ll8yKrhEL-ytPtPtr7PX4pfsyvbP@mail.gmail.com>
	<AANLkTim7x-1AXQhdyFymXTW-R0ckOkqSUD3z8ntTkeOr@mail.gmail.com>
Message-ID: <4C063E06.7050606@egenix.com>

Mark Dickinson wrote:
> On Tue, Jun 1, 2010 at 11:40 PM, Guido van Rossum <guido at python.org> wrote:
>> Also note that there will be no validation possible for future
>> datetimes (and for past dates it would require an up-to-date leap
>> second database).
> 
> It's even worse than that :(.  Complete validation would also require
> timezone knowledge, because leap seconds happen at the same instant
> the world over:  e.g., the leap second that occurred at 23:59:60 UTC
> on 31st December 2008 occurred at 19:29:60 local time in Caracas.   So
> for naive datetime objects validation is going to be difficult.  Given
> that timezone offsets can be an arbitrary number of minutes, the only
> reasonable options as far as I can see would be either *always* to
> accept seconds in the range 0-60, or *always* restrict the range to
> 0-59, as now.  (Well, you could only accept seconds=60 for timestamps
> within 24 hours of midnight on Jun 30 or Dec 31st, but that's fairly
> horrible. :-)
>
> I'm still not convinced that incorrectly accepting some invalid UTC
> times is worse than incorrectly rejecting some (rare) valid UTC times,
> but I'll let it drop for now.

You can use mxDateTime to store such values. I added support for
storing leap seconds long ago, but only for the UTC variants,
not for arbitrary time zones. This was mainly done to support
those values when using mxDateTime as a storage container rather
than for calculations (those are all POSIX-conformant, i.e. they
omit leap seconds).

Note that most C libs nowadays only support the POSIX
interpretation of time_t values. Those don't include leap seconds:

>>> DateTime(1986,12,31,23,59,59).gmticks()
536457599.0
>>> DateTime(1986,12,31,23,59,60).gmticks()
536457600.0
>>> DateTime(1987,1,1,0,0,0).gmticks()
536457600.0

With leap seconds, you'd get 536457612 for
DateTime(1986,12,31,23,59,59).gmticks().

As a result, conversion to time_t will be lossy.

IIRC, the BSDs were the last to switch off leap second support,
but I could be mistaken.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 02 2010)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2010-07-19: EuroPython 2010, Birmingham, UK                46 days to go

::: Try our new mxODBC.Connect Python Database Interface for free ! ::::


   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
    D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
           Registered at Amtsgericht Duesseldorf: HRB 46611
               http://www.egenix.com/company/contact/


From ncoghlan at gmail.com  Wed Jun  2 14:45:10 2010
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 02 Jun 2010 22:45:10 +1000
Subject: [Python-ideas] Date/time literals
In-Reply-To: <AANLkTilRp-VZ6gQir4tVbXNGcZ-jWBHPBikkTDTyQSps@mail.gmail.com>
References: <AANLkTim-r9dS1KOYwPUfWsqVUYzTuJEKiuYWe2Z1HU4o@mail.gmail.com>	<AANLkTikrRUh_PASlTRFQuUJCaDA98SRXkt5qQY7Rpft7@mail.gmail.com>	<AE9FEEDF-4747-4A30-9E40-93195140B259@masklinn.net>	<AANLkTimpVpZqmZoLRjYlSmpYmueVCIljFrCx2qj6gylD@mail.gmail.com>	<AANLkTimTSNdmNajL4VjbqZiA_g3cj2H2NHEZBALW14nj@mail.gmail.com>	<AANLkTilCg-orxw-wHjkpC3LZXjZs76ptvci49WRWaanc@mail.gmail.com>	<AANLkTilv-hVCPKPpkZvJKRkBVeWnxHW2_Csodonet4f4@mail.gmail.com>	<AANLkTikLZ-UnYtRywbrzppMbtHPumxVqjLv-30KIwY4G@mail.gmail.com>	<AANLkTimrElq63LgFaDyRGqyKVwN8SrU-eSLONSSOy_KR@mail.gmail.com>
	<AANLkTilRp-VZ6gQir4tVbXNGcZ-jWBHPBikkTDTyQSps@mail.gmail.com>
Message-ID: <4C065256.6070100@gmail.com>

On 02/06/10 03:28, Alexander Belopolsky wrote:
> Developers writing generic libraries have to deal with imagined use
> cases all the time.  If I write an rfc3339 timestamp parser, I cannot
> ignore the fact that XXXX-12-31T23:59:60Z is a valid timestamp.  If I
> do, I cannot claim that my parser implements rfc3339.  An application
> that uses python datetime objects to represent time may crash parsing
> logs produced in December 2008 on the systems that keeps time in UTC.
>
> If all my application does is to read timestamps from some source,
> store them in the database and display them on a later date, I don't
> want to worry that it will crash when presented with 23:59:60.
>
> Of course, allowing leap seconds in time/datetime constructor may be a
> way to delay detection of a bug.  An application may accept
> XXXX-12-31T23:59:60Z, but later rely on the fact that dt1-dt2 ==
> timedelta(0) implies dt1 == dt2.   Such issues, if exist, can be
> addressed by the application without replacing datetime object as a
> means of storing timestamps.  On the other hand the current
> restriction in the constructor makes datetime fundamentally
> incompatible with a number of standards.

The case for allowing a "60" value for seconds in the datetime 
constructor seems reasonable to me (i.e. prevent leap seconds from 
breaking date parsing), but I don't see the use case for delaying 
normalisation to a valid POSIX time.

If the constructor just converts the 60 to a zero and adds 1 minute 
immediately, then the chance of subtle breakages would be minimal and 
the current ValueError would be replaced by a far more graceful behaviour.

(Allowing 2400 hours just seems plain odd to me, but if it was adopted 
I'd suggest immediate normalisation be similarly applied in that case as 
well).
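A minimal sketch of that behaviour (a hypothetical helper function, not the actual datetime constructor):

```python
from datetime import datetime, timedelta

def normalized_datetime(year, month, day, hour=0, minute=0, second=0):
    # Hypothetical helper sketching the suggestion above: second == 60
    # is accepted but immediately folded into the next minute, so
    # callers never see an unnormalised value.
    extra = timedelta(0)
    if second == 60:
        second, extra = 59, timedelta(seconds=1)
    return datetime(year, month, day, hour, minute, second) + extra

print(normalized_datetime(2008, 12, 31, 23, 59, 60))
# 2009-01-01 00:00:00
```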

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------


From guido at python.org  Wed Jun  2 15:32:44 2010
From: guido at python.org (Guido van Rossum)
Date: Wed, 2 Jun 2010 06:32:44 -0700
Subject: [Python-ideas] Date/time literals
In-Reply-To: <4C065256.6070100@gmail.com>
References: <AANLkTim-r9dS1KOYwPUfWsqVUYzTuJEKiuYWe2Z1HU4o@mail.gmail.com> 
	<AANLkTikrRUh_PASlTRFQuUJCaDA98SRXkt5qQY7Rpft7@mail.gmail.com> 
	<AE9FEEDF-4747-4A30-9E40-93195140B259@masklinn.net>
	<AANLkTimpVpZqmZoLRjYlSmpYmueVCIljFrCx2qj6gylD@mail.gmail.com> 
	<AANLkTimTSNdmNajL4VjbqZiA_g3cj2H2NHEZBALW14nj@mail.gmail.com> 
	<AANLkTilCg-orxw-wHjkpC3LZXjZs76ptvci49WRWaanc@mail.gmail.com> 
	<AANLkTilv-hVCPKPpkZvJKRkBVeWnxHW2_Csodonet4f4@mail.gmail.com> 
	<AANLkTikLZ-UnYtRywbrzppMbtHPumxVqjLv-30KIwY4G@mail.gmail.com> 
	<AANLkTimrElq63LgFaDyRGqyKVwN8SrU-eSLONSSOy_KR@mail.gmail.com> 
	<AANLkTilRp-VZ6gQir4tVbXNGcZ-jWBHPBikkTDTyQSps@mail.gmail.com> 
	<4C065256.6070100@gmail.com>
Message-ID: <AANLkTiksDXDKLsiX7daBR_uUHenXahs5lofNTXv_Lbmf@mail.gmail.com>

On Wed, Jun 2, 2010 at 5:45 AM, Nick Coghlan <ncoghlan at gmail.com> wrote:
> On 02/06/10 03:28, Alexander Belopolsky wrote:
>>
>> Developers writing generic libraries have to deal with imagined use
>> cases all the time.  If I write an rfc3339 timestamp parser, I cannot
>> ignore the fact that XXXX-12-31T23:59:60Z is a valid timestamp.  If I
>> do, I cannot claim that my parser implements rfc3339.  An application
>> that uses python datetime objects to represent time may crash parsing
>> logs produced in December 2008 on the systems that keeps time in UTC.
>>
>> If all my application does is to read timestamps from some source,
>> store them in the database and display them on a later date, I don't
>> want to worry that it will crash when presented with 23:59:60.
>>
>> Of course, allowing leap seconds in time/datetime constructor may be a
>> way to delay detection of a bug.  An application may accept
>> XXXX-12-31T23:59:60Z, but later rely on the fact that dt1-dt2 ==
>> timedelta(0) implies dt1 == dt2.   Such issues, if exist, can be
>> addressed by the application without replacing datetime object as a
>> means of storing timestamps.  On the other hand the current
>> restriction in the constructor makes datetime fundamentally
>> incompatible with a number of standards.
>
> The case for allowing a "60" value for seconds in the datetime constructor
> seems reasonable to me (i.e. prevent leap seconds from breaking date
> parsing), but I don't see the use case for delaying normalisation to a valid
> POSIX time.
>
> If the constructor just converts the 60 to a zero and adds 1 minute
> immediately, then the chance of subtle breakages would be minimal and the
> current ValueError would be replaced by a far more graceful behaviour.
>
> (Allowing 2400 hours just seems plain odd to me, but if it was adopted I'd
> suggest immediate normalisation be similarly applied in that case as well).

I'd be okay with immediate normalization in both of these cases as
well. Immediate normalization cuts off all concerns about unnormalized
datetimes, consistent comparisons, etc.

-- 
--Guido van Rossum (python.org/~guido)


From eric at trueblade.com  Wed Jun  2 16:21:47 2010
From: eric at trueblade.com (Eric Smith)
Date: Wed, 02 Jun 2010 10:21:47 -0400
Subject: [Python-ideas] Date/time literals
In-Reply-To: <4C065256.6070100@gmail.com>
References: <AANLkTim-r9dS1KOYwPUfWsqVUYzTuJEKiuYWe2Z1HU4o@mail.gmail.com>	<AANLkTikrRUh_PASlTRFQuUJCaDA98SRXkt5qQY7Rpft7@mail.gmail.com>	<AE9FEEDF-4747-4A30-9E40-93195140B259@masklinn.net>	<AANLkTimpVpZqmZoLRjYlSmpYmueVCIljFrCx2qj6gylD@mail.gmail.com>	<AANLkTimTSNdmNajL4VjbqZiA_g3cj2H2NHEZBALW14nj@mail.gmail.com>	<AANLkTilCg-orxw-wHjkpC3LZXjZs76ptvci49WRWaanc@mail.gmail.com>	<AANLkTilv-hVCPKPpkZvJKRkBVeWnxHW2_Csodonet4f4@mail.gmail.com>	<AANLkTikLZ-UnYtRywbrzppMbtHPumxVqjLv-30KIwY4G@mail.gmail.com>	<AANLkTimrElq63LgFaDyRGqyKVwN8SrU-eSLONSSOy_KR@mail.gmail.com>	<AANLkTilRp-VZ6gQir4tVbXNGcZ-jWBHPBikkTDTyQSps@mail.gmail.com>
	<4C065256.6070100@gmail.com>
Message-ID: <4C0668FB.5020200@trueblade.com>

Nick Coghlan wrote:

> The case for allowing a "60" value for seconds in the datetime 
> constructor seems reasonable to me (i.e. prevent leap seconds from 
> breaking date parsing), but I don't see the use case for delaying 
> normalisation to a valid POSIX time.
> 
> If the constructor just converts the 60 to a zero and adds 1 minute 
> immediately, then the chance of subtle breakages would be minimal and 
> the current ValueError would be replaced by a far more graceful behaviour.

I think this is the best we can do and not get sucked into supporting 
leap seconds.

> (Allowing 2400 hours just seems plain odd to me, but if it was adopted 
> I'd suggest immediate normalisation be similarly applied in that case as 
> well).

I'm not as concerned about this, but I've had occasions where it would 
have been handy.

-- 
Eric.


From ianb at colorstudy.com  Wed Jun  2 17:03:30 2010
From: ianb at colorstudy.com (Ian Bicking)
Date: Wed, 2 Jun 2010 10:03:30 -0500
Subject: [Python-ideas] stdlib upgrades
In-Reply-To: <4C062EAC.60302@egenix.com>
References: <AANLkTin_CD2aZ2xlgo4Uh_MPYQ7ajFMqOwt-L6gOR_nR@mail.gmail.com> 
	<AANLkTikbgeXYippkWCP8O3ULPnAd9HoOnCSfYIwWdla-@mail.gmail.com> 
	<AANLkTik4LlDnn71hnGasmsZnszUguGq6WhNrGgn0pNUQ@mail.gmail.com> 
	<4C062EAC.60302@egenix.com>
Message-ID: <AANLkTikzROznAwgL4xPi-oiZa6LXLMYHCmQBRzuauTGP@mail.gmail.com>

On Wed, Jun 2, 2010 at 5:13 AM, M.-A. Lemburg <mal at egenix.com> wrote:

> While I played with this idea a long time ago as well, I have
> since found that it causes more trouble than it's worth.
>
> Apart from having the user to maintain at least two different
> versioned packages (Python and (part of) the stdlib), it also
> causes problems if you use this Python installation for more
> than one project: it's easily possible to have project A require
> version 2 of a stdlib module and project B version 3 of that
> same module.
>

This exists for normal libraries currently, and using virtualenv I've found
it to be manageable.  It does require process separation (and sys.path
separation) in some cases.

I agree that global upgrades are dangerous.  distutils2/pip may be different
because those tools don't generally get used except when managing a project,
and very few projects will require any particular version of these
libraries.

> If you then load both projects in an application, you end up
> either with a broken project A or B (depending on whether you have
> version 2 or 3 of that stdlib module installed), or you allow
> loading multiple versions of the same module, in which case you
> will likely break your application, since it will find multiple
> class implementations (and objects) for the same instances.
> 
> Things like exception catching, pickling (and esp. unpickling),
> security checks based on classes, interface adapters and even
> simply isinstance() checks would then fail in various hard to
> reproduce ways.
>

Yes, multiple versions of a library loaded at the same time is not a good
idea.


> IMHO, we've so far done well by issuing new Python patch level
> releases whenever there was a problem in the stdlib (and only
> then).
>
> Introducing new features by way of updates is left to
> minor releases, which then require more testing by the
> user.
>
> This additional testing is what causes many corporates to
> not follow the Python release cycle or skip a few minor
> releases: the work involved often just doesn't warrant the
> advantages of the added new features.
>

Yes, and so applications and libraries have to work around bugs instead of
using fixed versions, generally making upgrades even more danger-prone.  In
the case of package management, the hardest libraries to support are those
libraries that have included a large number of fixes for installation
problems in their setup.py.

Futzing around with most of the standard library right now would just add
complexity, and applying changes that might be more aesthetic than
functional would be a really bad choice and lead to tedious discussions.
But new functionality can't usefully *just* exist in the standard library
because basically no one is using 2.7, few people are using 3.x, and lots of
people are using 2.5 or at best 2.6 -- so new functionality should be
available to all those people.  Which means there *have* to be releases of
any new functionality.  argparse was already released, and so there will be
"argparse" out in the wild that anyone can install on any version of Python
shadowing the existing module.  unittest improvements are being released as
unittest2... meaning I guess that the "proper" way to use that functionality
would be:

import sys
if sys.version_info >= (2, 7):
    import unittest
else:
    import unittest2 as unittest


> The situations won't get any better if we start releasing
> partial or complete stdlib updates even more often.
>

That a stdlib release means potentially *any* part of the standard library
could have been upgraded (even though it probably won't be) will probably
throw people off.

The advantage of versions on specific functionality is that you can upgrade
just what you care about.  It's much less burdensome to test something that
actually fixes a problem for you, and of course people do that all the time
with non-standard libraries.

-- 
Ian Bicking  |  http://blog.ianbicking.org

From mal at egenix.com  Wed Jun  2 18:35:39 2010
From: mal at egenix.com (M.-A. Lemburg)
Date: Wed, 02 Jun 2010 18:35:39 +0200
Subject: [Python-ideas] stdlib upgrades
In-Reply-To: <AANLkTikzROznAwgL4xPi-oiZa6LXLMYHCmQBRzuauTGP@mail.gmail.com>
References: <AANLkTin_CD2aZ2xlgo4Uh_MPYQ7ajFMqOwt-L6gOR_nR@mail.gmail.com>
	<AANLkTikbgeXYippkWCP8O3ULPnAd9HoOnCSfYIwWdla-@mail.gmail.com>
	<AANLkTik4LlDnn71hnGasmsZnszUguGq6WhNrGgn0pNUQ@mail.gmail.com>
	<4C062EAC.60302@egenix.com>
	<AANLkTikzROznAwgL4xPi-oiZa6LXLMYHCmQBRzuauTGP@mail.gmail.com>
Message-ID: <4C06885B.9000505@egenix.com>

Ian Bicking wrote:
>> IMHO, we've so far done well by issuing new Python patch level
>> releases whenever there was a problem in the stdlib (and only
>> then).
>>
>> Introducing new features by way of updates is left to
>> minor releases, which then require more testing by the
>> user.
>>
>> This additional testing is what causes many corporates to
>> not follow the Python release cycle or skip a few minor
>> releases: the work involved often just doesn't warrant the
>> advantages of the added new features.
>>
> 
> Yes, and so applications and libraries have to work around bugs instead of
> using fixed versions, generally making upgrades even more danger-prone.  In
> the case of package management, the hardest libraries to support are those
> libraries that have included a large number of fixes for installation
> problems in their setup.py.

True, but at least you know which work-arounds to remove
in case you upgrade to a new stdlib version which fixes those
problems.

Users put a lot of trust in the reliability of the stdlib as a single
package and are well aware of the fact that using many 3rd party
packages puts them at risk due to sometimes missing interoperability
checks of those packages (which, of course, are hard for the 3rd
party package authors to do, since they can hardly know what
combination a particular user is using).

> Futzing around with most of the standard library right now would just add
> complexity, and applying changes that might be more aesthetic than
> functional would be a really bad choice and lead to tedious discussions.

Agreed.

> But new functionality can't usefully *just* exist in the standard library
> because basically no one is using 2.7, few people are using 3.x, and lots of
> people are using 2.5 or at best 2.6 -- so new functionality should be
> available to all those people.  Which means there *has* to be releases of
> any new functionality.  argparse was already released, and so there will be
> "argparse" out in the wild that anyone can install on any version of Python
> shadowing the existing module.  unittest improvements are being released as
> unittest2... meaning I guess that the "proper" way to use that functionality
> would be:
> 
> import sys
> if sys.version_info >= (2, 7):
>     import unittest
> else:
>     import unittest2 as unittest

True, and that's how most code bases I've seen work: they start
with a 3rd party package and then revert to the stdlib one once
it's integrated.

>> The situations won't get any better if we start releasing
>> partial or complete stdlib updates even more often.
> 
> That a stdlib release means potentially *any* part of the standard library
> could have been upgraded (even though it probably won't be) probably will
> throw people off.

I'm not sure about that one. Users I've worked with typically trust the
interoperability of the stdlib a lot more than that of a set of 3rd party
packages, and even though there may be lots of different changes in the
stdlib, the package as a whole is assumed to be more robust and better
tested than the average set of 3rd party packages (even though this may
not be a true assumption).

> The advantage of versions on specific functionality is that you can upgrade
> just what you care about.  It's much less burdensome to test something that
> actually fixes a problem for you, and of course people do that all the time
> with non-standard libraries.

True, that's why people add work-arounds or specific fixes
for the stdlib to their setup.py ... so that they can (conditionally)
remove them again when the next Python version is released.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 02 2010)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2010-07-19: EuroPython 2010, Birmingham, UK                46 days to go

::: Try our new mxODBC.Connect Python Database Interface for free ! ::::


   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
    D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
           Registered at Amtsgericht Duesseldorf: HRB 46611
               http://www.egenix.com/company/contact/


From raymond.hettinger at gmail.com  Wed Jun  2 18:37:07 2010
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Wed, 2 Jun 2010 09:37:07 -0700
Subject: [Python-ideas] An identity dict
In-Reply-To: <loom.20100601T230120-288@post.gmane.org>
References: <loom.20100530T052013-34@post.gmane.org>
	<htta0r$9l5$1@dough.gmane.org>
	<loom.20100601T043013-773@post.gmane.org>
	<7AC8DB63-DAD6-46EA-89B1-AA339E4D7B43@gmail.com>
	<loom.20100601T230120-288@post.gmane.org>
Message-ID: <DE7CC8B0-B7A9-499F-ABDE-E8D2961DCE5A@gmail.com>


>> * In the examples you posted (such as
>> http://codespeak.net/svn/pypy/trunk/pypy/tool/algo/graphlib.py ),
>> it appears that PyPy already has an identity dict, so how are they helped by
>> adding one to the collections module?
> 
> My purpose with those examples was to prove it as a generally useful utility.
> 
>> * Most of the posted examples already work with regular dicts (which check
>> identity before they check equality) -- don't the other implementations already
>> implement regular dicts which need to have identity-implied-equality in order to
>> pass the test suite?  I would expect the following snippet to work under all
>> versions and implementations of Python:
>> 
>>     >>> class A:
>>     ...         pass
>>     >>> a = A()
>>     >>> d = {a: 10}
>>     >>> assert d[a] == 10   # uses a's identity for lookup
> 
> Yes, but that would be different if you have two "a"s with __eq__ defined to be
> equal and you want to hash them separately.

None of the presented examples take advantage of that property.
All of them work with regular dictionaries.   This proposal is still
use case challenged.

AFAICT from code searches, the idea of needing to override
an existing __eq__ with an identity-only comparison seems
to never come up.  It would not even be popular as an ASPN recipe.

Moreover, I think that including it in the standard library would be harmful.
The language makes very few guarantees about object identity.
In most cases a user would far better off using a regular dictionary.
If a rare case arose where __eq__ needed to be overridden with an
identity-only check, it is not hard to write d[id(obj)]=value.  

Strong -1 on including this in the standard library.


Raymond


P.S.  ISTM that including subtly different variations of a data type
does more harm than good.   Understanding how to use an
identity dictionary correctly requires understanding the nuances
of object identity, how to keep the object alive outside the dictionary
(even if the dictionary keeps it alive, a user still needs an external reference
to be able to do a lookup), and knowing that the version proposed for
CPython has dramatically worse speed/space performance than
a regular dictionary.  The very existence of an identity dictionary in
collections is likely to distract a user away from a better solution using:
d[id(obj)]=value.
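For what it's worth, the usual way to make the d[id(obj)] pattern safe is to keep the key object alive alongside its value (a sketch; `Node`, `remember` and `lookup` are illustrative names, not from any library):

```python
class Node:
    pass

memo = {}  # id(obj) -> (obj, value)

def remember(obj, value):
    # Holding a reference to obj prevents it from being garbage
    # collected, so its id() cannot be reused by a new object.
    memo[id(obj)] = (obj, value)

def lookup(obj):
    return memo[id(obj)][1]

a, b = Node(), Node()
remember(a, "first")
remember(b, "second")
print(lookup(a), lookup(b))  # first second
```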

From benjamin at python.org  Wed Jun  2 19:38:46 2010
From: benjamin at python.org (Benjamin Peterson)
Date: Wed, 2 Jun 2010 17:38:46 +0000 (UTC)
Subject: [Python-ideas] An identity dict
References: <loom.20100530T052013-34@post.gmane.org>
	<htta0r$9l5$1@dough.gmane.org>
	<loom.20100601T043013-773@post.gmane.org>
	<7AC8DB63-DAD6-46EA-89B1-AA339E4D7B43@gmail.com>
	<loom.20100601T230120-288@post.gmane.org>
	<DE7CC8B0-B7A9-499F-ABDE-E8D2961DCE5A@gmail.com>
Message-ID: <loom.20100602T191755-962@post.gmane.org>

Raymond Hettinger <raymond.hettinger at ...> writes:
> None of the presented examples take advantage of that property.
> All of them work with regular dictionaries.   This proposal is still
> use case challenged.

Besides that ASPN recipe of mine I mentioned before [1], here are some
more examples:

- copy.deepcopy() and pickle use it for an object memo.

- keeping track of protocol versions in Twisted. [2]

- memo in a serialization protocol. [3]


[1] http://code.activestate.com/recipes/577242-calling-c-level-finalizers-without-__del__/
[2] http://twistedmatrix.com/trac/browser/trunk/twisted/persisted/styles.py
[3] http://twistedmatrix.com/trac/browser/trunk/twisted/spread/jelly.py

> 
> AFAICT from code searches, the idea of needing to override
> an existing __eq__ with an identity-only comparison seems
> to never come up.  It would not even be popular as an ASPN recipe.
> 
> Moreover, I think that including it in the standard library would be
> harmful.
> The language makes very few guarantees about object identity.
> In most cases a user would far better off using a regular dictionary.
> If a rare case arose where __eq__ needed to be overridden with an
> identity-only check, it is not hard to write d[id(obj)]=value.  

On the other hand, d[id(obj)] can be dangerous and incorrect on CPython:

>>> d = {}
>>> d[id([])] = 10
>>> d[id([])]
10

> 
> Strong -1 on including this in the standard library.

How do you feel about Antoine's keyfuncdict proposal?

> 
> Raymond
> 
> P.S.  ISTM that including subtly different variations of a data type
> does more harm than good.   Understanding how to use an
> identity dictionary correctly requires understanding the nuances
> of object identity,

We're all adults here. We provide WeakKeyDictionary and friends even though
they rely on unpredictable and subtle garbage collection.

> how to keep the object alive outside the dictionary
> (even if the dictionary keeps it alive, a user still needs an
> external reference
> to be able to do a lookup), and knowing that the version proposed for
> CPython has dramatically worse speed/space performance than
> a regular dictionary.  The very existence of an identity dictionary in
> collections is likely to distract a user away from a better solution
> using: d[id(obj)]=value.

I would argue that that's not a better solution, given the above example.
Anyone using id(obj) would have to understand the nuances of object
identity perhaps more than with a real identity dictionary.






From benjamin at python.org  Wed Jun  2 19:42:38 2010
From: benjamin at python.org (Benjamin Peterson)
Date: Wed, 2 Jun 2010 17:42:38 +0000 (UTC)
Subject: [Python-ideas] An identity dict
References: <loom.20100530T052013-34@post.gmane.org>
	<htta0r$9l5$1@dough.gmane.org>
	<loom.20100530T162050-351@post.gmane.org>
	<4C030321.4050803@canterbury.ac.nz>
	<89E5DB78-B304-4A9F-B140-96888B2FCCC7@gmail.com>
Message-ID: <loom.20100602T194137-968@post.gmane.org>

Raymond Hettinger <raymond.hettinger at ...> writes:
> Also, there hasn't been much discussion of implementation,
> but unless you're willing to copy and paste most of the
> code in dictobject.c, you're going to end-up with something
> much slower than d[id(obj)]=value.

A slightly hacky way to implement this on CPython without copying much would be
to implement __getitem__ and __setitem__ in C and subclass it in Python so that
the rest of the dictionary implementation calls those methods.
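A rough pure-Python sketch of that shape (no C here; this just shows lookups being routed through id() while the key objects are kept alive -- `IdentityDict` is an illustrative name):

```python
class IdentityDict(dict):
    """Sketch of an identity dictionary: keys are stored under id(key),
    and the key object itself is retained so its id stays valid."""

    def __setitem__(self, key, value):
        dict.__setitem__(self, id(key), (key, value))

    def __getitem__(self, key):
        return dict.__getitem__(self, id(key))[1]

    def __contains__(self, key):
        return dict.__contains__(self, id(key))

class AlwaysEqual:
    def __eq__(self, other):   # all instances compare equal...
        return True
    def __hash__(self):
        return 0

x, y = AlwaysEqual(), AlwaysEqual()
d = IdentityDict()
d[x], d[y] = 1, 2
print(d[x], d[y])  # 1 2 -- kept distinct despite comparing equal
```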






From raymond.hettinger at gmail.com  Wed Jun  2 19:58:50 2010
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Wed, 2 Jun 2010 10:58:50 -0700
Subject: [Python-ideas] An identity dict
In-Reply-To: <loom.20100602T191755-962@post.gmane.org>
References: <loom.20100530T052013-34@post.gmane.org>
	<htta0r$9l5$1@dough.gmane.org>
	<loom.20100601T043013-773@post.gmane.org>
	<7AC8DB63-DAD6-46EA-89B1-AA339E4D7B43@gmail.com>
	<loom.20100601T230120-288@post.gmane.org>
	<DE7CC8B0-B7A9-499F-ABDE-E8D2961DCE5A@gmail.com>
	<loom.20100602T191755-962@post.gmane.org>
Message-ID: <EFB97D3B-D267-41FD-95C2-0A7B4EB537E2@gmail.com>


On Jun 2, 2010, at 10:38 AM, Benjamin Peterson wrote:
>> 
>> Strong -1 on including this in the standard library.
> 
> How do you feel about Antoine's keyfuncdict proposal?

His proposal is more interesting because it is more general.
For example, a keyfuncdict may make it easy to implement
a case-insensitive dictionary.   I would like to see a concrete 
implementation to see what design choices it makes. 


Raymond

From raymond.hettinger at gmail.com  Wed Jun  2 20:25:13 2010
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Wed, 2 Jun 2010 11:25:13 -0700
Subject: [Python-ideas] keyfuncdict -- the good, the bad, and the ugly
Message-ID: <CE584604-49D3-4BA1-A86D-964B4A1D68F4@gmail.com>

Was thinking of all the things that could be done with Antoine's generalization:

The Good
========
d = keyfuncdict(key=str.lower)        # case insensitive dict

def track(obj):
    logging.info(obj)
    return obj
d = keyfuncdict(key=track)               # monitored dict

d = keyfuncdict(key=id)                          # makes benjamin happy


The Bad
=======

d = keyfuncdict(key=tuple)                    # lets you use lists as keys

d = keyfuncdict(key=repr)                     # support many kinds of mutable or unhashable keys

d = keyfuncdict(key=pickle.loads)       # use anything picklable as a key

d = keyfuncdict(key=getuser)               # track one most recent entry per user


The Ugly
========

d = keyfuncdict(key=random.random)           # just plain weird

d = keyfuncdict(key=itertools.count().next)   # all entries are unique and unretrievable ;-)

def remove(obj):
     d.pop(obj)
     return obj
d = keyfuncdict(key=remove)                          # self deleting dict ;-)


Raymond




From jimjjewett at gmail.com  Wed Jun  2 21:31:47 2010
From: jimjjewett at gmail.com (Jim Jewett)
Date: Wed, 2 Jun 2010 15:31:47 -0400
Subject: [Python-ideas] keyfuncdict -- the good, the bad, and the ugly
In-Reply-To: <CE584604-49D3-4BA1-A86D-964B4A1D68F4@gmail.com>
References: <CE584604-49D3-4BA1-A86D-964B4A1D68F4@gmail.com>
Message-ID: <AANLkTimjh6nxYoNY4Rb9wh4pGU_yLvgQsAnDbVBU-dxG@mail.gmail.com>

On Wed, Jun 2, 2010 at 2:25 PM, Raymond Hettinger
<raymond.hettinger at gmail.com> wrote:
> Was thinking of all the things that could be done with Antoine's generalization:

> The Good
> ========
> d = keyfuncdict(key=str.lower)        # case insensitive dict

Will the keyfunc be an immutable attribute?  If so, this satisfies
some of the security proxy use cases too.  (Those might also want to
be able to transform the value -- perhaps on the way out.  But
allowing access to key, value, item, and in/out ... I'm not sure
exactly where to draw YAGNI.)


> The Bad
> =======

> d = keyfuncdict(key=tuple)                    # lets you use lists as keys

Obviously, this would be for a limited domain, to avoid tupling
numbers.  But given that ... why is this bad?

Is it just because this might be an attractive nuisance for people who
really ought to be using key=id?  (Same question for key in (repr,
pickle.loads))

> d = keyfuncdict(key=getuser)               # track one most recent entry per user

Assuming a reasonable getuser, what is wrong with this?


> The Ugly
> ========

> d = keyfuncdict(key=random.random)           # just plain weird

This reminds me of the gremlins testing tools, though I admit that I
can't quite come up with a good use.

> d = keyfuncdict(key=itertools.count().next)   # all entries are unique and unretrievable ;-)

But you could still iterate over them, in creation order.  The
advantage over a list is unclear.

> def remove(obj):
>     d.pop(obj)
>     return obj
> d = keyfuncdict(key=remove)                          # self deleting dict ;-)

Again, probably most useful for testing ... is it actually harmful to
allow this?

-jJ


From brett at python.org  Wed Jun  2 23:28:55 2010
From: brett at python.org (Brett Cannon)
Date: Wed, 2 Jun 2010 14:28:55 -0700
Subject: [Python-ideas] stdlib upgrades
In-Reply-To: <20100602095351.60ba0f2c@pitrou.net>
References: <AANLkTin3FPrbml8TjGvwKr3EQw1LpspcuqtB3aXxq9M-@mail.gmail.com> 
	<AANLkTikJ68fG0_om6Y6NYZ5SomnUrdkP_Vqpu8laeMPR@mail.gmail.com> 
	<20100602095351.60ba0f2c@pitrou.net>
Message-ID: <AANLkTinU5O69jZd9JhgxlxEJKCGtrVc4qEJUFoF7uYg7@mail.gmail.com>

On Wed, Jun 2, 2010 at 00:53, Antoine Pitrou <solipsis at pitrou.net> wrote:
> On Tue, 1 Jun 2010 18:22:38 -0700
> Brett Cannon <brett at python.org> wrote:
>>
>> One is that when new modules are accepted into the stdlib they are
>> flagged with a ExpermintalWarning
>
> Are you advocating this specific spelling?

Or ExperimentWarning.

>
>> Otherwise we shift to an annual release schedule, but alternate Python
>> versions have a language moratorium. That would mean only new language
>> features every two years, but a new stdlib annually.
>
> I think this has already been shot down by Guido (I think I was the one
> who asked last time :-)). Basically, even if you aren't adding new
> language features, you are still compelling people to upgrade to a new
> version with (very probably) slight compatibility annoyances.

Not surprised.


From grflanagan at gmail.com  Thu Jun  3 08:40:22 2010
From: grflanagan at gmail.com (Gerard Flanagan)
Date: Thu, 03 Jun 2010 07:40:22 +0100
Subject: [Python-ideas] stdlib upgrades
In-Reply-To: <AANLkTin_CD2aZ2xlgo4Uh_MPYQ7ajFMqOwt-L6gOR_nR@mail.gmail.com>
References: <AANLkTin_CD2aZ2xlgo4Uh_MPYQ7ajFMqOwt-L6gOR_nR@mail.gmail.com>
Message-ID: <hu7iom$dam$1@dough.gmane.org>

Tarek Ziadé wrote:
> Hello,
> 
> That's not a new idea, but I'd like to throw it here again.
> 
> Some modules/packages in the stdlib are pretty isolated, which means
> that they could be upgraded with no
> harm, independently from the rest. For example the unittest package,
> or the email package.
> 
> Here's an idea:
> 
> 1 - add a version number in each package or module of the stdlib that
> is potentially upgradable
> 
> 2 - create standalone releases of these modules/packages at PyPI, in a
> restricted area 'stdlib upgrades'
>      that can be used only by core devs to upload new versions. Each
> release lists the precise
>      Python versions it's compatible with.
> 

Not a packaging expert, but I think in the context of a virtualenv this 
all makes sense. The ability to have a pip requirements file (for 
example) with

stdlib-email==2.6
stdlib-unittest==2.7

would be a useful flexibility in my view. Any given application or 
library will only exercise a certain subset of stdlib after all. Also it 
might give you more confidence to upgrade to a higher python if you had 
this flexibility.

Whether you wanted to incorporate this in the absence of a virtualenv is 
another question, I suppose.

> 4 - an upgraded package lands in a new specific site-packages
> directory and is loaded *before* the one in Lib
> 

For a quick test, I added a "prioritize_site_packages" function to a 
virtualenv's site.py, which just rearranged sys.path so that anything 
containing the string 'site-packages' came before anything else. Would 
this be sufficient in the general case?
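
The function itself wasn't posted; one plausible sketch of that
rearrangement (the function name comes from the description above, the
rest is assumed):

```python
import sys

def prioritize_site_packages(path=None):
    """Return path entries reordered so 'site-packages' dirs come first."""
    if path is None:
        path = sys.path
    # stable partition: site-packages entries first, everything else after,
    # each group keeping its original relative order
    site = [p for p in path if 'site-packages' in p]
    rest = [p for p in path if 'site-packages' not in p]
    return site + rest

# e.g. called from a virtualenv's site.py:
sys.path[:] = prioritize_site_packages()
```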



From pjenvey at underboss.org  Thu Jun  3 21:02:44 2010
From: pjenvey at underboss.org (Philip Jenvey)
Date: Thu, 3 Jun 2010 12:02:44 -0700
Subject: [Python-ideas] An identity dict
In-Reply-To: <DE7CC8B0-B7A9-499F-ABDE-E8D2961DCE5A@gmail.com>
References: <loom.20100530T052013-34@post.gmane.org>
	<htta0r$9l5$1@dough.gmane.org>
	<loom.20100601T043013-773@post.gmane.org>
	<7AC8DB63-DAD6-46EA-89B1-AA339E4D7B43@gmail.com>
	<loom.20100601T230120-288@post.gmane.org>
	<DE7CC8B0-B7A9-499F-ABDE-E8D2961DCE5A@gmail.com>
Message-ID: <05D86C53-5804-4E3A-A339-4750451DEEEE@underboss.org>


On Jun 2, 2010, at 9:37 AM, Raymond Hettinger wrote:
> 
> Moreover, I think that including it in the standard library would be harmful.
> The language makes very few guarantees about object identity.
> In most cases a user would far better off using a regular dictionary.
> If a rare case arose where __eq__ needed to be overridden with an
> identity-only check, it is not hard to write d[id(obj)]=value.  
> 
> Strong -1 on including this in the standard library.
> 
> 
> P.S.  ISTM that including subtly different variations of a data type
> does more harm than good.   Understanding how to use an
> identity dictionary correctly requires understanding the nuances
> of object identity, how to keep the object alive outside the dictionary
> (even if the dictionary keeps it alive, a user still needs an external reference
> to be able to do a lookup), and knowing that the version proposed for
> CPython has dramatically worse speed/space performance than
> a regular dictionary.  The very existence of an identity dictionary in
> collections is likely to distract a user away from a better solution using:
> d[id(obj)]=value.

>> 
>> Essentially these are places where defined equality should not matter. 
>> 
> Essentially, these are cases where an identity dictionary isn't 
> necessary and would in-fact be worse performance-wise 
> in every implementation except for PyPy which can compile 
> the pure python code for identity_dict.py. 


Using id() is a workaround but again, a potentially expensive one for platforms with moving GCs. Every object calling for an id() forces additional bookkeeping on their ends. This is only a better solution for CPython.

Whereas abstracting this out into an identitydict type gives all platforms the chance to provide their own optimized versions.

> Since instances have a default hash equal to the id and since
> identity-implies-equality for dictionary keys, we already have
> a dictionary that handles these cases.  You don't even
> have to type:  d[id(k)]=value, it would suffice to write:  d[k]=value.


No, the default hash backed by id is a CPython implementation detail.

Another use case is just the fact that Python allows you to completely change the semantics of __eq__ (and for good reason). Though this is rare, take SQLAlchemy's SQL expression DSL for example, that has it generate a where clause:

table.select(table.c.id == 4) # table.c.id == 4 returns a "<sql statement> where id == 4" object

I don't see how a platform like Jython can provide an optimized identitydict that avoids id() calls via keyfuncdict(key=id). The keys() of said dict would need to be actual results of id() calls.

I'm +1 on an identitydict as long as its CPython implementation doesn't provide worse performance than the id workaround.

--
Philip Jenvey

From raymond.hettinger at gmail.com  Thu Jun  3 21:43:10 2010
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Thu, 3 Jun 2010 12:43:10 -0700
Subject: [Python-ideas] An identity dict
In-Reply-To: <05D86C53-5804-4E3A-A339-4750451DEEEE@underboss.org>
References: <loom.20100530T052013-34@post.gmane.org>
	<htta0r$9l5$1@dough.gmane.org>
	<loom.20100601T043013-773@post.gmane.org>
	<7AC8DB63-DAD6-46EA-89B1-AA339E4D7B43@gmail.com>
	<loom.20100601T230120-288@post.gmane.org>
	<DE7CC8B0-B7A9-499F-ABDE-E8D2961DCE5A@gmail.com>
	<05D86C53-5804-4E3A-A339-4750451DEEEE@underboss.org>
Message-ID: <087642E9-29FA-44A6-B4F3-BEC42D1D3C22@gmail.com>


>> P.S.  ISTM that including subtly different variations of a data type
>> does more harm than good.   Understanding how to use an
>> identity dictionary correctly requires understanding the nuances
>> of object identity, how to keep the object alive outside the dictionary
>> (even if the dictionary keeps it alive, a user still needs an external reference
>> to be able to do a lookup), and knowing that the version proposed for
>> CPython has dramatically worse speed/space performance than
>> a regular dictionary.  The very existence of an identity dictionary in
>> collections is likely to distract a user away from a better solution using:
>> d[id(obj)]=value.
> 
>>> 
>>> Essentially these are places where defined equality should not matter. 
>>> 
>> Essentially, these are cases where an identity dictionary isn't 
>> necessary and would in-fact be worse performance-wise 
>> in every implementation except for PyPy which can compile 
>> the pure python code for identity_dict.py. 
> 
> 
> Using id() is a workaround but again, a potentially expensive one for platforms with moving GCs. Every object calling for an id() forces additional bookkeeping on their ends. This is only a better solution for CPython.

To be clear, most of the examples given so far work with regular dictionaries even without using id().  The exception was something system specific such as the pickling mechanism.

So what we're talking about is the comparatively rare case when an object has an __eq__ method and you want that method to be ignored.  For example, you have two tuples (3,5) and (3,5) which are equal but happen to be distinct in memory and your needs are:
* to treat the two equal objects as being distinct for some purpose
* to run faster than id() runs on non-CPython implementations
* don't care if the code is dog slow on CPython (i.e. slower than if you had used id())
* don't care that the two tuples being distinct in memory is not a guaranteed behavior across implementations (i.e. any implementation is free to make all equal tuples share the same id via interning)
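
The id() workaround, sketched with the bookkeeping needed to keep keys
alive (a hypothetical wrapper for illustration, not proposed stdlib code):

```python
class IdentityDict(object):
    """Sketch of the d[id(obj)] = value workaround.

    Storing the key alongside the value keeps it alive, so its id()
    cannot be recycled for a new object while the entry exists.
    """

    def __init__(self):
        self._data = {}          # id(key) -> (key, value)

    def __setitem__(self, key, value):
        self._data[id(key)] = (key, value)

    def __getitem__(self, key):
        return self._data[id(key)][1]

    def __len__(self):
        return len(self._data)

# tuple([...]) defeats constant folding, guaranteeing distinct objects
t1, t2 = tuple([3, 5]), tuple([3, 5])
d = IdentityDict()
d[t1] = 'first'
d[t2] = 'second'
print(len(d))   # -> 2, even though t1 == t2
```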

FWIW, I spoke with Jim Baker about this yesterday and he believes that Jython has no need for an identity dict.


Raymond


P.S. If Antoine's keyfuncdict proposal gains traction, it would be possible for other implementations to create a fast special case for key=id. 

From solipsis at pitrou.net  Thu Jun  3 22:04:29 2010
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 3 Jun 2010 22:04:29 +0200
Subject: [Python-ideas] An identity dict
References: <loom.20100530T052013-34@post.gmane.org>
	<htta0r$9l5$1@dough.gmane.org>
	<loom.20100601T043013-773@post.gmane.org>
	<7AC8DB63-DAD6-46EA-89B1-AA339E4D7B43@gmail.com>
	<loom.20100601T230120-288@post.gmane.org>
	<DE7CC8B0-B7A9-499F-ABDE-E8D2961DCE5A@gmail.com>
	<05D86C53-5804-4E3A-A339-4750451DEEEE@underboss.org>
Message-ID: <20100603220429.0e727428@pitrou.net>

On Thu, 3 Jun 2010 12:02:44 -0700
Philip Jenvey <pjenvey at underboss.org> wrote:
> 
> Using id() is a workaround but again, a potentially expensive one for platforms with moving
> GCs. Every object calling for an id() forces additional bookkeeping on their ends. This is
> only a better solution for CPython.

Well, CPython and all other implementations with a non-moving GC.

> Whereas abstracting this out into an identitydict type gives all platforms the chance to
> provide their own optimized versions.

That's really premature optimization.

Regards

Antoine.




From cool-rr at cool-rr.com  Fri Jun  4 01:17:46 2010
From: cool-rr at cool-rr.com (Ram Rachum)
Date: Thu, 3 Jun 2010 23:17:46 +0000 (UTC)
Subject: [Python-ideas] reiter: decorator to make generators reiterable
References: <4C02857B.2030502@gmx.net>
Message-ID: <loom.20100604T011428-696@post.gmane.org>

Mathias Panzenböck <grosser.meister.morti at ...> writes:

> 
> I think this decorator sould be included in itertools:
> 
> from functools import wraps
> 
> class ReIter(object):
> 
>     ...
> 
> Or is there already such a thing and I missed it?
> 
> 	-panzi

It looks pretty cool to me. I'll probably include it in my personal project, and 
it would be cool if it got added to itertools. (Along with many other things 
that should be added to itertools.)

Ram.
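
The ReIter body was elided in the quoted message; one plausible
reconstruction of such a decorator is to restart the generator function
every time iter() is called on the result:

```python
from functools import wraps

class ReIter(object):
    """Wraps a generator function call so it can be iterated repeatedly."""

    def __init__(self, func, args, kwargs):
        self.func, self.args, self.kwargs = func, args, kwargs

    def __iter__(self):
        # each iteration starts a fresh generator
        return self.func(*self.args, **self.kwargs)

def reiter(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        return ReIter(func, args, kwargs)
    return wrapper

@reiter
def countdown(n):
    while n:
        yield n
        n -= 1

c = countdown(3)
print(list(c), list(c))   # the same object can be iterated twice
```

Note this only works when the wrapped function's arguments can be reused
(e.g. it would silently misbehave if an argument were itself a one-shot
iterator).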





From lie.1296 at gmail.com  Fri Jun  4 22:19:32 2010
From: lie.1296 at gmail.com (Lie Ryan)
Date: Sat, 05 Jun 2010 06:19:32 +1000
Subject: [Python-ideas] lack of time zone support
In-Reply-To: <AANLkTilEf7dnxd6TqWhb0bhR3tNlZ5z6dUiLyoZam6aZ@mail.gmail.com>
References: <AANLkTim-r9dS1KOYwPUfWsqVUYzTuJEKiuYWe2Z1HU4o@mail.gmail.com>
	<51245.1275417154@parc.com>
	<AANLkTilEf7dnxd6TqWhb0bhR3tNlZ5z6dUiLyoZam6aZ@mail.gmail.com>
Message-ID: <hubn8e$8nf$1@dough.gmane.org>

On 06/02/10 10:50, Brett Cannon wrote:
> On Tue, Jun 1, 2010 at 11:32, Bill Janssen <janssen at parc.com> wrote:
>> To me, the single most irritating problem with the Python support for
>> date/time is the lack of support for time-zone understanding.  This
>> breaks down into two major issues, %z and lack of a standard time-zone
>> table.
>>
>> First, let's say I have to parse a Medusa log file, which contains time
>> stamps in the form "DD/Mon/YYYY:HH:MM:SS [+|-]HHMM", e.g.
>> "31/May/2010:07:10:04 -0800".  What I'd like to write is
>>
>>  tm = time.mktime(time.strptime(timestamp, "%d/%b/%Y:%H:%M:%S %z"))
>>
>> which is what I'd do if I was writing in C.  But no!  The Python
>> _strptime module doesn't support "%z".  So instead, I have to pick the
>> timestamp apart and do things separately and remember that "-0800" isn't
>> octal, and also isn't the same as -800, and remember whether to add or
>> subtract it.  This seems insane.  So, IMO, support for %z should be
>> added to Lib/_strptime.py.  We need a patch.
>>
>> Secondly, we really need concrete subclasses of tzinfo, and some sort of
>> mapping.  Lots of people have spent lots of time trying to figure out
>> this cryptic hint in datetime: "The datetime module does not supply any
>> concrete subclasses of tzinfo."  I'm not sure whether pytz is the best
>> ideas, or what I use, the "zoneinfo" module from python-dateutil.  With
>> that, I still have to add the Windows timezone names, using the table at
>> http://unicode.org/repos/cldr/trunk/common/supplemental/windowsZones.xml,
>> because the code in python-dateutil only works with Windows timezone
>> names when running on Windows.
> 
> First of all, there will never be a timezone table in the stdlib,
> period. This has been brought up before and is always shot down
> because python-dev does not want to have to keep track of timezone
> changes. pytz and other modules fit that bill fine.

Has a module to pull timezone data from the OS been proposed before?
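
For reference, the kind of manual workaround Bill describes looks roughly
like this (modern CPython's strptime does accept %z; this sketch targets
the versions discussed here, where _strptime does not):

```python
import calendar
import time

def parse_medusa_timestamp(stamp):
    """Parse 'DD/Mon/YYYY:HH:MM:SS +-HHMM' into UTC epoch seconds."""
    body, offset = stamp.rsplit(' ', 1)
    t = time.strptime(body, "%d/%b/%Y:%H:%M:%S")
    # "-0800" is hours and minutes -- not octal, and not the same as -800
    sign = -1 if offset.startswith('-') else 1
    hours, minutes = int(offset[1:3]), int(offset[3:5])
    # timegm treats the struct as UTC; subtract the offset to correct it
    return calendar.timegm(t) - sign * (hours * 3600 + minutes * 60)

print(parse_medusa_timestamp("31/May/2010:07:10:04 -0800"))
# 07:10:04 -0800 is 15:10:04 UTC
```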



From alexander.belopolsky at gmail.com  Fri Jun  4 22:57:34 2010
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Fri, 4 Jun 2010 16:57:34 -0400
Subject: [Python-ideas] lack of time zone support
In-Reply-To: <hubn8e$8nf$1@dough.gmane.org>
References: <AANLkTim-r9dS1KOYwPUfWsqVUYzTuJEKiuYWe2Z1HU4o@mail.gmail.com>
	<51245.1275417154@parc.com>
	<AANLkTilEf7dnxd6TqWhb0bhR3tNlZ5z6dUiLyoZam6aZ@mail.gmail.com>
	<hubn8e$8nf$1@dough.gmane.org>
Message-ID: <AANLkTilKSRzATEsIUkMlBX5GnmCoh-U_HEdRcTZJZj1N@mail.gmail.com>

> Has a module to pull timezone data from the OS been proposed before?

Yes.

 "We really could use a module that accesses the entire timezone
database but that's even more platform specific." (Guido van Rossum,
2007-03-06, http://bugs.python.org/issue1647654#msg31139).


From george.sakkis at gmail.com  Sun Jun  6 14:30:11 2010
From: george.sakkis at gmail.com (George Sakkis)
Date: Sun, 6 Jun 2010 14:30:11 +0200
Subject: [Python-ideas] @setattr(obj, [name])
Message-ID: <AANLkTikYSlC8kBqfa_7RWg_9y9IQYkXL1BZ0QLcW4FsO@mail.gmail.com>

It would be nice if setattr() was extended to allow usage as a decorator:

class Foo(object):
    pass

@setattr(Foo)
def bar(self):
    print 'bar'

@setattr(Foo, 'baz')
def get_baz(self):
    print 'baz'

>>> Foo().bar()
bar
>>> Foo().baz()
baz

Here's a pure Python implementation:

_setattr = setattr
def setattr(obj, *args):
    if len(args) >= 2:
        return _setattr(obj, *args)
    return lambda f: _setattr(obj, args[0] if args else f.__name__, f) or f

Thoughts ?

George


From g.brandl at gmx.net  Sun Jun  6 15:26:11 2010
From: g.brandl at gmx.net (Georg Brandl)
Date: Sun, 06 Jun 2010 15:26:11 +0200
Subject: [Python-ideas] @setattr(obj, [name])
In-Reply-To: <AANLkTikYSlC8kBqfa_7RWg_9y9IQYkXL1BZ0QLcW4FsO@mail.gmail.com>
References: <AANLkTikYSlC8kBqfa_7RWg_9y9IQYkXL1BZ0QLcW4FsO@mail.gmail.com>
Message-ID: <hug7nr$ulf$1@dough.gmane.org>

Am 06.06.2010 14:30, schrieb George Sakkis:
> It would be nice if setattr() was extended to allow usage as a decorator:
> 
> class Foo(object):
>     pass
> 
> @setattr(Foo)
> def bar(self):
>     print 'bar'
> 
> @setattr(Foo, 'baz')
> def get_baz(self):
>     print 'baz'
> 
>>>> Foo().bar()
> bar
>>>> Foo().baz()
> baz
> 
> Here's a pure Python implementation:
> 
> _setattr = setattr
> def setattr(obj, *args):
>     if len(args) >= 2:
>         return _setattr(obj, *args)
>     return lambda f: _setattr(obj, args[0] if args else f.__name__, f) or f
> 
> Thoughts ?

Since this is useful for functions only, I would not try to overload a
simple builtin, call it def_on() and put it in my utility module:

@def_on(Foo)
def method(self): pass

Georg

-- 
Thus spake the Lord: Thou shalt indent with four spaces. No more, no less.
Four shall be the number of spaces thou shalt indent, and the number of thy
indenting shall be four. Eight shalt thou not indent, nor either indent thou
two, excepting that thou then proceed to four. Tabs are right out.



From anfedorov at gmail.com  Sun Jun  6 18:05:03 2010
From: anfedorov at gmail.com (Andrey Fedorov)
Date: Sun, 6 Jun 2010 12:05:03 -0400
Subject: [Python-ideas] @setattr(obj, [name])
In-Reply-To: <AANLkTikYSlC8kBqfa_7RWg_9y9IQYkXL1BZ0QLcW4FsO@mail.gmail.com>
References: <AANLkTikYSlC8kBqfa_7RWg_9y9IQYkXL1BZ0QLcW4FsO@mail.gmail.com>
Message-ID: <AANLkTilKn80Vjkl2WNecN6vw73vuz7yvk4k3RqwJ_XqS@mail.gmail.com>

George Sakkis wrote:

> Thoughts ?
>

I liked the idea, then realized that I was misunderstanding it deeply. I
would have expected

@setattr('key', "value")
def bar(): pass


make

bar.key == "value"


This was just my intuition from hearing "setattr decorator" and glancing
over your email. But deceiving name aside, I think this is a useful
decorator. I would just call it @method_of.

- Andrey
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20100606/e1b04c0f/attachment.html>

From dstanek at dstanek.com  Sun Jun  6 18:16:44 2010
From: dstanek at dstanek.com (David Stanek)
Date: Sun, 6 Jun 2010 12:16:44 -0400
Subject: [Python-ideas] @setattr(obj, [name])
In-Reply-To: <AANLkTikYSlC8kBqfa_7RWg_9y9IQYkXL1BZ0QLcW4FsO@mail.gmail.com>
References: <AANLkTikYSlC8kBqfa_7RWg_9y9IQYkXL1BZ0QLcW4FsO@mail.gmail.com>
Message-ID: <AANLkTikRGAyin-yy-IFYlNdz83-cYcQrhlaEjjLD6pDg@mail.gmail.com>

On Sun, Jun 6, 2010 at 8:30 AM, George Sakkis <george.sakkis at gmail.com> wrote:
>
> Thoughts ?
>

I'm not sure I understand why you would want to do this. I may just be
combative because I hate the over use of decorators.


-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek


From george.sakkis at gmail.com  Sun Jun  6 22:26:07 2010
From: george.sakkis at gmail.com (George Sakkis)
Date: Sun, 6 Jun 2010 22:26:07 +0200
Subject: [Python-ideas] @setattr(obj, [name])
In-Reply-To: <AANLkTilKn80Vjkl2WNecN6vw73vuz7yvk4k3RqwJ_XqS@mail.gmail.com>
References: <AANLkTikYSlC8kBqfa_7RWg_9y9IQYkXL1BZ0QLcW4FsO@mail.gmail.com>
	<AANLkTilKn80Vjkl2WNecN6vw73vuz7yvk4k3RqwJ_XqS@mail.gmail.com>
Message-ID: <AANLkTimtybi_8DsQ_PmTrSi8MUZrJcT9gyMZF2ab3RW4@mail.gmail.com>

On Sun, Jun 6, 2010 at 6:05 PM, Andrey Fedorov <anfedorov at gmail.com> wrote:

> George Sakkis wrote:
>>
>> Thoughts ?
>
> I liked the idea, then realized that I was misunderstanding it deeply. I
> would have expected
>
> @setattr('key', "value")
> def bar(): pass
>
> make
>
> bar.key == "value"

Yeah that would be a useful decorator too; I'd actually use **kwargs
so that you could do multiple bindings at once with
@setattr(key1=value1, key2=value2, ...).
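
A sketch of that **kwargs variant (the name with_attrs is made up here to
avoid shadowing the builtin):

```python
def with_attrs(**kwargs):
    """Decorator factory: bind each keyword argument onto the function."""
    def decorator(f):
        for name, value in kwargs.items():
            setattr(f, name, value)
        return f
    return decorator

@with_attrs(key='value', version=2)
def bar():
    pass

print(bar.key, bar.version)   # -> value 2
```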

> This was just my intuition from hearing "setattr decorator" and glancing
> over your email. But deceiving name aside, I think this is a useful
> decorator. I would just call it @method_of.

I agree that overloading setattr() would be a bad idea given the two
(at least) different interpretations. Still "method_of" is not quite
right either since it can also be used as a class decorator; moreover
the 'obj' argument does not have to be a class, it can be a plain
instance.

George


From george.sakkis at gmail.com  Sun Jun  6 22:43:22 2010
From: george.sakkis at gmail.com (George Sakkis)
Date: Sun, 6 Jun 2010 22:43:22 +0200
Subject: [Python-ideas] @setattr(obj, [name])
In-Reply-To: <AANLkTikRGAyin-yy-IFYlNdz83-cYcQrhlaEjjLD6pDg@mail.gmail.com>
References: <AANLkTikYSlC8kBqfa_7RWg_9y9IQYkXL1BZ0QLcW4FsO@mail.gmail.com>
	<AANLkTikRGAyin-yy-IFYlNdz83-cYcQrhlaEjjLD6pDg@mail.gmail.com>
Message-ID: <AANLkTin2F8Zut-p9nUNo9RFdXgd23Rc-IMPmVnLAiz-u@mail.gmail.com>

On Sun, Jun 6, 2010 at 6:16 PM, David Stanek <dstanek at dstanek.com> wrote:
> On Sun, Jun 6, 2010 at 8:30 AM, George Sakkis <george.sakkis at gmail.com> wrote:
>>
>> Thoughts ?
>>
>
> I'm not sure I understand why you would want to do this. I may just be
> combative because I hate the over use of decorators.

It depends on what you mean by "this"; binding a function (or class)
to an object as an attribute, or using a decorator to achieve it ? I
find a decorator quite elegant in this case, although the use case
itself is admittedly much less common in Python than, say, Javascript
(with ``obj.property = function(x,y) {...}`` expressions everywhere).

George


From lie.1296 at gmail.com  Mon Jun  7 00:21:09 2010
From: lie.1296 at gmail.com (Lie Ryan)
Date: Mon, 07 Jun 2010 08:21:09 +1000
Subject: [Python-ideas] @setattr(obj, [name])
In-Reply-To: <AANLkTikYSlC8kBqfa_7RWg_9y9IQYkXL1BZ0QLcW4FsO@mail.gmail.com>
References: <AANLkTikYSlC8kBqfa_7RWg_9y9IQYkXL1BZ0QLcW4FsO@mail.gmail.com>
Message-ID: <huh74h$p7$1@dough.gmane.org>

On 06/06/10 22:30, George Sakkis wrote:
> Thoughts ?

Reminds me of C++:

class Foo {
    void bar(int baz);
}

void Foo::bar(int baz) {
    ...
}

that's not to say that the decorator isn't useful though. So +0.1.



From rrr at ronadam.com  Mon Jun  7 08:33:18 2010
From: rrr at ronadam.com (Ron Adam)
Date: Mon, 07 Jun 2010 01:33:18 -0500
Subject: [Python-ideas] stdlib upgrades
In-Reply-To: <AANLkTikJ68fG0_om6Y6NYZ5SomnUrdkP_Vqpu8laeMPR@mail.gmail.com>
References: <AANLkTin3FPrbml8TjGvwKr3EQw1LpspcuqtB3aXxq9M-@mail.gmail.com>
	<AANLkTikJ68fG0_om6Y6NYZ5SomnUrdkP_Vqpu8laeMPR@mail.gmail.com>
Message-ID: <4C0C92AE.5030105@ronadam.com>


On 06/01/2010 08:22 PM, Brett Cannon wrote:

> I can only see two scenarios that might be considered acceptable to
> address these issues.
>
> One is that when new modules are accepted into the stdlib they are
> flagged with a ExpermintalWarning so that people know that no
> backwards-compatibility promises have been made yet. That gets the
> module more exposure and gets python-dev real-world feedback to fix
> issues before the module calcifies into a strong
> backwards-compatibility. With that experience more proper decisions
> can be made as to how to change things (e.g. the logging module's
> default timestamp including microseconds which strptime cannot parse).

Would it be possible to have a future_lib that gets enabled with something 
like...

    from __future__ import future_lib

These *new* library modules and packages won't be visible by default. 
Maybe they stay there until the next major version or possible some set 
period of time.

Ron


From lie.1296 at gmail.com  Mon Jun  7 12:23:13 2010
From: lie.1296 at gmail.com (Lie Ryan)
Date: Mon, 07 Jun 2010 20:23:13 +1000
Subject: [Python-ideas] stdlib upgrades
In-Reply-To: <4C0C92AE.5030105@ronadam.com>
References: <AANLkTin3FPrbml8TjGvwKr3EQw1LpspcuqtB3aXxq9M-@mail.gmail.com>	<AANLkTikJ68fG0_om6Y6NYZ5SomnUrdkP_Vqpu8laeMPR@mail.gmail.com>
	<4C0C92AE.5030105@ronadam.com>
Message-ID: <huihej$e44$1@dough.gmane.org>

On 06/07/10 16:33, Ron Adam wrote:
> 
> On 06/01/2010 08:22 PM, Brett Cannon wrote:
> 
>> I can only see two scenarios that might be considered acceptable to
>> address these issues.
>>
>> One is that when new modules are accepted into the stdlib they are
>> flagged with a ExpermintalWarning so that people know that no
>> backwards-compatibility promises have been made yet. That gets the
>> module more exposure and gets python-dev real-world feedback to fix
>> issues before the module calcifies into a strong
>> backwards-compatibility. With that experience more proper decisions
>> can be made as to how to change things (e.g. the logging module's
>> default timestamp including microseconds which strptime cannot parse).
> 
> Would it be possible to have a future_lib that gets enabled with
> something like...
> 
>    from __future__ import future_lib
> 
> These *new* library modules and packages won't be visible by default.
> Maybe they stay there until the next major version or possible some set
> period of time.

reading that gives me a chuckle. Perhaps that would be a vote to change
the name 'future'.



From george.sakkis at gmail.com  Mon Jun  7 12:51:51 2010
From: george.sakkis at gmail.com (George Sakkis)
Date: Mon, 7 Jun 2010 12:51:51 +0200
Subject: [Python-ideas] Callable properties
Message-ID: <AANLkTiknIr4bhNlXcx2KdcrRXKE9OVUpRGaBXJxNm5_d@mail.gmail.com>

I'm wondering if there is any downside in making properties callable:

class callableproperty(property):
    def __call__(self, obj):
        return self.fget(obj)

class Foo(object):
    @property
    def bar(self):
        return self

    @callableproperty
    def baz(self):
        return self


>>> foo = Foo()
>>> foo.baz is Foo.baz(foo)
True
>>> foo.bar is Foo.bar(foo)
...
TypeError: 'property' object is not callable


As for the motivation, having callable properties would make it easier
to stack them with other decorators that typically expect callables.
Am I missing something ?

George


From masklinn at masklinn.net  Mon Jun  7 13:06:43 2010
From: masklinn at masklinn.net (Masklinn)
Date: Mon, 7 Jun 2010 13:06:43 +0200
Subject: [Python-ideas] Callable properties
In-Reply-To: <AANLkTiknIr4bhNlXcx2KdcrRXKE9OVUpRGaBXJxNm5_d@mail.gmail.com>
References: <AANLkTiknIr4bhNlXcx2KdcrRXKE9OVUpRGaBXJxNm5_d@mail.gmail.com>
Message-ID: <6DA25C56-56E8-4574-9E05-EC9C221B367D@masklinn.net>

On 2010-06-07, at 12:51 , George Sakkis wrote:
> 
> I'm wondering if there is any downside in making properties callable:
It already exists, it's called a method.

Due to the way calling works in Python (you get a callable object and
you apply the `()` operator to it), I don't think it's possible to
discriminate based on context so that the same operation is performed
whether or not the value is called.

Your best bet would probably be to wrap the output of the property
in a subtype of itself (dynamically created subtype) able to return
self on call. Or you just create a lambda wrapping the property call.



From fuzzyman at voidspace.org.uk  Mon Jun  7 13:34:06 2010
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Mon, 7 Jun 2010 12:34:06 +0100
Subject: [Python-ideas] Callable properties
In-Reply-To: <6DA25C56-56E8-4574-9E05-EC9C221B367D@masklinn.net>
References: <AANLkTiknIr4bhNlXcx2KdcrRXKE9OVUpRGaBXJxNm5_d@mail.gmail.com>
	<6DA25C56-56E8-4574-9E05-EC9C221B367D@masklinn.net>
Message-ID: <AANLkTin7gB0e_VAcvwV93wlaaQYK3v1jzuCPer3DhpVe@mail.gmail.com>

On 7 June 2010 12:06, Masklinn <masklinn at masklinn.net> wrote:

> On 2010-06-07, at 12:51 , George Sakkis wrote:
> >
> > I'm wondering if there is any downside in making properties callable:
> It already exists, it's called a method.
>
> Due to the way calling works in Python (you get a callable object and
>  you apply the `()` operator to it) I don't think it's possible to
> discriminate based on the context to perform the same operation
> whether or not the value is called.
>
> Your best bet would probably be to wrap the output of the property
> in a subtype of itself (dynamically created subtype) able to return
> self on call. Or you just create a lambda wrapping the property call.
>

I think you misunderstood, he was suggesting making the property descriptor
instances callable.

Not a bad idea, but as a change to a builtin it would be covered by the
language moratorium. Easy to do in a subclass of property though.

At the moment you do the following, which is a bit ugly:

>>> class Foo(object):
...  @property
...  def foo(self):
...   return 'foo'
...
>>> f = Foo()
>>> Foo.foo.__get__(f)
'foo'

The reason for wanting to do this are the same reasons as you would call an
unbound method.

Michael


>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> http://mail.python.org/mailman/listinfo/python-ideas
>



-- 
http://www.voidspace.org.uk
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20100607/232f0bc6/attachment.html>

From fuzzyman at voidspace.org.uk  Mon Jun  7 13:46:19 2010
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Mon, 7 Jun 2010 12:46:19 +0100
Subject: [Python-ideas] Callable properties
In-Reply-To: <AANLkTiknIr4bhNlXcx2KdcrRXKE9OVUpRGaBXJxNm5_d@mail.gmail.com>
References: <AANLkTiknIr4bhNlXcx2KdcrRXKE9OVUpRGaBXJxNm5_d@mail.gmail.com>
Message-ID: <AANLkTim2pDcKSOuve2fIAymACgHx33eOAXkBAz76fK70@mail.gmail.com>

On 7 June 2010 11:51, George Sakkis <george.sakkis at gmail.com> wrote:

> I'm wondering if there is any downside in making properties callable:
>
> class callableproperty(property):
>    def __call__(self, obj):
>        return self.fget(obj)
>
> class Foo(object):
>    @property
>    def bar(self):
>        return self
>
>    @callableproperty
>    def baz(self):
>        return self
>
>
> >>> foo = Foo()
> >>> foo.baz is Foo.baz(foo)
> True
> >>> foo.bar is Foo.bar(foo)
> ...
> TypeError: 'property' object is not callable
>
>
> As for the motivation, having callable properties would make it easier
> to stack them with other decorators that typically expect callables.
> Am I missing something ?
>

Not sure it would specifically help with stacking decorators on properties
though. If you get them in the wrong order then you would not end up with a
property descriptor in the class dict but with an arbitrary callable (or
function which would be wrapped as a method) that no longer behaves as a
property.

Michael


>
> George
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> http://mail.python.org/mailman/listinfo/python-ideas
>



-- 
http://www.voidspace.org.uk

From fdrake at acm.org  Mon Jun  7 15:16:28 2010
From: fdrake at acm.org (Fred Drake)
Date: Mon, 7 Jun 2010 09:16:28 -0400
Subject: [Python-ideas] stdlib upgrades
In-Reply-To: <4C0C92AE.5030105@ronadam.com>
References: <AANLkTin3FPrbml8TjGvwKr3EQw1LpspcuqtB3aXxq9M-@mail.gmail.com> 
	<AANLkTikJ68fG0_om6Y6NYZ5SomnUrdkP_Vqpu8laeMPR@mail.gmail.com> 
	<4C0C92AE.5030105@ronadam.com>
Message-ID: <AANLkTinb0zeldqqPqNzj1DwkSJMxa6PjoZLJhtIGNVXy@mail.gmail.com>

On Mon, Jun 7, 2010 at 2:33 AM, Ron Adam <rrr at ronadam.com> wrote:
> Would it be possible to have a future_lib that gets enabled with something
> like...
>
>   from __future__ import future_lib

This doesn't seem workable, since __future__ imports have local
effects, but the side effect here is really about the global module
space.  How would you expect this to work if the "old" version has
already been imported?


  -Fred

-- 
Fred L. Drake, Jr.    <fdrake at gmail.com>
"Chaos is the score upon which reality is written." --Henry Miller


From fuzzyman at voidspace.org.uk  Mon Jun  7 15:40:36 2010
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Mon, 7 Jun 2010 14:40:36 +0100
Subject: [Python-ideas] Callable properties
In-Reply-To: <AANLkTin7gB0e_VAcvwV93wlaaQYK3v1jzuCPer3DhpVe@mail.gmail.com>
References: <AANLkTiknIr4bhNlXcx2KdcrRXKE9OVUpRGaBXJxNm5_d@mail.gmail.com>
	<6DA25C56-56E8-4574-9E05-EC9C221B367D@masklinn.net>
	<AANLkTin7gB0e_VAcvwV93wlaaQYK3v1jzuCPer3DhpVe@mail.gmail.com>
Message-ID: <AANLkTil3wXz7ZcSNi97ezdGc1lHZ4GbEl80PF6m34cq3@mail.gmail.com>

On 7 June 2010 12:34, Michael Foord <fuzzyman at voidspace.org.uk> wrote:

>
> [snip...]
> I think you misunderstood, he was suggesting making the property descriptor
> instances callable.
>
> Not a bad idea, but as a change to a builtin it would be covered by the
> language moratorium. Easy to do in a subclass of property though.
>
> At the moment you do the following, which is a bit ugly:
>
> >>> class Foo(object):
> ...  @property
> ...  def foo(self):
> ...   return 'foo'
> ...
> >>> f = Foo()
> >>> Foo.foo.__get__(f)
> 'foo'
>
> The reasons for wanting to do this are the same as the reasons you would
> call an unbound method.
>
>
Or this which is still slightly ugly but at least gets rid of the magic
method call:

>>> class Foo(object):
...  @property
...  def foo(self):
...   return 'foo'
...
>>> f = Foo()
>>> Foo.foo.fget(f)
'foo'


Michael



> Michael
>
>
>>
>> _______________________________________________
>> Python-ideas mailing list
>> Python-ideas at python.org
>> http://mail.python.org/mailman/listinfo/python-ideas
>>
>
>
>
> --
> http://www.voidspace.org.uk
>
>
>


-- 
http://www.voidspace.org.uk

From dangyogi at gmail.com  Mon Jun  7 16:37:25 2010
From: dangyogi at gmail.com (Bruce Frederiksen)
Date: Mon, 7 Jun 2010 10:37:25 -0400
Subject: [Python-ideas] stdlib upgrades
In-Reply-To: <4C0C92AE.5030105@ronadam.com>
References: <AANLkTin3FPrbml8TjGvwKr3EQw1LpspcuqtB3aXxq9M-@mail.gmail.com>
	<AANLkTikJ68fG0_om6Y6NYZ5SomnUrdkP_Vqpu8laeMPR@mail.gmail.com>
	<4C0C92AE.5030105@ronadam.com>
Message-ID: <AANLkTim1W9EMv3ZJqBt3I4GsSxq-VA1xyMf3lYTs302X@mail.gmail.com>

Or perhaps:

  from experimental import new_module

This is kind of a guarantee that the interface will change; since at some
point, if new_module is "calcified", this will have to be changed to just:

  import new_module

For experimental language features, maybe:

  from __experimental__ import new_feature

This makes it clear that new_feature may change (perhaps even not be
adopted?), vs the from __future__ semantics.

Is it too complicated to try to differentiate between the decision of
whether some capability will be provided or not, vs ironing out the API for
that capability?

For example,

  from experimental import new_capability

means that there is no commitment for new_capability at all -- it may simply
be dropped entirely.  The danger of using this is that new_capability may
simply disappear completely with no replacement.

While,

  from proposed import new_capability

represents a commitment that new_capability will be provided at some point,
but the API will likely change.  Here the danger of using it is that you
will likely have to change your program to conform to a new API.

A capability might start as "experimental", and if the value of it is
demonstrated, move to "proposed" to work out the details before
mainstreaming it.

-Bruce

On Mon, Jun 7, 2010 at 2:33 AM, Ron Adam <rrr at ronadam.com> wrote:

>
> On 06/01/2010 08:22 PM, Brett Cannon wrote:
>
>  I can only see two scenarios that might be considered acceptable to
>> address these issues.
>>
>> One is that when new modules are accepted into the stdlib they are
>> flagged with an ExperimentalWarning so that people know that no
>> backwards-compatibility promises have been made yet. That gets the
>> module more exposure and gets python-dev real-world feedback to fix
>> issues before the module calcifies into a strong
>> backwards-compatibility. With that experience more proper decisions
>> can be made as to how to change things (e.g. the logging module's
>> default timestamp including microseconds which strptime cannot parse).
>>
>
> Would it be possible to have a future_lib that gets enabled with something
> like...
>
>   from __future__ import future_lib
>
> These *new* library modules and packages won't be visible by default. Maybe
> they stay there until the next major version or possibly some set period of
> time.
>
> Ron
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> http://mail.python.org/mailman/listinfo/python-ideas
>

From guido at python.org  Mon Jun  7 17:33:20 2010
From: guido at python.org (Guido van Rossum)
Date: Mon, 7 Jun 2010 08:33:20 -0700
Subject: [Python-ideas] Callable properties
In-Reply-To: <AANLkTil3wXz7ZcSNi97ezdGc1lHZ4GbEl80PF6m34cq3@mail.gmail.com>
References: <AANLkTiknIr4bhNlXcx2KdcrRXKE9OVUpRGaBXJxNm5_d@mail.gmail.com> 
	<6DA25C56-56E8-4574-9E05-EC9C221B367D@masklinn.net>
	<AANLkTin7gB0e_VAcvwV93wlaaQYK3v1jzuCPer3DhpVe@mail.gmail.com> 
	<AANLkTil3wXz7ZcSNi97ezdGc1lHZ4GbEl80PF6m34cq3@mail.gmail.com>
Message-ID: <AANLkTik_MVQh6H5ovt7PmQilrEldOqlZL9HX0eD580Zh@mail.gmail.com>

Not sure I follow all of this, but in general overloading of __call__ should
be used rarely -- it's too easy for code to become unreadable otherwise.

--Guido

On Mon, Jun 7, 2010 at 6:40 AM, Michael Foord <fuzzyman at voidspace.org.uk> wrote:

>
>
> On 7 June 2010 12:34, Michael Foord <fuzzyman at voidspace.org.uk> wrote:
>
>>
>> [snip...]
>> I think you misunderstood, he was suggesting making the property
>> descriptor instances callable.
>>
>> Not a bad idea, but as a change to a builtin it would be covered by the
>> language moratorium. Easy to do in a subclass of property though.
>>
>> At the moment you do the following, which is a bit ugly:
>>
>> >>> class Foo(object):
>> ...  @property
>> ...  def foo(self):
>> ...   return 'foo'
>> ...
>> >>> f = Foo()
>> >>> Foo.foo.__get__(f)
>> 'foo'
>>
>> The reasons for wanting to do this are the same as the reasons you would
>> call an unbound method.
>>
>>
> Or this which is still slightly ugly but at least gets rid of the magic
> method call:
>
>
> >>> class Foo(object):
> ...  @property
> ...  def foo(self):
> ...   return 'foo'
> ...
> >>> f = Foo()
> >>> Foo.foo.fget(f)
> 'foo'
>
>
> Michael
>
>
>
>> Michael
>>
>>
>>>
>>> _______________________________________________
>>> Python-ideas mailing list
>>> Python-ideas at python.org
>>> http://mail.python.org/mailman/listinfo/python-ideas
>>>
>>
>>
>>
>> --
>> http://www.voidspace.org.uk
>>
>>
>>
>
>
> --
> http://www.voidspace.org.uk
>
>
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> http://mail.python.org/mailman/listinfo/python-ideas
>
>


-- 
--Guido van Rossum (python.org/~guido)

From ianb at colorstudy.com  Mon Jun  7 17:35:46 2010
From: ianb at colorstudy.com (Ian Bicking)
Date: Mon, 7 Jun 2010 10:35:46 -0500
Subject: [Python-ideas] stdlib upgrades
In-Reply-To: <4C0C92AE.5030105@ronadam.com>
References: <AANLkTin3FPrbml8TjGvwKr3EQw1LpspcuqtB3aXxq9M-@mail.gmail.com> 
	<AANLkTikJ68fG0_om6Y6NYZ5SomnUrdkP_Vqpu8laeMPR@mail.gmail.com> 
	<4C0C92AE.5030105@ronadam.com>
Message-ID: <AANLkTilUajJ_8X9sit4alK6nf8XNB9jcria8EE_WxalB@mail.gmail.com>

On Mon, Jun 7, 2010 at 1:33 AM, Ron Adam <rrr at ronadam.com> wrote:

>  On 06/01/2010 08:22 PM, Brett Cannon wrote:
>
>> I can only see two scenarios that might be considered acceptable to
>> address these issues.
>>
>> One is that when new modules are accepted into the stdlib they are
>> flagged with an ExperimentalWarning so that people know that no
>> backwards-compatibility promises have been made yet. That gets the
>> module more exposure and gets python-dev real-world feedback to fix
>> issues before the module calcifies into a strong
>> backwards-compatibility promise. With that experience better decisions
>> can be made as to how to change things (e.g. the logging module's
>> default timestamp including microseconds which strptime cannot parse).
>>
>
> Would it be possible to have a future_lib that gets enabled with something
> like...
>
>   from __future__ import future_lib
>
> These *new* library modules and packages won't be visible by default. Maybe
> they stay there until the next major version or possibly some set period of
> time.
>

The only place where any of this seems even slightly useful would be a
library closely associated with Python itself, e.g., a new ast module or
something with imports.  Everything else should be developed as an external
installable library.  At least things that are importable (str.partition for
instance isn't something you import).

-- 
Ian Bicking  |  http://blog.ianbicking.org

From python at mrabarnett.plus.com  Mon Jun  7 17:41:17 2010
From: python at mrabarnett.plus.com (MRAB)
Date: Mon, 07 Jun 2010 16:41:17 +0100
Subject: [Python-ideas] stdlib upgrades
In-Reply-To: <AANLkTim1W9EMv3ZJqBt3I4GsSxq-VA1xyMf3lYTs302X@mail.gmail.com>
References: <AANLkTin3FPrbml8TjGvwKr3EQw1LpspcuqtB3aXxq9M-@mail.gmail.com>	<AANLkTikJ68fG0_om6Y6NYZ5SomnUrdkP_Vqpu8laeMPR@mail.gmail.com>	<4C0C92AE.5030105@ronadam.com>
	<AANLkTim1W9EMv3ZJqBt3I4GsSxq-VA1xyMf3lYTs302X@mail.gmail.com>
Message-ID: <4C0D131D.2060302@mrabarnett.plus.com>

Bruce Frederiksen wrote:
> Or perhaps:
> 
>   from experimental import new_module
> 
> This is kind of a guarantee that the interface will change; since at 
> some point, if new_module is "calcified", this will have to be changed 
> to just:
> 
>   import new_module
> 
> For experimental language features, maybe:
> 
>   from __experimental__ import new_feature
> 
> This makes it clear that new_feature may change (perhaps even not be
> adopted?), vs the from __future__ semantics.
> 
> Is it too complicated to try to differentiate between the decision of 
> whether some capability will be provided or not, vs ironing out the API 
> for that capability?
> 
> For example,
> 
>   from experimental import new_capability
> 
> means that there is no commitment for new_capability at all -- it may 
> simply be dropped entirely.  The danger of using this is that 
> new_capability may simply disappear completely with no replacement.
> 
> While,
> 
>   from proposed import new_capability
> 
> represents a commitment that new_capability will be provided at some 
> point, but the API will likely change.  Here the danger of using it is 
> that you will likely have to change your program to conform to a new API.
> 
A proposal isn't a commitment. It can still be rejected.

> A capability might start as "experimental", and if the value of it is 
> demonstrated, move to "proposed" to work out the details before 
> mainstreaming it.
> 


From ianb at colorstudy.com  Mon Jun  7 17:46:26 2010
From: ianb at colorstudy.com (Ian Bicking)
Date: Mon, 7 Jun 2010 10:46:26 -0500
Subject: [Python-ideas] Callable properties
In-Reply-To: <AANLkTiknIr4bhNlXcx2KdcrRXKE9OVUpRGaBXJxNm5_d@mail.gmail.com>
References: <AANLkTiknIr4bhNlXcx2KdcrRXKE9OVUpRGaBXJxNm5_d@mail.gmail.com>
Message-ID: <AANLkTik_DHtaXbVvawo7y-Y7qO2GrAfrWr3obkiFo9OB@mail.gmail.com>

On Mon, Jun 7, 2010 at 5:51 AM, George Sakkis <george.sakkis at gmail.com> wrote:

> I'm wondering if there is any downside in making properties callable:
>
> class callableproperty(property):
>    def __call__(self, obj):
>        return self.fget(obj)
>
> class Foo(object):
>    @property
>    def bar(self):
>        return self
>
>    @callableproperty
>    def baz(self):
>        return self
>
>
> >>> foo = Foo()
> >>> foo.baz is Foo.baz(foo)
> True
> >>> foo.bar is Foo.bar(foo)
> ...
> TypeError: 'property' object is not callable
>
>
> As for the motivation, having callable properties would make it easier
> to stack them with other decorators that typically expect callables.
> Am I missing something ?
>

I find stacking descriptors to be easier than it might at first appear (and
making decorators descriptors is also advantageous).  Treating properties
like Yet Another Descriptor helps here.  As it is you could use
Foo.bar.__get__(foo), or generally Foo.bar.__get__ as a callable.
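[For example, with the Foo class from the original post, the descriptor
protocol already supplies a callable without any property subclass:]

```python
class Foo(object):
    @property
    def bar(self):
        return self

foo = Foo()

# property.__get__ is the callable the descriptor protocol gives us
assert Foo.bar.__get__(foo) is foo

# and the bound __get__ can be passed wherever a callable is expected
get_bar = Foo.bar.__get__
assert get_bar(foo) is foo
```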

-- 
Ian Bicking  |  http://blog.ianbicking.org

From ianb at colorstudy.com  Mon Jun  7 18:35:16 2010
From: ianb at colorstudy.com (Ian Bicking)
Date: Mon, 7 Jun 2010 11:35:16 -0500
Subject: [Python-ideas] Moving development out of the standard library
Message-ID: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>

OK... after a bit of off-list discussion I realize what I am really
concerned about with respect to the standard library wasn't well expressed.
So here's my real assertion:

  There is no reason any new library or functionality should be tied to a
Python release.

Outside of a few exceptions (like ast or importlib) functionality in the
standard library seldom relies on anything in a particular Python release;
e.g., code might use conditional expressions, but it never *has* to use
conditional expressions.  The standard library that most people know and
love is really the least common denominator of the Pythons that person has to
handle; for someone writing an open source library that's probably 2.5, for
someone using Zope 2 that's traditionally been 2.4, and if you have a
controlled environment (e.g., internal development) maybe you can do 2.6.

I think there is a general consensus that functionality should not be tied
to a Python release, but the results are ad hoc.  That is, truly useful
libraries that are added to the stdlib are backported, or more often were
originally maintained as a library with backward compatibility before being
integrated into the standard library.  I think we should have a more
formalized process about how this functionality is maintained, including a
process that considers the years of ongoing maintenance and improvement that
should happen on these libraries.  (Most specifically without serious
thought about this development process I am pessimistic about an orderly or
positive inclusion of distutils2 in packaging workflows.)

Another alternative is to simply not make improvements to the standard
library beyond a very well-defined set of appropriate functionality.  This
would be much closer to the status quo.  Defining what categories would be
"appropriate" would be contentious, I am sure, but would sharply focus
future discussions.

-- 
Ian Bicking  |  http://blog.ianbicking.org

From solipsis at pitrou.net  Mon Jun  7 20:14:34 2010
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 7 Jun 2010 20:14:34 +0200
Subject: [Python-ideas] Moving development out of the standard library
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>
Message-ID: <20100607201434.66d9bdbd@pitrou.net>

On Mon, 7 Jun 2010 11:35:16 -0500
Ian Bicking <ianb at colorstudy.com> wrote:
> 
> I think there is a general consensus that functionality should not be tied
> to a Python release, but the results are ad hoc.

I disagree with this. Tying new functionality to a Python release
vastly simplifies dependency management (instead of having to track the
versions of N external libraries, sometimes with inter-dependencies).

> (Most specifically without serious
> thought about this development process I am pessimistic about an
> orderly or positive inclusion of distutils2 in packaging workflows.)

Without any discussion of specifics, I find it hard to understand what
you are concerned about.





From ianb at colorstudy.com  Mon Jun  7 20:33:48 2010
From: ianb at colorstudy.com (Ian Bicking)
Date: Mon, 7 Jun 2010 13:33:48 -0500
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <20100607201434.66d9bdbd@pitrou.net>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com> 
	<20100607201434.66d9bdbd@pitrou.net>
Message-ID: <AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com>

On Mon, Jun 7, 2010 at 1:14 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:

> On Mon, 7 Jun 2010 11:35:16 -0500
> Ian Bicking <ianb at colorstudy.com> wrote:
> >
> > I think there is a general consensus that functionality should not be
> tied
> > to a Python release, but the results are ad hoc.
>
> I disagree with this. Tying new functionality to a Python release
> vastly simplifies dependency management (instead of having to track the
> versions of N external libraries, sometimes with inter-dependencies).
>

I say there is consensus because as far as I know anything substantial has a
maintained version outside the standard library: argparse implicitly,
unittest via unittest2, ElementTree has always maintained a separate
existence, simplejson implicitly.

> (Most specifically without serious
> > thought about this development process I am pessimistic about an
> > orderly or positive inclusion of distutils2 in packaging workflows.)
>
> Without any discussion of specifics, I find it hard to understand what
> you are concerned about.
>

1. How will distutils2 updates be made available between Python releases?
2. How will distutils2 features be made available in older Python releases?
3. How will old standard library releases of distutils2 be managed?  E.g.,
if pip starts using distutils2, at the time distutils2 ends up in the
standard library there is a version of distutils2 I cannot simply reject as
incompatible, meaning I can't make use of any new features or bug fixes.

-- 
Ian Bicking  |  http://blog.ianbicking.org

From brett at python.org  Mon Jun  7 20:42:34 2010
From: brett at python.org (Brett Cannon)
Date: Mon, 7 Jun 2010 11:42:34 -0700
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>
Message-ID: <AANLkTim9KX-Jfa1kE3Chzm7vZRl9XjL28TTUkjLmtcBG@mail.gmail.com>

On Mon, Jun 7, 2010 at 09:35, Ian Bicking <ianb at colorstudy.com> wrote:
[SNIP]
> Another alternative is to simply not make improvements to the standard
> library beyond a very well-defined set of appropriate functionality.  This
> would be much closer to the status quo.  Defining what categories would be
> "appropriate" would be contentious, I am sure, but would sharply focus
> future discussions.

I personally would love to see this happen. Having a more clear focus
for the stdlib would be a good thing in my opinion since as of right
now it's just what the group thinks it is at that point.


From ianb at colorstudy.com  Mon Jun  7 20:50:49 2010
From: ianb at colorstudy.com (Ian Bicking)
Date: Mon, 7 Jun 2010 13:50:49 -0500
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <AANLkTim9KX-Jfa1kE3Chzm7vZRl9XjL28TTUkjLmtcBG@mail.gmail.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com> 
	<AANLkTim9KX-Jfa1kE3Chzm7vZRl9XjL28TTUkjLmtcBG@mail.gmail.com>
Message-ID: <AANLkTikwuu0vY-lGkods597dguRP4LM5hgvRTnurETC8@mail.gmail.com>

On Mon, Jun 7, 2010 at 1:42 PM, Brett Cannon <brett at python.org> wrote:

> On Mon, Jun 7, 2010 at 09:35, Ian Bicking <ianb at colorstudy.com> wrote:
> [SNIP]
> > Another alternative is to simply not make improvements to the standard
> > library beyond a very well-defined set of appropriate functionality.
> This
> > would be much closer to the status quo.  Defining what categories would
> be
> > "appropriate" would be contentious, I am sure, but would sharply focus
> > future discussions.
>
> I personally would love to see this happen. Having a more clear focus
> for the stdlib would be a good thing in my opinion since as of right
> now it's just what the group thinks it is at that point.
>

Indeed, each person projects different ideas and motivations onto the
standard library and I don't see a great deal of shared understanding about
what it is.

-- 
Ian Bicking  |  http://blog.ianbicking.org

From solipsis at pitrou.net  Mon Jun  7 20:52:07 2010
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 7 Jun 2010 20:52:07 +0200
Subject: [Python-ideas] Moving development out of the standard library
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>
	<20100607201434.66d9bdbd@pitrou.net>
	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com>
Message-ID: <20100607205207.5532b939@pitrou.net>

On Mon, 7 Jun 2010 13:33:48 -0500
Ian Bicking <ianb at colorstudy.com> wrote:
> On Mon, Jun 7, 2010 at 1:14 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> 
> > On Mon, 7 Jun 2010 11:35:16 -0500
> > Ian Bicking <ianb at colorstudy.com> wrote:
> > >
> > > I think there is a general consensus that functionality should not be
> > tied
> > > to a Python release, but the results are ad hoc.
> >
> > I disagree with this. Tying new functionality to a Python release
> > vastly simplifies dependency management (instead of having to track the
> > versions of N external libraries, sometimes with inter-dependencies).
> >
> 
> I say there is consensus because as far as I know anything substantial has a
> maintained version outside the standard library; argparse is implicitly,
> unittest is unittest2, ElementTree always has maintained a separate
> existence, simplejson implicitly.

"Anything substantial" is more than an exaggeration. The modules you are
mentioning are exceptions, two of which may even be temporary (argparse
and unittest2). Most stdlib modules don't have external releases, and
many of them are still "substantial".

> 1. How will distutils2 updates be made available between Python releases?
> 2. How will distutils2 features be made available in older Python releases?

Why are you expecting any of these to happen? I don't know what Tarek
intends to do in that respect, but he certainly doesn't have any moral
obligation to do external releases.

Regards

Antoine.




From ziade.tarek at gmail.com  Mon Jun  7 21:04:13 2010
From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=)
Date: Mon, 7 Jun 2010 21:04:13 +0200
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <AANLkTikwuu0vY-lGkods597dguRP4LM5hgvRTnurETC8@mail.gmail.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>
	<AANLkTim9KX-Jfa1kE3Chzm7vZRl9XjL28TTUkjLmtcBG@mail.gmail.com>
	<AANLkTikwuu0vY-lGkods597dguRP4LM5hgvRTnurETC8@mail.gmail.com>
Message-ID: <AANLkTiktg_5HW_jmVDsJP_e88lHGq7IkocAzURRKHdjm@mail.gmail.com>

On Mon, Jun 7, 2010 at 8:50 PM, Ian Bicking <ianb at colorstudy.com> wrote:
> On Mon, Jun 7, 2010 at 1:42 PM, Brett Cannon <brett at python.org> wrote:
>>
>> On Mon, Jun 7, 2010 at 09:35, Ian Bicking <ianb at colorstudy.com> wrote:
>> [SNIP]
>> > Another alternative is to simply not make improvements to the standard
>> > library beyond a very well-defined set of appropriate functionality.
>> > This
>> > would be much closer to the status quo.  Defining what categories would
>> > be
>> > "appropriate" would be contentious, I am sure, but would sharply focus
>> > future discussions.
>>
>> I personally would love to see this happen. Having a more clear focus
>> for the stdlib would be a good thing in my opinion since as of right
>> now it's just what the group thinks it is at that point.
>
> Indeed, each person projects different ideas and motivations onto the
> standard library and I don't see a great deal of shared understanding about
> what it is.

There's one thing that is clear though: Distutils is in the standard
library, and it would be nonsense not to have Distutils2 included to
replace it.

While I understand your motivations not to see Pip included in the
standard library, I will strongly object to the exclusion of Distutils2
from the standard library.

I have agreed to temporarily develop Distutils2 outside the stdlib with
the sole condition that it will be included back as soon as it is ready,
because we badly need such a system in a vanilla Python.

Distutils2 is the place where we are implementing the PEPs that were
accepted lately, and its inclusion in the standard library will provide
a working packaging system for Python (that is, *batteries included*)
and a blessed playground for third-party packaging tools.

Regards
Tarek

-- 
Tarek Ziadé | http://ziade.org


From george.sakkis at gmail.com  Mon Jun  7 21:05:00 2010
From: george.sakkis at gmail.com (George Sakkis)
Date: Mon, 7 Jun 2010 21:05:00 +0200
Subject: [Python-ideas] Callable properties
In-Reply-To: <AANLkTik_DHtaXbVvawo7y-Y7qO2GrAfrWr3obkiFo9OB@mail.gmail.com>
References: <AANLkTiknIr4bhNlXcx2KdcrRXKE9OVUpRGaBXJxNm5_d@mail.gmail.com>
	<AANLkTik_DHtaXbVvawo7y-Y7qO2GrAfrWr3obkiFo9OB@mail.gmail.com>
Message-ID: <AANLkTilznQcQ-wKKTm0BIh7dhEtE_5wPb9-E7gVXjrO4@mail.gmail.com>

On Mon, Jun 7, 2010 at 5:46 PM, Ian Bicking <ianb at colorstudy.com> wrote:
> On Mon, Jun 7, 2010 at 5:51 AM, George Sakkis <george.sakkis at gmail.com>
> wrote:
>>
>> I'm wondering if there is any downside in making properties callable:
>>
>> class callableproperty(property):
>>     def __call__(self, obj):
>>         return self.fget(obj)
>>
>> class Foo(object):
>>     @property
>>     def bar(self):
>>         return self
>>
>>     @callableproperty
>>     def baz(self):
>>         return self
>>
>>
>> >>> foo = Foo()
>> >>> foo.baz is Foo.baz(foo)
>> True
>> >>> foo.bar is Foo.bar(foo)
>> ...
>> TypeError: 'property' object is not callable
>>
>>
>> As for the motivation, having callable properties would make it easier
>> to stack them with other decorators that typically expect callables.
>> Am I missing something ?
>
> I find stacking descriptors to be easier than it might at first appear (and
> making decorators descriptors is also advantageous).  Treating properties
> like Yet Another Descriptor helps here.  As it is you could use
> Foo.bar.__get__(foo), or generally Foo.bar.__get__ as a callable.

The problem though is that most existing decorators expect their input
to be a function or a callable at best, not a descriptor; they have to
do something like ``func = getattr(func, '__get__', func)`` to cover
all cases. But regardless, I'm not proposing to make all descriptors
callable (it probably doesn't make sense in general), just properties.
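[A sketch of one way a decorator can normalize its input; note that the
getattr-on-__get__ idiom above is subtle because plain functions also
define __get__, so an explicit isinstance check can be clearer. The
as_callable name is illustrative, not an existing API:]

```python
def as_callable(obj):
    # property instances are not callable, but expose their getter as
    # fget; plain callables pass through unchanged.
    if isinstance(obj, property):
        return obj.fget
    return obj

class Foo(object):
    @property
    def bar(self):
        return self

foo = Foo()
assert as_callable(Foo.bar)(foo) is foo        # property -> its getter
assert as_callable(Foo.bar.fget)(foo) is foo   # function unchanged
```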

George


From ianb at colorstudy.com  Mon Jun  7 21:20:41 2010
From: ianb at colorstudy.com (Ian Bicking)
Date: Mon, 7 Jun 2010 14:20:41 -0500
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <20100607205207.5532b939@pitrou.net>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com> 
	<20100607201434.66d9bdbd@pitrou.net>
	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com> 
	<20100607205207.5532b939@pitrou.net>
Message-ID: <AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com>

On Mon, Jun 7, 2010 at 1:52 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:

>  > I say there is consensus because as far as I know anything substantial
> has a
> > maintained version outside the standard library; argparse is implicitly,
> > unittest is unittest2, ElementTree always has maintained a separate
> > existence, simplejson implicitly.
>
>
> "Anything substantial" is more than exagerated. The modules you are
> mentioning are exceptions, two of which may even be temporary (argparse
> and unittest2). Most sdtlib modules don't have external releases, and
> many of them are still "substantial".
>

Most other modules are very old.  In cases where it hasn't happened, e.g.,
doctest in 2.4, I at least personally have had to backport that module on my
own.


> > 1. How will distutils2 updates be made available between Python releases?
> > 2. How will distutils2 features be made available in older Python
> releases?
>
> Why are you expecting any of these to happen? I don't know what Tarek
> intends to do in that respect, but he certainly doesn't have any moral
> obligation to do external releases.
>

distutils2 won't be in 2.7 at least, and any packaging system not available
for Python 2 would be irrelevant.

-- 
Ian Bicking  |  http://blog.ianbicking.org

From ziade.tarek at gmail.com  Mon Jun  7 21:36:03 2010
From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=)
Date: Mon, 7 Jun 2010 21:36:03 +0200
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>
	<20100607201434.66d9bdbd@pitrou.net>
	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com>
	<20100607205207.5532b939@pitrou.net>
	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com>
Message-ID: <AANLkTimO5N5xyyXXr_xmvI-IFJx2Tmcwg2NScuoz5l8K@mail.gmail.com>

On Mon, Jun 7, 2010 at 9:20 PM, Ian Bicking <ianb at colorstudy.com> wrote:
[..]
>> > 1. How will distutils2 updates be made available between Python
>> > releases?
>> > 2. How will distutils2 features be made available in older Python
>> > releases?
>>
>> Why are you expecting any of these to happen? I don't know what Tarek
>> intends to do in that respect, but he certainly doesn't have any moral
>> obligation to do external releases.
>
> distutils2 won't be in 2.7 at least, and any packaging system not available
> for Python 2 would be irrelevant.

Distutils2 will be provided for Python 2.4 to 3.x with frequent releases.

Then, once it's added in 3.2 (maybe 3.3 if not ready by then), its
release cycle will be driven by Python's, with backports released at
the same pace for Python versions that didn't have it in the stdlib.

Regards
Tarek

-- 
Tarek Ziadé | http://ziade.org


From p.f.moore at gmail.com  Mon Jun  7 21:42:01 2010
From: p.f.moore at gmail.com (Paul Moore)
Date: Mon, 7 Jun 2010 20:42:01 +0100
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>
	<20100607201434.66d9bdbd@pitrou.net>
	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com>
	<20100607205207.5532b939@pitrou.net>
	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com>
Message-ID: <AANLkTiltc-SdQDZNUNom9sbNn45yPANmjIoiQdsI0U_S@mail.gmail.com>

On 7 June 2010 20:20, Ian Bicking <ianb at colorstudy.com> wrote:
> distutils2 won't be in 2.7 at least, and any packaging system not available
> for Python 2 would be irrelevant.

I find it hard to interpret this in any way that doesn't pretty much
imply "Python 3 is irrelevant". While I disagree with that, I suspect
it isn't what you mean. So can you clarify?

Paul.


From ianb at colorstudy.com  Mon Jun  7 21:39:50 2010
From: ianb at colorstudy.com (Ian Bicking)
Date: Mon, 7 Jun 2010 14:39:50 -0500
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <AANLkTimO5N5xyyXXr_xmvI-IFJx2Tmcwg2NScuoz5l8K@mail.gmail.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com> 
	<20100607201434.66d9bdbd@pitrou.net>
	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com> 
	<20100607205207.5532b939@pitrou.net>
	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com> 
	<AANLkTimO5N5xyyXXr_xmvI-IFJx2Tmcwg2NScuoz5l8K@mail.gmail.com>
Message-ID: <AANLkTinWEflE9EL-CYD41h2yfPT4ocndTEyeSm3fGvuD@mail.gmail.com>

On Mon, Jun 7, 2010 at 2:36 PM, Tarek Ziadé <ziade.tarek at gmail.com> wrote:

> On Mon, Jun 7, 2010 at 9:20 PM, Ian Bicking <ianb at colorstudy.com> wrote:
> [..]
> >> > 1. How will distutils2 updates be made available between Python
> >> > releases?
> >> > 2. How will distutils2 features be made available in older Python
> >> > releases?
> >>
> >> Why are you expecting any of these to happen? I don't know what Tarek
> >> intends to do in that respect, but he certainly doesn't have any moral
> >> obligation to do external releases.
> >
> > distutils2 won't be in 2.7 at least, and any packaging system not
> available
> > for Python 2 would be irrelevant.
>
> Distutils2 will be provided for Python 2.4 to 3.x with frequent releases.
>
> Then, once it's added in 3.2 (maybe 3.3 if not ready by then), its
> release cycle will be driven by
> Python's one, with backports released at the same pace for Python
> versions that didn't have it in
> the stdlib.
>

So let's say distutils2 gets version parity with Python, so distutils2 3.3
is released with Python 3.3.  Then distutils2 3.4 is released with Python
3.4, and is backported to all previous versions of Python... except for
Python 3.3, which will always have distutils2 3.3?


-- 
Ian Bicking  |  http://blog.ianbicking.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20100607/9cd035e4/attachment.html>

From ianb at colorstudy.com  Mon Jun  7 21:49:38 2010
From: ianb at colorstudy.com (Ian Bicking)
Date: Mon, 7 Jun 2010 14:49:38 -0500
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <AANLkTiltc-SdQDZNUNom9sbNn45yPANmjIoiQdsI0U_S@mail.gmail.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com> 
	<20100607201434.66d9bdbd@pitrou.net>
	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com> 
	<20100607205207.5532b939@pitrou.net>
	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com> 
	<AANLkTiltc-SdQDZNUNom9sbNn45yPANmjIoiQdsI0U_S@mail.gmail.com>
Message-ID: <AANLkTimxhh5iVCZEeZRyvI2UwbcnFmNXBeT4JsJER3Eb@mail.gmail.com>

On Mon, Jun 7, 2010 at 2:42 PM, Paul Moore <p.f.moore at gmail.com> wrote:

> On 7 June 2010 20:20, Ian Bicking <ianb at colorstudy.com> wrote:
> > distutils2 won't be in 2.7 at least, and any packaging system not
> available
> > for Python 2 would be irrelevant.
>
> I find it hard to interpret this in any way that doesn't pretty much
> imply "Python 3 is irrelevant". While I disagree with that, I suspect
> it isn't what you mean. So can you clarify?
>

It's kind of a moot point as Tarek isn't planning to only support Python 3.
But developing packaging libraries for *only* Python 3 would mean that an
alternate ecosystem would have to continue to support Python 2, and that
alternate ecosystem would produce superior tools because that's where all
the users actually are.  For the next few years at least Python 3 needs to
ride on Python 2's coat tails if it's going to keep up.

-- 
Ian Bicking  |  http://blog.ianbicking.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20100607/1567ef42/attachment.html>

From solipsis at pitrou.net  Mon Jun  7 21:56:22 2010
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 7 Jun 2010 21:56:22 +0200
Subject: [Python-ideas] Moving development out of the standard library
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>
	<20100607201434.66d9bdbd@pitrou.net>
	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com>
	<20100607205207.5532b939@pitrou.net>
	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com>
Message-ID: <20100607215622.56961a58@pitrou.net>

On Mon, 7 Jun 2010 14:20:41 -0500
Ian Bicking <ianb at colorstudy.com> wrote:
> On Mon, Jun 7, 2010 at 1:52 PM, Antoine Pitrou <solipsis-xNDA5Wrcr86sTnJN9+BGXg at public.gmane.org> wrote:
> 
> >  > I say there is consensus because as far as I know anything substantial
> > has a
> > > maintained version outside the standard library; argparse is implicitly,
> > > unittest is unittest2, ElementTree always has maintained a separate
> > > existence, simplejson implicitly.
> >
> >
> > "Anything substantial" is more than exagerated. The modules you are
> > mentioning are exceptions, two of which may even be temporary (argparse
> > and unittest2). Most sdtlib modules don't have external releases, and
> > many of them are still "substantial".
> >
> 
> Most other modules are very old.

Well, even if that's true (I haven't checked and I guess we wouldn't
agree on the meaning of "old"), so what?
I guess what I'm asking is: what is your line of reasoning?
You started with a contention that:

"There is no reason any new library or functionality should be tied to a
Python release"

and, in my humble opinion, you failed to demonstrate that. In
particular, you haven't replied to my argument that it
dramatically eases dependency management.

> distutils2 won't be in 2.7 at least, and any packaging system not available
> for Python 2 would be irrelevant.

That's your opinion and I guess some people would disagree. Besides,
decreeing that distutils2 be kept out of the stdlib won't make its
code magically compatible with Python 2.x.

Regards

Antoine.




From ziade.tarek at gmail.com  Mon Jun  7 21:57:26 2010
From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=)
Date: Mon, 7 Jun 2010 21:57:26 +0200
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <AANLkTinWEflE9EL-CYD41h2yfPT4ocndTEyeSm3fGvuD@mail.gmail.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>
	<20100607201434.66d9bdbd@pitrou.net>
	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com>
	<20100607205207.5532b939@pitrou.net>
	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com>
	<AANLkTimO5N5xyyXXr_xmvI-IFJx2Tmcwg2NScuoz5l8K@mail.gmail.com>
	<AANLkTinWEflE9EL-CYD41h2yfPT4ocndTEyeSm3fGvuD@mail.gmail.com>
Message-ID: <AANLkTikm-KJNt37PuJFNJiNxnIGw0XlsofCnB2ydZGBn@mail.gmail.com>

On Mon, Jun 7, 2010 at 9:39 PM, Ian Bicking <ianb at colorstudy.com> wrote:
> On Mon, Jun 7, 2010 at 2:36 PM, Tarek Ziadé <ziade.tarek at gmail.com> wrote:
>>
>> On Mon, Jun 7, 2010 at 9:20 PM, Ian Bicking <ianb at colorstudy.com> wrote:
>> [..]
>> >> > 1. How will distutils2 updates be made available between Python
>> >> > releases?
>> >> > 2. How will distutils2 features be made available in older Python
>> >> > releases?
>> >>
>> >> Why are you expecting any of these to happen? I don't know what Tarek
>> >> intends to do in that respect, but he certainly doesn't have any moral
>> >> obligation to do external releases.
>> >
>> > distutils2 won't be in 2.7 at least, and any packaging system not
>> > available
>> > for Python 2 would be irrelevant.
>>
>> Distutils2 will be provided for Python 2.4 to 3.x with frequent releases.
>>
>> Then, once it's added in 3.2 (maybe 3.3 if not ready by then), its
>> release cycle will be driven by
>> Python's one, with backports released at the same pace for Python
>> versions that didn't have it in
>> the stdlib.
>
> So let's say distutils2 gets version parity with Python,

Yes; in other words, it's part of the stdlib.

> so distutils2 3.3
> is released with Python 3.3.  Then distutils2 3.4 is released with Python
> 3.4, and is backported to all previous versions of Python... except for
> Python 3.3, which will always have distutils2 3.3?

Distutils2 will behave exactly like other packages in the standard
library. That is:

- development goes on in trunk
- bug fixes are backported to older Python releases that have this
package present
- some features or small refactorings are also backported when it makes sense

So Python 3.3.2 will have Distutils2 3.3.2, but also os.path 3.3.2, and
shutil 3.3.2.
Likewise, Python 3.4 will have its own versions.

Now for older versions of Python, I will provide a backport at PyPI,
so people can use
it under Python 2.x. This backport will probably be made from the
trunk so the 2.x line has the latest
code. IOW the latest 2.7 release might be more advanced than the one
provided in 3.3 for example,
but I don't see this as a problem.

Tarek

-- 
Tarek Ziadé | http://ziade.org


From ianb at colorstudy.com  Mon Jun  7 22:00:17 2010
From: ianb at colorstudy.com (Ian Bicking)
Date: Mon, 7 Jun 2010 15:00:17 -0500
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <AANLkTikm-KJNt37PuJFNJiNxnIGw0XlsofCnB2ydZGBn@mail.gmail.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com> 
	<20100607201434.66d9bdbd@pitrou.net>
	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com> 
	<20100607205207.5532b939@pitrou.net>
	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com> 
	<AANLkTimO5N5xyyXXr_xmvI-IFJx2Tmcwg2NScuoz5l8K@mail.gmail.com> 
	<AANLkTinWEflE9EL-CYD41h2yfPT4ocndTEyeSm3fGvuD@mail.gmail.com> 
	<AANLkTikm-KJNt37PuJFNJiNxnIGw0XlsofCnB2ydZGBn@mail.gmail.com>
Message-ID: <AANLkTinVyFfkTPFSkeiNMHWD4OM-jQKtXGF6b4J4G8Ri@mail.gmail.com>

On Mon, Jun 7, 2010 at 2:57 PM, Tarek Ziadé <ziade.tarek at gmail.com> wrote:

> Now for older versions of Python, I will provide a backport at PyPI,
> so people can use
> it under Python 2.x. This backport will probably be made with the
> trunk so the 2.x line has the latest
> code. IOW the latest 2.7 release might be more advanced than the one
> provided in 3.3 for example,
> but I don't see this as a problem.
>

It means that, at least for pip, distutils2 3.3 would be effectively the
last version.  If there are important bugs we'll have to work around them.
If there are added features we'll have to ignore them.

-- 
Ian Bicking  |  http://blog.ianbicking.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20100607/eb2988b5/attachment.html>

From ianb at colorstudy.com  Mon Jun  7 22:09:36 2010
From: ianb at colorstudy.com (Ian Bicking)
Date: Mon, 7 Jun 2010 15:09:36 -0500
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <20100607215622.56961a58@pitrou.net>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com> 
	<20100607201434.66d9bdbd@pitrou.net>
	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com> 
	<20100607205207.5532b939@pitrou.net>
	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com> 
	<20100607215622.56961a58@pitrou.net>
Message-ID: <AANLkTim0wznVxRsgURSbPQPAcIHzYTjyyJ44xyXbPPl_@mail.gmail.com>

On Mon, Jun 7, 2010 at 2:56 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:

> On Mon, 7 Jun 2010 14:20:41 -0500
> Ian Bicking <ianb at colorstudy.com> wrote:
> > On Mon, Jun 7, 2010 at 1:52 PM, Antoine Pitrou <
> solipsis-xNDA5Wrcr86sTnJN9+BGXg at public.gmane.org<solipsis-xNDA5Wrcr86sTnJN9%2BBGXg at public.gmane.org>>
> wrote:
> >
> > >  > I say there is consensus because as far as I know anything
> substantial
> > > has a
> > > > maintained version outside the standard library; argparse is
> implicitly,
> > > > unittest is unittest2, ElementTree always has maintained a separate
> > > > existence, simplejson implicitly.
> > >
> > >
> > > "Anything substantial" is more than exagerated. The modules you are
> > > mentioning are exceptions, two of which may even be temporary (argparse
> > > and unittest2). Most sdtlib modules don't have external releases, and
> > > many of them are still "substantial".
> > >
> >
> > Most other modules are very old.
>
> Well, even if that's true (I haven't checked and I guess we wouldn't
> agree on the meaning of "old"), so what?
> I guess what I'm asking is: what is your line of reasoning?
> You started with a contention that:
>
> "There is no reason any new library or functionality should be tied to a
> Python release"
>
> and, in my humble opinion, you failed to demonstrate that. In
> particular, you haven't replied to my argument that it
> dramatically eases dependency management.
>

It only eases dependency management in closed systems with a fixed Python
version.  If you support more than one Python version then you still have
dependency management, it is just tied to ad hoc workarounds when there are
compatibility problems, or ignoring new functionality.

The importance of "old" vs. "new" modules is that people tend to have a
lowest version of Python they support, as simply a hard stop.  This is
currently Python 2.5 for most people, 2.4 for some groups (and just a very
few stragglers with 2.3).  So long as you never use anything beyond what 2.5
provides then it's okay, which is most of the standard library.  I am not
aware of anything added since 2.5 that isn't backported or previously
available as a separate library (I'm sure there's *something*, just nothing
I can think of).

There is no clear policy about how backports are managed.  There's some
disfavor for backporting under the same name (PyXML being a primary source
of this disfavor), but implicitly argparse *will* be backported under the
same name, since it is already installed as argparse and that name matches
the stdlib name.  It's unclear what should happen when you install a backport in a
version of Python that already has the module.  E.g., if distutils2 is
distributed as distutils2 it would be able to override the standard library
when installed unless there was code specifically to disallow it.
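The same-name shadowing problem described above is why many backports go the other way around: code imports the externally released copy under its own name first and falls back to the stdlib. A minimal sketch of that dance, using the simplejson/json pair mentioned earlier in the thread:

```python
# Prefer the separately released simplejson (which may be newer than the
# stdlib copy) and fall back to the stdlib json module when it isn't
# installed.  The two expose the same core API, so callers don't care
# which one they got.
try:
    import simplejson as json
except ImportError:
    import json

print(json.dumps({"ok": True}))  # → {"ok": true}
```

Nothing here is distutils2-specific; the point is that the selection logic ends up repeated in every consumer rather than handled once by the packaging system.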

Also, we already have implicit dependency management and versioning, because
a few libraries in the standard library have .egg-info files installed.

-- 
Ian Bicking  |  http://blog.ianbicking.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20100607/fb6469f0/attachment.html>

From ncoghlan at gmail.com  Mon Jun  7 22:24:54 2010
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 08 Jun 2010 06:24:54 +1000
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>
Message-ID: <4C0D5596.7080606@gmail.com>

On 08/06/10 02:35, Ian Bicking wrote:
> OK... after a bit of off-list discussion I realize what I am really
> concerned about with respect to the standard library wasn't well
> expressed.  So here's my real assertion:
>
>    There is no reason any new library or functionality should be tied to
> a Python release.
>

Rather than rehash the point Antoine has already made regarding 
dependency management, I'll mention the other key benefit of standard 
library inclusion:

   Inclusion of a module or package in the standard library makes sense 
when the benefits of having "One Obvious Way To Do It" simplifies 
teaching of Python and development and maintenance of future Python code 
sufficiently to justify the slower rate of release associated with 
standard library inclusion.

I'm one of those that believes that the volume of all currently written 
Python code is a small subset of the Python code that will ever be 
written, hence it makes sense to "raise the bar" in improving the 
quality of the baseline toolset provided to developers. Additions like 
argparse and the new futures module in PEP 3148, as well as historical 
additions like itertools, collections, ElementTree, simplejson, etc all 
serve that purpose well.

However, I *also* like the pattern that has emerged of many standard 
library modules being kept backwards compatible with previous Python 
releases, and sometimes even being separately released on PyPI (or 
elsewhere). This approach allows the "one obvious way" to be extended 
back to earlier Python versions, since eventual standard library 
inclusion is a big point in a module or package's favour, even if a 
developer isn't currently using the latest Python release.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------


From ziade.tarek at gmail.com  Mon Jun  7 22:40:15 2010
From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=)
Date: Mon, 7 Jun 2010 22:40:15 +0200
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <AANLkTinVyFfkTPFSkeiNMHWD4OM-jQKtXGF6b4J4G8Ri@mail.gmail.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>
	<20100607201434.66d9bdbd@pitrou.net>
	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com>
	<20100607205207.5532b939@pitrou.net>
	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com>
	<AANLkTimO5N5xyyXXr_xmvI-IFJx2Tmcwg2NScuoz5l8K@mail.gmail.com>
	<AANLkTinWEflE9EL-CYD41h2yfPT4ocndTEyeSm3fGvuD@mail.gmail.com>
	<AANLkTikm-KJNt37PuJFNJiNxnIGw0XlsofCnB2ydZGBn@mail.gmail.com>
	<AANLkTinVyFfkTPFSkeiNMHWD4OM-jQKtXGF6b4J4G8Ri@mail.gmail.com>
Message-ID: <AANLkTilpOhcCbDaZ4p4OsZNmrFDyNhyZlbRcbfD5w20a@mail.gmail.com>

On Mon, Jun 7, 2010 at 10:00 PM, Ian Bicking <ianb at colorstudy.com> wrote:
> On Mon, Jun 7, 2010 at 2:57 PM, Tarek Ziadé <ziade.tarek at gmail.com> wrote:
>>
>> Now for older versions of Python, I will provide a backport at PyPI,
>> so people can use
>> it under Python 2.x. This backport will probably be made with the
>> trunk so the 2.x line has the latest
>> code. IOW the latest 2.7 release might be more advanced than the one
>> provided in 3.3 for example,
>> but I don't see this as a problem.
>
> It means that, at least for pip, distutils2 3.3 would be effectively the
> last version.

To make sure it's clear: The latest would be 3.4 here, with its
backport in 2.7 and an older version
in 3.3.

> If there are important bugs we'll have to work around them.
> If there are added features we'll have to ignore them.

Not for the bug fixes, because they will likely be backported to all
versions (3.3 and 2.7).

Now for new features, if pip uses the latest 2.x and the latest 3.x
versions, you will get them.
I am not sure why you would have to ignore them.  You would probably want to
use the new features when they are released, and still make your code
work with older versions.

This is not a new problem btw: if you want to support several versions
of Python, you have to
work around the differences.

Example: There's a big bug in tarfile in Python 2.4, and I had to
backport part of the 2.5 version for a while
in my 2.4 projects. That doesn't mean I don't want tarfile to be in
the stdlib.

Tarek


From ziade.tarek at gmail.com  Mon Jun  7 22:42:44 2010
From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=)
Date: Mon, 7 Jun 2010 22:42:44 +0200
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <AANLkTim0wznVxRsgURSbPQPAcIHzYTjyyJ44xyXbPPl_@mail.gmail.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>
	<20100607201434.66d9bdbd@pitrou.net>
	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com>
	<20100607205207.5532b939@pitrou.net>
	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com>
	<20100607215622.56961a58@pitrou.net>
	<AANLkTim0wznVxRsgURSbPQPAcIHzYTjyyJ44xyXbPPl_@mail.gmail.com>
Message-ID: <AANLkTilnjewzGKcGnAkoUgRnmUf0vrk2uPnojB8dYsM_@mail.gmail.com>

On Mon, Jun 7, 2010 at 10:09 PM, Ian Bicking <ianb at colorstudy.com> wrote:
[..]
>
> Also we have implicit dependency management and versioning already because a
> few libraries in the standard library have .egg-info files installed.

If you are talking about wsgiref, that's the only one that has the
egg-info in the stdlib, and it will go away soonish (probably in 3.2) since:

- this format will be replaced by the new dist-info (PEP 376).
- it was a mistake imho to add it, since wsgiref was not a
distribution in that case


>
> --
> Ian Bicking  |  http://blog.ianbicking.org
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> http://mail.python.org/mailman/listinfo/python-ideas
>
>



-- 
Tarek Ziadé | http://ziade.org


From eric at trueblade.com  Mon Jun  7 23:20:48 2010
From: eric at trueblade.com (Eric Smith)
Date: Mon, 07 Jun 2010 17:20:48 -0400
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <AANLkTilpOhcCbDaZ4p4OsZNmrFDyNhyZlbRcbfD5w20a@mail.gmail.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>	<20100607201434.66d9bdbd@pitrou.net>	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com>	<20100607205207.5532b939@pitrou.net>	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com>	<AANLkTimO5N5xyyXXr_xmvI-IFJx2Tmcwg2NScuoz5l8K@mail.gmail.com>	<AANLkTinWEflE9EL-CYD41h2yfPT4ocndTEyeSm3fGvuD@mail.gmail.com>	<AANLkTikm-KJNt37PuJFNJiNxnIGw0XlsofCnB2ydZGBn@mail.gmail.com>	<AANLkTinVyFfkTPFSkeiNMHWD4OM-jQKtXGF6b4J4G8Ri@mail.gmail.com>
	<AANLkTilpOhcCbDaZ4p4OsZNmrFDyNhyZlbRcbfD5w20a@mail.gmail.com>
Message-ID: <4C0D62B0.4020000@trueblade.com>

Tarek Ziadé wrote:
> On Mon, Jun 7, 2010 at 10:00 PM, Ian Bicking <ianb at colorstudy.com> wrote:
>> On Mon, Jun 7, 2010 at 2:57 PM, Tarek Ziadé <ziade.tarek at gmail.com> wrote:
>> If there are important bugs we'll have to work around them.
>> If there are added features we'll have to ignore them.
> 
> Not for the bug fixes because they will likely to be backported in all
> versions. (3.3 and 2.7)
> 
> Now for new features, if pip uses the latest 2.x and the latest 3.x
> versions, you will get them.
> I am not sure why you would have to ignore them.  You would probably want to
> use the new features when they are released, and still make your code
> work with older versions.

There's no way for the new features to show up in 3.3, is there? You 
can't add them to a micro release, and you can't replace a module in the 
standard library. I think that's Ian's point.

pip could use the new features in 3.4, and it could get the new features 
in 2.x if the users were willing to install the updated library, since 
it's not in the stdlib. But for 3.3 you'd be stuck.

-- 
Eric.


From ianb at colorstudy.com  Mon Jun  7 23:22:56 2010
From: ianb at colorstudy.com (Ian Bicking)
Date: Mon, 7 Jun 2010 16:22:56 -0500
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <AANLkTilpOhcCbDaZ4p4OsZNmrFDyNhyZlbRcbfD5w20a@mail.gmail.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com> 
	<20100607201434.66d9bdbd@pitrou.net>
	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com> 
	<20100607205207.5532b939@pitrou.net>
	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com> 
	<AANLkTimO5N5xyyXXr_xmvI-IFJx2Tmcwg2NScuoz5l8K@mail.gmail.com> 
	<AANLkTinWEflE9EL-CYD41h2yfPT4ocndTEyeSm3fGvuD@mail.gmail.com> 
	<AANLkTikm-KJNt37PuJFNJiNxnIGw0XlsofCnB2ydZGBn@mail.gmail.com> 
	<AANLkTinVyFfkTPFSkeiNMHWD4OM-jQKtXGF6b4J4G8Ri@mail.gmail.com> 
	<AANLkTilpOhcCbDaZ4p4OsZNmrFDyNhyZlbRcbfD5w20a@mail.gmail.com>
Message-ID: <AANLkTikbNoTHzRat0JBrWlC-jybw7SJBgsOlblq9CtBP@mail.gmail.com>

On Mon, Jun 7, 2010 at 3:40 PM, Tarek Ziadé <ziade.tarek at gmail.com> wrote:

> On Mon, Jun 7, 2010 at 10:00 PM, Ian Bicking <ianb at colorstudy.com> wrote:
> > On Mon, Jun 7, 2010 at 2:57 PM, Tarek Ziadé <ziade.tarek at gmail.com>
> wrote:
> >>
> >> Now for older versions of Python, I will provide a backport at PyPI,
> >> so people can use
> >> it under Python 2.x. This backport will probably be made with the
> >> trunk so the 2.x line has the latest
> >> code. IOW the latest 2.7 release might be more advanced than the one
> >> provided in 3.3 for example,
> >> but I don't see this as a problem.
> >
> > It means that, at least for pip, distutils2 3.3 would be effectively the
> > last version.
>
> To make sure it's clear: The latest would be 3.4 here, with its
> backport in 2.7 and an older version
> in 3.3.
>

The latest version pip could *depend on* would be 3.3, meaning all
subsequent releases would be relatively unimportant.  If you have to work
around a known-bad version of something, then all later versions become
liabilities instead of improvements (unless the workaround is egregiously
ugly).

I'd be okay saying that pip could require 3.4+, but not if it means Python
3.3 users would be excluded.

> > If there are important bugs we'll have to work around them.
> > If there are added features we'll have to ignore them.
>
> Not for the bug fixes because they will likely to be backported in all
> versions. (3.3 and 2.7)
>

Only in minor releases, which are themselves new Python releases and take a
long time to be widely enough distributed to depend on the bug fix.  Though
admittedly that's less time than waiting for a major release to die.

> Now for new features, if pip uses the latest 2.x and the latest 3.x
> versions, you will get them.
> I am not sure why you would have to ignore them.  You would probably want
> to
> use the new features when they are released, and still make your code
> work with older versions.
>

A feature that may or may not be available is not a useful feature.  We'll
just have to backport distutils2 features in an ad hoc way using conditional
imports and other nuisances.

> This is not a new problem btw: if you want to support several versions
> of Python, you have to
> work around the differences.
>

Yes, but because the standard library changes so little it's not too bad,
and in some cases we can rely on backports, and otherwise we simply ignore
new functionality.

> Example: There's a big bug in tarfile in Python 2.4, and I had to
> backport part of the 2.5 version for a while
> in my 2.4 projects. That doesn't mean I don't want tarfile to be in
> the stdlib.
>

Instead of conditionally monkeypatching tarfile, I'd rather just have a
known-good version of tarfile.  And maybe in a sense that is a solution; why
try to patch tarfile at all, why not just include swaths of the standard
library inline with libraries?  Right now, typically, in projects I've noticed
we carefully tease apart a library's bugs when monkeypatching in an
upgrade... but that's probably not worth it.  OTOH, I don't think people
would be happy if I just included all of distutils2 in pip with some
sys.path magic to "upgrade" distutils2 as needed.  But then... it might be
the most reasonable approach.
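The sys.path approach being weighed above can be sketched in a few lines; the `_vendor` directory name is an invented convention for illustration, not an existing pip or distutils2 layout:

```python
import os
import sys

def prefer_vendored(base_dir):
    """Prepend base_dir/_vendor to sys.path so a bundled, known-good copy
    of a library shadows the installation-wide (possibly buggy) one.
    Sketch only; real vendoring also has to worry about already-imported
    modules in sys.modules."""
    vendor = os.path.join(base_dir, "_vendor")
    if os.path.isdir(vendor) and vendor not in sys.path:
        sys.path.insert(0, vendor)
    return vendor
```

Whether a bundled copy should be allowed to shadow the stdlib like this is precisely the policy question of this thread.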

Or another option, allow versioning of portions of the standard library as
need demands.  Versioned portions of the standard library may still be quite
constrained with respect to backward compatibility, but at least there would
be an orderly way to handle backports and for libraries to require bugfixes
instead of monkeypatching them in.  Maybe an additional constraint would be
that all new features have to be in new modules or via new names, and so
upgrades would be additive and less likely to affect backward
compatibility.  And we just keep bad stuff out of the standard library
(since perhaps the PyXML lesson is that we conflated a namespace issue with
what was simply a lot of bad code).

As an example of how this might have worked, unittest enhancements would
have been in a separate module, or as a TestCase subclass (TestCase2), or
something else highly unintrusive, and the result could be installed in any
version of Python with little danger of conflicts.  I.e., API versioning
(for the standard library only) gets pushed into module and class names and
isn't separate metadata.

-- 
Ian Bicking  |  http://blog.ianbicking.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20100607/23275974/attachment.html>

From solipsis at pitrou.net  Mon Jun  7 23:27:12 2010
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 7 Jun 2010 23:27:12 +0200
Subject: [Python-ideas] Moving development out of the standard library
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>
	<20100607201434.66d9bdbd@pitrou.net>
	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com>
	<20100607205207.5532b939@pitrou.net>
	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com>
	<20100607215622.56961a58@pitrou.net>
	<AANLkTim0wznVxRsgURSbPQPAcIHzYTjyyJ44xyXbPPl_@mail.gmail.com>
Message-ID: <20100607232712.037570f0@pitrou.net>

On Mon, 7 Jun 2010 15:09:36 -0500
Ian Bicking <ianb at colorstudy.com> wrote:
> > I guess what I'm asking is: what is your line of reasoning?
> > You started with a contention that:
> >
> > "There is no reason any new library or functionality should be tied to a
> > Python release"
> >
> > and, in my humble opinion, you failed to demonstrate that. In
> > particular, you haven't replied to my argument that it
> > dramatically eases dependency management.
> >
> 
> It only eases dependency management in closed systems with a fixed Python
> version.  If you support more than one Python version then you still have
> dependency management, it is just tied to ad hoc workarounds when there are
> compatibility problems, or ignoring new functionality.

We're misunderstanding each other. I'm talking about dependency
management for users (or application developers), you seem to be talking
about dependency management for library developers.

As for "ad hoc workarounds" and various "compatibility problems", they
wouldn't disappear if the stdlib became smaller; I have trouble
understanding what kind of solution you think would eliminate these
issues.

> The importance of "old" vs. "new" modules is that people tend to have a
> lowest version of Python they support, as simply a hard stop.

Yes, and it's the same for external modules too. For example, they will
support Twisted 8.0 and later. So what's the difference?

> I am not
> aware of anything added since 2.5 that isn't backported or previously
> available as a separate library (I'm sure there's *something*, just nothing
> I can think of).

Really, I'm too lazy to go and read the changelogs, but there
definitely are many improvements that are not available in Python 2.5
and older.

> There is no clear policy about how backports are managed.

That's because, contrary to what you seem to think, external backports
are the exception and not the rule.

Regards

Antoine.




From ianb at colorstudy.com  Tue Jun  8 00:16:17 2010
From: ianb at colorstudy.com (Ian Bicking)
Date: Mon, 7 Jun 2010 17:16:17 -0500
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <20100607232712.037570f0@pitrou.net>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com> 
	<20100607201434.66d9bdbd@pitrou.net>
	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com> 
	<20100607205207.5532b939@pitrou.net>
	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com> 
	<20100607215622.56961a58@pitrou.net>
	<AANLkTim0wznVxRsgURSbPQPAcIHzYTjyyJ44xyXbPPl_@mail.gmail.com> 
	<20100607232712.037570f0@pitrou.net>
Message-ID: <AANLkTinNymcueXkd3tZaumxFJC_eXJeYvhsJENBq9TfT@mail.gmail.com>

On Mon, Jun 7, 2010 at 4:27 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:

>  > I am not
> > aware of anything added since 2.5 that isn't backported or previously
> > available as a separate library (I'm sure there's *something*, just
> nothing
> > I can think of).
>
>
> Really, I'm too lazy to go and read the changelogs, but there
> definitely are many improvements that are not available in Python 2.5
> and older.
>
> > There is no clear policy about how backports are managed.
>
> That's because, contrary to what you seem to think, external backports
> are the exception and not the rule.
>

I offered examples, you were too lazy to read the changelogs, your sweeping
declaration does not seem justified.

From What's New in Python 2.6:

contextlib: attached to a language feature.
multiprocessing: backported
new string formatting: though a method, it'd be easy to produce in a module
form.  I'm not aware of a backport.
abstract base classes: I don't think this can be backported without all
kinds of contortions
ast: associated with the implementation/language
json: backported (implicitly, as it is simplejson)
plistlib: I'm guessing this was added to support packaging.  It exists
separately.

What's New in Python 2.7:
argparse: implicitly backported
changes to logging: not sure what will happen with this; the module has been
backported in the past
memoryview: not aware of it being backported
importlib: technically including it in 2.7 is a backport, but otherwise no
ttk: appears to be backported (http://pypi.python.org/pypi/pyttk/)
unittest: backported
ElementTree: backported

Digging deeper into 2.5:
functools: apparently backported at one time, now defunct
ctypes: the backport appears to be dead
sqlite3: actively developed (different name?)
wsgiref: backported
hashlib: backported

Every release there's some additions to collections, which have not been
backported.

So in summary, of 17 additions which seemed "backportable" to me (not
counting 3 modules that seemed tied to language features):
* 4 were not backported
* 3 have defunct or ambiguous backports
* 10 were backported

-- 
Ian Bicking  |  http://blog.ianbicking.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20100607/04a733d0/attachment.html>

From solipsis at pitrou.net  Tue Jun  8 00:32:30 2010
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Tue, 08 Jun 2010 00:32:30 +0200
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <AANLkTinNymcueXkd3tZaumxFJC_eXJeYvhsJENBq9TfT@mail.gmail.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>
	<20100607201434.66d9bdbd@pitrou.net>
	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com>
	<20100607205207.5532b939@pitrou.net>
	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com>
	<20100607215622.56961a58@pitrou.net>
	<AANLkTim0wznVxRsgURSbPQPAcIHzYTjyyJ44xyXbPPl_@mail.gmail.com>
	<20100607232712.037570f0@pitrou.net>
	<AANLkTinNymcueXkd3tZaumxFJC_eXJeYvhsJENBq9TfT@mail.gmail.com>
Message-ID: <1275949950.3222.22.camel@localhost.localdomain>

On Monday, 7 June 2010 at 17:16 -0500, Ian Bicking wrote:
> I offered examples, you were too lazy to read the changelogs, your
> sweeping declaration does not seem justified.

Sure, but my sweeping declaration is justified by the fact that I'm a
daily contributor to Python core and know what kinds of things happen
here.

> What's New in Python 2.7:
[snip]

Your list seems to forget lots of module-specific improvements.
There are many more things in
http://docs.python.org/dev/whatsnew/2.7.html#new-and-improved-modules ,
and most of them aren't "backported" in any fashion.

> So in summary, of 17 additions which seemed "backportable" to me (not
> counting 3 modules that seemed tied to language features):
> * 4 were not backported
> * 3 have defunct or ambiguous backports
> * 10 were backported

Of course this all depends on your definition of "backportable". If you
remove that arbitrary conditional, the fact remains that most
improvements, small or big, don't get backported.

Regards

Antoine.




From fuzzyman at voidspace.org.uk  Tue Jun  8 00:45:21 2010
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Mon, 7 Jun 2010 23:45:21 +0100
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <4C0D62B0.4020000@trueblade.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>
	<20100607201434.66d9bdbd@pitrou.net>
	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com>
	<20100607205207.5532b939@pitrou.net>
	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com>
	<AANLkTimO5N5xyyXXr_xmvI-IFJx2Tmcwg2NScuoz5l8K@mail.gmail.com>
	<AANLkTinWEflE9EL-CYD41h2yfPT4ocndTEyeSm3fGvuD@mail.gmail.com>
	<AANLkTikm-KJNt37PuJFNJiNxnIGw0XlsofCnB2ydZGBn@mail.gmail.com>
	<AANLkTinVyFfkTPFSkeiNMHWD4OM-jQKtXGF6b4J4G8Ri@mail.gmail.com>
	<AANLkTilpOhcCbDaZ4p4OsZNmrFDyNhyZlbRcbfD5w20a@mail.gmail.com>
	<4C0D62B0.4020000@trueblade.com>
Message-ID: <AANLkTinhk8X8Q8Q8bMPJgUPdoCl9xlCdtA6mKJttvcyW@mail.gmail.com>

On 7 June 2010 22:20, Eric Smith <eric at trueblade.com> wrote:

> Tarek Ziadé wrote:
>
>> On Mon, Jun 7, 2010 at 10:00 PM, Ian Bicking <ianb at colorstudy.com> wrote:
>>
>>> On Mon, Jun 7, 2010 at 2:57 PM, Tarek Ziadé <ziade.tarek at gmail.com>
>>> wrote:
>>> If there are important bugs we'll have to work around them.
>>> If there are added features we'll have to ignore them.
>>>
>>
>> Not for the bug fixes because they will likely be backported in all
>> versions. (3.3 and 2.7)
>>
>> Now for new features, if pip uses the latest 2.x and the latest 3.x
>> versions, you will get them.
>> I am not sure why you would have to ignore them.  You would probably want
>> to
>> use the new features when they are released, and still make your code
>> work with older versions.
>>
>
> There's no way for the new features to show up in 3.3, is there? You can't
> add them to a micro release, and you can't replace a module in the standard
> library. I think that's Ian's point.
>
>

But that's no different to pip using *any* standard library module. If you
want to support Python 2.4 you can't use os.path.relpath (or you have to
provide it yourself anyway) for example.

Michael


> pip could use the new features in 3.4, and it could get the new features in
> 2.x if the users were willing to install the updated library, since it's not
> in the stdlib. But for 3.3 you'd be stuck.
>
> --
> Eric.
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> http://mail.python.org/mailman/listinfo/python-ideas
>



-- 
http://www.voidspace.org.uk
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20100607/c14fda95/attachment.html>

From ianb at colorstudy.com  Tue Jun  8 03:49:46 2010
From: ianb at colorstudy.com (Ian Bicking)
Date: Mon, 7 Jun 2010 20:49:46 -0500
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <AANLkTinhk8X8Q8Q8bMPJgUPdoCl9xlCdtA6mKJttvcyW@mail.gmail.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com> 
	<20100607201434.66d9bdbd@pitrou.net>
	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com> 
	<20100607205207.5532b939@pitrou.net>
	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com> 
	<AANLkTimO5N5xyyXXr_xmvI-IFJx2Tmcwg2NScuoz5l8K@mail.gmail.com> 
	<AANLkTinWEflE9EL-CYD41h2yfPT4ocndTEyeSm3fGvuD@mail.gmail.com> 
	<AANLkTikm-KJNt37PuJFNJiNxnIGw0XlsofCnB2ydZGBn@mail.gmail.com> 
	<AANLkTinVyFfkTPFSkeiNMHWD4OM-jQKtXGF6b4J4G8Ri@mail.gmail.com> 
	<AANLkTilpOhcCbDaZ4p4OsZNmrFDyNhyZlbRcbfD5w20a@mail.gmail.com> 
	<4C0D62B0.4020000@trueblade.com>
	<AANLkTinhk8X8Q8Q8bMPJgUPdoCl9xlCdtA6mKJttvcyW@mail.gmail.com>
Message-ID: <AANLkTim2gJKAZtQgCLK473QhGlUyHdLMS3vOebU9M_N4@mail.gmail.com>

On Mon, Jun 7, 2010 at 5:45 PM, Michael Foord <fuzzyman at voidspace.org.uk>wrote:

> On 7 June 2010 22:20, Eric Smith <eric at trueblade.com> wrote:
>
>> Tarek Ziadé wrote:
>>
>>> On Mon, Jun 7, 2010 at 10:00 PM, Ian Bicking <ianb at colorstudy.com>
>>> wrote:
>>>
>>>> On Mon, Jun 7, 2010 at 2:57 PM, Tarek Ziadé <ziade.tarek at gmail.com>
>>>> wrote:
>>>> If there are important bugs we'll have to work around them.
>>>> If there are added features we'll have to ignore them.
>>>>
>>>
>>> Not for the bug fixes because they will likely be backported in all
>>> versions. (3.3 and 2.7)
>>>
>>> Now for new features, if pip uses the latest 2.x and the latest 3.x
>>> versions, you will get them.
>>> I am not sure why you would have to ignore them.  You would probably want
>>> to
>>> use the new features when they are released, and still make your code
>>> work with older versions.
>>>
>>
>> There's no way for the new features to show up in 3.3, is there? You can't
>> add them to a micro release, and you can't replace a module in the standard
>> library. I think that's Ian's point.
>>
>
> But that's no different to pip using *any* standard library module. If you
> want to support Python 2.4 you can't use os.path.relpath (or you have to
> provide it yourself anyway) for example.
>

This is part of why I don't care about reforming or modifying what's in the
standard library now -- I know the constraints well, and they can't be
changed.  I'm solely concerned about new functionality which need not repeat
this pattern.

-- 
Ian Bicking  |  http://blog.ianbicking.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20100607/a283b21c/attachment.html>

From rrr at ronadam.com  Tue Jun  8 07:29:20 2010
From: rrr at ronadam.com (Ron Adam)
Date: Tue, 08 Jun 2010 00:29:20 -0500
Subject: [Python-ideas] stdlib upgrades
In-Reply-To: <AANLkTim1W9EMv3ZJqBt3I4GsSxq-VA1xyMf3lYTs302X@mail.gmail.com>
References: <AANLkTin3FPrbml8TjGvwKr3EQw1LpspcuqtB3aXxq9M-@mail.gmail.com>	<AANLkTikJ68fG0_om6Y6NYZ5SomnUrdkP_Vqpu8laeMPR@mail.gmail.com>	<4C0C92AE.5030105@ronadam.com>
	<AANLkTim1W9EMv3ZJqBt3I4GsSxq-VA1xyMf3lYTs302X@mail.gmail.com>
Message-ID: <4C0DD530.8000907@ronadam.com>



On 06/07/2010 09:37 AM, Bruce Frederiksen wrote:
> Or perhaps:
>
>    from experimental import new_module
>
> This is kind of a guarantee that the interface will change; since at
> some point, if new_module is "calcified", this will have to be changed
> to just:
>
>    import new_module
>
> For experimental language features, maybe:
>
>    from __experimental__ import new_feature
>
> This makes it clear that new_feature may change (perhaps even not be
> adopted?), vs the from __future__ semantics.
>
> Is it too complicated to try to differentiate between the decision of
> whether some capability will be provided or not, vs ironing out the API
> for that capability?
>
> For example,
>
>    from experimental import new_capability
>
> means that there is no commitment for new_capability at all -- it may
> simply be dropped entirely.  The danger of using this is that
> new_capability may simply disappear completely with no replacement.
>
> While,
>
>    from proposed import new_capability
>
> represents a commitment that new_capability will be provided at some
> point, but the API will likely change.  Here the danger of using it is
> that you will likely have to change your program to conform to a new API.
>
> A capability might start as "experimental", and if the value of it is
> demonstrated, move to "proposed" to work out the details before
> mainstreaming it.
>
> -Bruce

I was thinking of something a bit more limited in scope, i.e. only those
modules already accepted for inclusion in a future release.  But you have
the concept I was thinking of right.

However, experimental and proposed libraries cover quite a lot more.  I
think the svn sandbox directory sort of serves that purpose now, although
it isn't organized in any way that makes it easy to tell what is
experimental, proposed, accepted and under active development, or just
lying around for future or past reference.

Maybe a bit of organizing the sandbox with categorized sub-folders would be 
good?

I really was just throwing out an idea about limiting some of the problems
caused by after-the-fact fixes or changes.  I.e., give a new module a bit
more exposure to a wider audience before it's actually included.

Ron
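[The warn-on-import idea discussed in this thread could be sketched roughly
as follows. Note that `ExperimentalWarning` and `load_experimental` are
hypothetical names invented for illustration, not an actual stdlib API:]

```python
import warnings


class ExperimentalWarning(UserWarning):
    """Hypothetical category for stdlib modules whose API may still change."""


def load_experimental(name):
    """Import `name`, warning the caller that the module is experimental."""
    warnings.warn(
        "%s is experimental; its API may change before it calcifies" % name,
        ExperimentalWarning,
        stacklevel=2,
    )
    return __import__(name)


# Demonstration: pretend 'json' is an experimental module.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    json = load_experimental("json")
```

[Callers would see the warning once per import site and could silence it
with the standard `warnings` filters once they accept the risk.]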


> On Mon, Jun 7, 2010 at 2:33 AM, Ron Adam <rrr at ronadam.com
> <mailto:rrr at ronadam.com>> wrote:
>
>
>     On 06/01/2010 08:22 PM, Brett Cannon wrote:
>
>         I can only see two scenarios that might be considered acceptable to
>         address these issues.
>
>         One is that when new modules are accepted into the stdlib they are
flagged with an ExperimentalWarning so that people know that no
>         backwards-compatibility promises have been made yet. That gets the
>         module more exposure and gets python-dev real-world feedback to fix
>         issues before the module calcifies into a strong
>         backwards-compatibility. With that experience more proper decisions
>         can be made as to how to change things (e.g. the logging module's
>         default timestamp including microseconds which strptime cannot
>         parse).
>
>
>     Would it be possible to have a future_lib that gets enabled with
>     something like...
>
>        from __future__ import future_lib
>
>     These *new* library modules and packages won't be visible by
>     default. Maybe they stay there until the next major version or
>     possible some set period of time.
>
>     Ron
>
>     _______________________________________________
>     Python-ideas mailing list
>     Python-ideas at python.org <mailto:Python-ideas at python.org>
>     http://mail.python.org/mailman/listinfo/python-ideas
>
>


From ziade.tarek at gmail.com  Tue Jun  8 10:05:26 2010
From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=)
Date: Tue, 8 Jun 2010 10:05:26 +0200
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <4C0D62B0.4020000@trueblade.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>
	<20100607201434.66d9bdbd@pitrou.net>
	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com>
	<20100607205207.5532b939@pitrou.net>
	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com>
	<AANLkTimO5N5xyyXXr_xmvI-IFJx2Tmcwg2NScuoz5l8K@mail.gmail.com>
	<AANLkTinWEflE9EL-CYD41h2yfPT4ocndTEyeSm3fGvuD@mail.gmail.com>
	<AANLkTikm-KJNt37PuJFNJiNxnIGw0XlsofCnB2ydZGBn@mail.gmail.com>
	<AANLkTinVyFfkTPFSkeiNMHWD4OM-jQKtXGF6b4J4G8Ri@mail.gmail.com>
	<AANLkTilpOhcCbDaZ4p4OsZNmrFDyNhyZlbRcbfD5w20a@mail.gmail.com>
	<4C0D62B0.4020000@trueblade.com>
Message-ID: <AANLkTimPoUwlGZrSk-IKlkuJEHV2fRzhS08QCxbCOMIG@mail.gmail.com>

On Mon, Jun 7, 2010 at 11:20 PM, Eric Smith <eric at trueblade.com> wrote:
> Tarek Ziadé wrote:
>>
>> On Mon, Jun 7, 2010 at 10:00 PM, Ian Bicking <ianb at colorstudy.com> wrote:
>>>
>>> On Mon, Jun 7, 2010 at 2:57 PM, Tarek Ziadé <ziade.tarek at gmail.com>
>>> wrote:
>>> If there are important bugs we'll have to work around them.
>>> If there are added features we'll have to ignore them.
>>
>> Not for the bug fixes because they will likely be backported in all
>> versions. (3.3 and 2.7)
>>
>> Now for new features, if pip uses the latest 2.x and the latest 3.x
>> versions, you will get them.
>> I am not sure why you would have to ignore them.  You would probably want
>> to
>> use the new features when they are released, and still make your code
>> work with older versions.
>
> There's no way for the new features to show up in 3.3, is there? You can't
> add them to a micro release, and you can't replace a module in the standard
> library. I think that's Ian's point.

Yes, I understood that. My point is that you can adapt your software when
the Python version you use is an old one and you don't have the latest
feature.

That's how we work with all modules/packages from the stdlib, and features are
added at every Python version. The stdlib is not frozen.

>
> pip could use the new features in 3.4, and it could get the new features in
> 2.x if the users were willing to install the updated library, since it's not
> in the stdlib. But for 3.3 you'd be stuck.

Not stuck, but you will definitely need to deal with it in your project,
and this is not new.  That's why I suggested earlier authorizing partial
updates of the stdlib, but it seemed that the idea was disliked because of
the complexity it would bring.

Regards
Tarek

>
> --
> Eric.
>



-- 
Tarek Ziadé | http://ziade.org


From ziade.tarek at gmail.com  Tue Jun  8 10:08:04 2010
From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=)
Date: Tue, 8 Jun 2010 10:08:04 +0200
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <AANLkTim2gJKAZtQgCLK473QhGlUyHdLMS3vOebU9M_N4@mail.gmail.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>
	<20100607201434.66d9bdbd@pitrou.net>
	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com>
	<20100607205207.5532b939@pitrou.net>
	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com>
	<AANLkTimO5N5xyyXXr_xmvI-IFJx2Tmcwg2NScuoz5l8K@mail.gmail.com>
	<AANLkTinWEflE9EL-CYD41h2yfPT4ocndTEyeSm3fGvuD@mail.gmail.com>
	<AANLkTikm-KJNt37PuJFNJiNxnIGw0XlsofCnB2ydZGBn@mail.gmail.com>
	<AANLkTinVyFfkTPFSkeiNMHWD4OM-jQKtXGF6b4J4G8Ri@mail.gmail.com>
	<AANLkTilpOhcCbDaZ4p4OsZNmrFDyNhyZlbRcbfD5w20a@mail.gmail.com>
	<4C0D62B0.4020000@trueblade.com>
	<AANLkTinhk8X8Q8Q8bMPJgUPdoCl9xlCdtA6mKJttvcyW@mail.gmail.com>
	<AANLkTim2gJKAZtQgCLK473QhGlUyHdLMS3vOebU9M_N4@mail.gmail.com>
Message-ID: <AANLkTilTIiqseepQnlPH0AMOuUL9GirUFWPRLO3dMjWt@mail.gmail.com>

On Tue, Jun 8, 2010 at 3:49 AM, Ian Bicking <ianb at colorstudy.com> wrote:
> On Mon, Jun 7, 2010 at 5:45 PM, Michael Foord <fuzzyman at voidspace.org.uk>
> wrote:
>>
>> On 7 June 2010 22:20, Eric Smith <eric at trueblade.com> wrote:
>>>
>>> Tarek Ziadé wrote:
>>>>
>>>> On Mon, Jun 7, 2010 at 10:00 PM, Ian Bicking <ianb at colorstudy.com>
>>>> wrote:
>>>>>
>>>>> On Mon, Jun 7, 2010 at 2:57 PM, Tarek Ziadé <ziade.tarek at gmail.com>
>>>>> wrote:
>>>>> If there are important bugs we'll have to work around them.
>>>>> If there are added features we'll have to ignore them.
>>>>
>>>> Not for the bug fixes because they will likely be backported in all
>>>> versions. (3.3 and 2.7)
>>>>
>>>> Now for new features, if pip uses the latest 2.x and the latest 3.x
>>>> versions, you will get them.
>>>>> I am not sure why you would have to ignore them.  You would probably
>>>> want to
>>>> use the new features when they are released, and still make your code
>>>> work with older versions.
>>>
>>> There's no way for the new features to show up in 3.3, is there? You
>>> can't add them to a micro release, and you can't replace a module in the
>>> standard library. I think that's Ian's point.
>>
>> But that's no different to pip using *any* standard library module. If you
>> want to support Python 2.4 you can't use os.path.relpath (or you have to
>> provide it yourself anyway) for example.
>
> This is part of why I don't care about reforming or modifying what's in the
> standard library now -- I know the constraints well, and they can't be
> changed.  I'm solely concerned about new functionality which need not repeat
> this pattern.

Are you suggesting freezing stdlib development, so that you don't have to
deal with different Python versions at your level?

If so, that doesn't sound right, because making the "batteries included"
evolve is (imho) part of the Python spirit, and the constraints we are
talking about are not as big a problem as you seem to think.  I don't find
it extremely hard to cope with various Python versions.



>
> --
> Ian Bicking  |  http://blog.ianbicking.org
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> http://mail.python.org/mailman/listinfo/python-ideas
>
>



-- 
Tarek Ziadé | http://ziade.org


From tjreedy at udel.edu  Tue Jun  8 21:33:30 2010
From: tjreedy at udel.edu (Terry Reedy)
Date: Tue, 08 Jun 2010 15:33:30 -0400
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <4C0D62B0.4020000@trueblade.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>	<20100607201434.66d9bdbd@pitrou.net>	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com>	<20100607205207.5532b939@pitrou.net>	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com>	<AANLkTimO5N5xyyXXr_xmvI-IFJx2Tmcwg2NScuoz5l8K@mail.gmail.com>	<AANLkTinWEflE9EL-CYD41h2yfPT4ocndTEyeSm3fGvuD@mail.gmail.com>	<AANLkTikm-KJNt37PuJFNJiNxnIGw0XlsofCnB2ydZGBn@mail.gmail.com>	<AANLkTinVyFfkTPFSkeiNMHWD4OM-jQKtXGF6b4J4G8Ri@mail.gmail.com>	<AANLkTilpOhcCbDaZ4p4OsZNmrFDyNhyZlbRcbfD5w20a@mail.gmail.com>
	<4C0D62B0.4020000@trueblade.com>
Message-ID: <hum5u9$fl9$1@dough.gmane.org>

On 6/7/2010 5:20 PM, Eric Smith wrote:

> pip could use the new features in 3.4, and it could get the new features
> in 2.x if the users were willing to install the updated library, since
> it's not in the stdlib. But for 3.3 you'd be stuck.

I see no reason why 3.3 users could not also download the 3.4 version of
something just as well as users of earlier versions (who might well
already have the 3.3 version).




From anfedorov at gmail.com  Sun Jun  6 22:40:36 2010
From: anfedorov at gmail.com (Andrey Fedorov)
Date: Sun, 6 Jun 2010 16:40:36 -0400
Subject: [Python-ideas] @setattr(obj, [name])
In-Reply-To: <AANLkTimtybi_8DsQ_PmTrSi8MUZrJcT9gyMZF2ab3RW4@mail.gmail.com>
References: <AANLkTikYSlC8kBqfa_7RWg_9y9IQYkXL1BZ0QLcW4FsO@mail.gmail.com> 
	<AANLkTilKn80Vjkl2WNecN6vw73vuz7yvk4k3RqwJ_XqS@mail.gmail.com> 
	<AANLkTimtybi_8DsQ_PmTrSi8MUZrJcT9gyMZF2ab3RW4@mail.gmail.com>
Message-ID: <AANLkTik8-5LeCz0pPK69qEiElfczjAFRKmgfUwzB3_es@mail.gmail.com>

George Sakkis wrote:

> Still "method_of" is not quite right either since it can also be used as a
> class decorator
>

Great point.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20100606/15a69599/attachment.html>

From ncoghlan at gmail.com  Tue Jun  8 22:56:35 2010
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 09 Jun 2010 06:56:35 +1000
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <hum5u9$fl9$1@dough.gmane.org>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>	<20100607201434.66d9bdbd@pitrou.net>	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com>	<20100607205207.5532b939@pitrou.net>	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com>	<AANLkTimO5N5xyyXXr_xmvI-IFJx2Tmcwg2NScuoz5l8K@mail.gmail.com>	<AANLkTinWEflE9EL-CYD41h2yfPT4ocndTEyeSm3fGvuD@mail.gmail.com>	<AANLkTikm-KJNt37PuJFNJiNxnIGw0XlsofCnB2ydZGBn@mail.gmail.com>	<AANLkTinVyFfkTPFSkeiNMHWD4OM-jQKtXGF6b4J4G8Ri@mail.gmail.com>	<AANLkTilpOhcCbDaZ4p4OsZNmrFDyNhyZlbRcbfD5w20a@mail.gmail.com>	<4C0D62B0.4020000@trueblade.com>
	<hum5u9$fl9$1@dough.gmane.org>
Message-ID: <4C0EAE83.30300@gmail.com>

On 09/06/10 05:33, Terry Reedy wrote:
> On 6/7/2010 5:20 PM, Eric Smith wrote:
>
>> pip could use the new features in 3.4, and it could get the new features
>> in 2.x if the users were willing to install the updated library, since
>> it's not in the stdlib. But for 3.3 you'd be stuck.
>
> I see no reason why 3.3 users could not also download the 3.4 version of
> something just as well as user of earlier versions (who well might
> already have the 3.3 version).

I believe Eric was pointing out the fact that, by default, there is no 
directory on sys.path that will override the standard library versions 
of a module or package for all applications using that interpreter 
installation.

So you're forced to either resort to destructive replacement (actually 
overwriting the standard library module on disk) or else tinkering with 
sys.path in each app or a library to insert an "override" directory 
before the normal standard library paths.

It sometimes seems to me that, for the advocates of a more granular 
standard library, proposing standardisation of such an override 
directory would be an interesting way to test the waters (since it would 
make it much easier to drop in backported updates to standard library 
modules).
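[A minimal sketch of the "override directory" idea, assuming a hypothetical
helper name `add_stdlib_override` and a made-up module `backport_demo` for
illustration; nothing here is an actual Python mechanism:]

```python
import os
import sys
import tempfile


def add_stdlib_override(directory):
    """Prepend `directory` to sys.path so its modules shadow the stdlib."""
    directory = os.path.abspath(directory)
    if directory in sys.path:
        sys.path.remove(directory)
    sys.path.insert(0, directory)  # searched before the stdlib entries


# Demonstration with a throwaway directory and a made-up module name.
override_dir = tempfile.mkdtemp()
with open(os.path.join(override_dir, "backport_demo.py"), "w") as f:
    f.write("VERSION = 'backported'\n")

add_stdlib_override(override_dir)
import backport_demo  # resolved from the override directory
```

[A standardised version of this would put such a directory on sys.path at
interpreter startup for all applications, which is what per-app
sys.path tinkering currently has to simulate.]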

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------


From ianb at colorstudy.com  Tue Jun  8 23:01:34 2010
From: ianb at colorstudy.com (Ian Bicking)
Date: Tue, 8 Jun 2010 16:01:34 -0500
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <4C0EAE83.30300@gmail.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com> 
	<20100607201434.66d9bdbd@pitrou.net>
	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com> 
	<20100607205207.5532b939@pitrou.net>
	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com> 
	<AANLkTimO5N5xyyXXr_xmvI-IFJx2Tmcwg2NScuoz5l8K@mail.gmail.com> 
	<AANLkTinWEflE9EL-CYD41h2yfPT4ocndTEyeSm3fGvuD@mail.gmail.com> 
	<AANLkTikm-KJNt37PuJFNJiNxnIGw0XlsofCnB2ydZGBn@mail.gmail.com> 
	<AANLkTinVyFfkTPFSkeiNMHWD4OM-jQKtXGF6b4J4G8Ri@mail.gmail.com> 
	<AANLkTilpOhcCbDaZ4p4OsZNmrFDyNhyZlbRcbfD5w20a@mail.gmail.com> 
	<4C0D62B0.4020000@trueblade.com> <hum5u9$fl9$1@dough.gmane.org> 
	<4C0EAE83.30300@gmail.com>
Message-ID: <AANLkTinTguZqJJbcL11z69PCseIcEIslnjniCsLuqcUg@mail.gmail.com>

On Tue, Jun 8, 2010 at 3:56 PM, Nick Coghlan <ncoghlan at gmail.com> wrote:

> On 09/06/10 05:33, Terry Reedy wrote:
>
>> On 6/7/2010 5:20 PM, Eric Smith wrote:
>>
>>  pip could use the new features in 3.4, and it could get the new features
>>> in 2.x if the users were willing to install the updated library, since
>>> it's not in the stdlib. But for 3.3 you'd be stuck.
>>>
>>
>> I see no reason why 3.3 users could not also download the 3.4 version of
>> something just as well as user of earlier versions (who well might
>> already have the 3.3 version).
>>
>
> I believe Eric was pointing out the fact that, by default, there is no
> directory on sys.path that will override the standard library versions of a
> module or package for all applications using that interpreter installation.
>
> So you're forced to either resort to destructive replacement (actually
> overwriting the standard library module on disk) or else tinkering with
> sys.path in each app or a library to insert an "override" directory before
> the normal standard library paths.
>
> It sometimes seems to me that, for the advocates of a more granular
> standard library, proposing standardisation of such an override directory
> would be an interesting way to test the waters (since it would make it much
> easier to drop in backported updates to standard library modules).
>

Setuptools uses .pth file hackery to handle this case (not to intentionally
override the stdlib, but it would also do that), and I think it could apply
here as well.  Also the way Setuptools installs eggs might be useful in this
case, as it makes it very obvious in tracebacks what version of a module is
being used (and if it is from the standard library).  Even if the particular
mechanics were revisited, I think these basic ideas would be helpful.
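[For reference, the .pth trick relies on site.py exec()ing any .pth line
that begins with "import". A line like the following, dropped into
site-packages, would insert an override directory ahead of the stdlib at
interpreter startup; the '/opt/stdlib-overrides' path is hypothetical:]

```python
import sys

# A setuptools-style .pth line: site.py executes .pth lines that start
# with "import", so installing this line into site-packages runs it at
# every interpreter startup, before any application code.
PTH_LINE = "import sys; sys.path.insert(0, '/opt/stdlib-overrides')"

# site.py effectively does the following for each such line:
exec(PTH_LINE)
```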

-- 
Ian Bicking  |  http://blog.ianbicking.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20100608/932a96d9/attachment.html>

From ziade.tarek at gmail.com  Tue Jun  8 23:08:57 2010
From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=)
Date: Tue, 8 Jun 2010 23:08:57 +0200
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <AANLkTinTguZqJJbcL11z69PCseIcEIslnjniCsLuqcUg@mail.gmail.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>
	<20100607201434.66d9bdbd@pitrou.net>
	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com>
	<20100607205207.5532b939@pitrou.net>
	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com>
	<AANLkTimO5N5xyyXXr_xmvI-IFJx2Tmcwg2NScuoz5l8K@mail.gmail.com>
	<AANLkTinWEflE9EL-CYD41h2yfPT4ocndTEyeSm3fGvuD@mail.gmail.com>
	<AANLkTikm-KJNt37PuJFNJiNxnIGw0XlsofCnB2ydZGBn@mail.gmail.com>
	<AANLkTinVyFfkTPFSkeiNMHWD4OM-jQKtXGF6b4J4G8Ri@mail.gmail.com>
	<AANLkTilpOhcCbDaZ4p4OsZNmrFDyNhyZlbRcbfD5w20a@mail.gmail.com>
	<4C0D62B0.4020000@trueblade.com> <hum5u9$fl9$1@dough.gmane.org>
	<4C0EAE83.30300@gmail.com>
	<AANLkTinTguZqJJbcL11z69PCseIcEIslnjniCsLuqcUg@mail.gmail.com>
Message-ID: <AANLkTinjA4zc33_Pccv2xvy1su-L9Nqhsi6YQzsbK0Im@mail.gmail.com>

On Tue, Jun 8, 2010 at 11:01 PM, Ian Bicking <ianb at colorstudy.com> wrote:
> On Tue, Jun 8, 2010 at 3:56 PM, Nick Coghlan <ncoghlan at gmail.com> wrote:
>>
>> On 09/06/10 05:33, Terry Reedy wrote:
>>>
>>> On 6/7/2010 5:20 PM, Eric Smith wrote:
>>>
>>>> pip could use the new features in 3.4, and it could get the new features
>>>> in 2.x if the users were willing to install the updated library, since
>>>> it's not in the stdlib. But for 3.3 you'd be stuck.
>>>
>>> I see no reason why 3.3 users could not also download the 3.4 version of
>>> something just as well as user of earlier versions (who well might
>>> already have the 3.3 version).
>>
>> I believe Eric was pointing out the fact that, by default, there is no
>> directory on sys.path that will override the standard library versions of a
>> module or package for all applications using that interpreter installation.
>>
>> So you're forced to either resort to destructive replacement (actually
>> overwriting the standard library module on disk) or else tinkering with
>> sys.path in each app or a library to insert an "override" directory before
>> the normal standard library paths.
>>
>> It sometimes seems to me that, for the advocates of a more granular
>> standard library, proposing standardisation of such an override directory
>> would be an interesting way to test the waters (since it would make it much
>> easier to drop in backported updates to standard library modules).
>
> Setuptools uses .pth file hackery to handle this case (not to intentionally
> override the stdlib, but it would also do that), and I think it could apply
> here as well.  Also the way Setuptools installs eggs might be useful in this
> case, as it makes it very obvious in tracebacks what version of a module is
> being used (and if it is from the standard library).  Even if the particular
> mechanics were revisited, I think these basic ideas would be helpful.

The problem is, any project would start overriding the stdlib to fix
things or change some behavior, unless this is somehow controlled.

In the proposal I made earlier to partially update the stdlib, I
suggested having a specific area on PyPI for distributions that are
"blessed" to override stdlib packages/modules.
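
For what it's worth, the override-directory mechanism under discussion can
be sketched in a few lines; the directory name below is purely
illustrative, since no such convention exists today:

```python
import os
import sys

# Hypothetical location for "blessed" stdlib overrides -- the name is
# illustrative only; no such convention actually exists.
OVERRIDE_DIR = os.path.expanduser('~/.python/overrides')

def enable_overrides(directory=OVERRIDE_DIR):
    """Insert an override directory ahead of the stdlib on sys.path,
    so any module installed there shadows its stdlib counterpart."""
    if directory not in sys.path:
        sys.path.insert(0, directory)

enable_overrides()
```

Setuptools gets roughly the same effect today with a .pth file whose
lines start with "import" and are executed at interpreter startup,
which is the "hackery" referred to above.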


From stephen at xemacs.org  Wed Jun  9 02:30:33 2010
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Wed, 09 Jun 2010 09:30:33 +0900
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <AANLkTinjA4zc33_Pccv2xvy1su-L9Nqhsi6YQzsbK0Im@mail.gmail.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>
	<20100607201434.66d9bdbd@pitrou.net>
	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com>
	<20100607205207.5532b939@pitrou.net>
	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com>
	<AANLkTimO5N5xyyXXr_xmvI-IFJx2Tmcwg2NScuoz5l8K@mail.gmail.com>
	<AANLkTinWEflE9EL-CYD41h2yfPT4ocndTEyeSm3fGvuD@mail.gmail.com>
	<AANLkTikm-KJNt37PuJFNJiNxnIGw0XlsofCnB2ydZGBn@mail.gmail.com>
	<AANLkTinVyFfkTPFSkeiNMHWD4OM-jQKtXGF6b4J4G8Ri@mail.gmail.com>
	<AANLkTilpOhcCbDaZ4p4OsZNmrFDyNhyZlbRcbfD5w20a@mail.gmail.com>
	<4C0D62B0.4020000@trueblade.com> <hum5u9$fl9$1@dough.gmane.org>
	<4C0EAE83.30300@gmail.com>
	<AANLkTinTguZqJJbcL11z69PCseIcEIslnjniCsLuqcUg@mail.gmail.com>
	<AANLkTinjA4zc33_Pccv2xvy1su-L9Nqhsi6YQzsbK0Im@mail.gmail.com>
Message-ID: <878w6pgi8m.fsf@uwakimon.sk.tsukuba.ac.jp>

Tarek Ziadé writes:

 > The problem is, any project would start overriding the stdlib to
 > fix things or change some behavior, unless this is somehow
 > controlled.

But in the end, that's precisely what you propose to do yourself with
"partial stdlib upgrades"!

It's just that you trust yourself more than you trust "any project".
But that just doesn't fly from the point of the third party clients of
the stdlib.  Either stability of any particular version's stdlib
applies to the stdlib developers too, or it doesn't really apply at
all.



From ziade.tarek at gmail.com  Wed Jun  9 09:17:59 2010
From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=)
Date: Wed, 9 Jun 2010 09:17:59 +0200
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <878w6pgi8m.fsf@uwakimon.sk.tsukuba.ac.jp>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>
	<20100607201434.66d9bdbd@pitrou.net>
	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com>
	<20100607205207.5532b939@pitrou.net>
	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com>
	<AANLkTimO5N5xyyXXr_xmvI-IFJx2Tmcwg2NScuoz5l8K@mail.gmail.com>
	<AANLkTinWEflE9EL-CYD41h2yfPT4ocndTEyeSm3fGvuD@mail.gmail.com>
	<AANLkTikm-KJNt37PuJFNJiNxnIGw0XlsofCnB2ydZGBn@mail.gmail.com>
	<AANLkTinVyFfkTPFSkeiNMHWD4OM-jQKtXGF6b4J4G8Ri@mail.gmail.com>
	<AANLkTilpOhcCbDaZ4p4OsZNmrFDyNhyZlbRcbfD5w20a@mail.gmail.com>
	<4C0D62B0.4020000@trueblade.com> <hum5u9$fl9$1@dough.gmane.org>
	<4C0EAE83.30300@gmail.com>
	<AANLkTinTguZqJJbcL11z69PCseIcEIslnjniCsLuqcUg@mail.gmail.com>
	<AANLkTinjA4zc33_Pccv2xvy1su-L9Nqhsi6YQzsbK0Im@mail.gmail.com>
	<878w6pgi8m.fsf@uwakimon.sk.tsukuba.ac.jp>
Message-ID: <AANLkTikOPJ-SqWyS1cNjmBXT12lDb0gDOarMnoW4IyjJ@mail.gmail.com>

On Wed, Jun 9, 2010 at 2:30 AM, Stephen J. Turnbull <stephen at xemacs.org> wrote:
> Tarek Ziadé writes:
>
>  > The problem is, any project would start overriding the stdlib to
>  > fix things or change some behavior, unless this is somehow
>  > controlled.
>
> But in the end, that's precisely what you propose to do yourself with
> "partial stdlib upgrades"!
>
> It's just that you trust yourself more than you trust "any project".
> But that just doesn't fly from the point of the third party clients of
> the stdlib.  Either stability of any particular version's stdlib
> applies to the stdlib developers too, or it doesn't really apply at
> all.

If the maintainer of unittest, for example, provides an upgrade for that
package, don't you think we can trust him to provide a more stable upgrade
for the unittest package in the stdlib than another project that implements
its own unittest package?

So no, I don't think you can compare a potential upgrade from a stdlib
package maintainer with an upgrade issued by someone else.

So by "controlled" I mean releasing official upgrades of the stdlib, so
people know they were built by the same maintainers.




-- 
Tarek Ziadé | http://ziade.org


From stephen at xemacs.org  Wed Jun  9 10:25:06 2010
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Wed, 09 Jun 2010 17:25:06 +0900
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <AANLkTikOPJ-SqWyS1cNjmBXT12lDb0gDOarMnoW4IyjJ@mail.gmail.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>
	<20100607201434.66d9bdbd@pitrou.net>
	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com>
	<20100607205207.5532b939@pitrou.net>
	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com>
	<AANLkTimO5N5xyyXXr_xmvI-IFJx2Tmcwg2NScuoz5l8K@mail.gmail.com>
	<AANLkTinWEflE9EL-CYD41h2yfPT4ocndTEyeSm3fGvuD@mail.gmail.com>
	<AANLkTikm-KJNt37PuJFNJiNxnIGw0XlsofCnB2ydZGBn@mail.gmail.com>
	<AANLkTinVyFfkTPFSkeiNMHWD4OM-jQKtXGF6b4J4G8Ri@mail.gmail.com>
	<AANLkTilpOhcCbDaZ4p4OsZNmrFDyNhyZlbRcbfD5w20a@mail.gmail.com>
	<4C0D62B0.4020000@trueblade.com> <hum5u9$fl9$1@dough.gmane.org>
	<4C0EAE83.30300@gmail.com>
	<AANLkTinTguZqJJbcL11z69PCseIcEIslnjniCsLuqcUg@mail.gmail.com>
	<AANLkTinjA4zc33_Pccv2xvy1su-L9Nqhsi6YQzsbK0Im@mail.gmail.com>
	<878w6pgi8m.fsf@uwakimon.sk.tsukuba.ac.jp>
	<AANLkTikOPJ-SqWyS1cNjmBXT12lDb0gDOarMnoW4IyjJ@mail.gmail.com>
Message-ID: <87zkz4fw9p.fsf@uwakimon.sk.tsukuba.ac.jp>

Tarek Ziadé writes:

 > If the maintainer of unittest for example, provides an upgrade for this
 > package, don't you think we can trust that he will provide a more
 > stable upgrade for the unittest package in the stdlib than another
 > project that would implement a unittest package ?

For the users who really care about this, it's not a question of
relative stability.

Either the only changes in documented behavior involve successful
completion of jobs that used to fail, or instability has been
introduced.  For many people (though a small fraction) that is *very
bad*, and they have complained vociferously in the past.

I really don't understand where the big benefits are to having minor
improvements introduced in bugfix releases.  People who want those
benefits should upgrade to a more recent series.  The people who
really need them but must stick to an older series for Python itself
can get the most recent version of the few packages that have
"must-have" improvements from PyPI.

"No behavior changes in micro releases" is an easily understood,
reasonably easily followed policy.  The policy you propose requires
judgment calls that will differ from module maintainer to module
maintainer, and every upgrade will involve discussion on python-dev.
Yuck.



From ziade.tarek at gmail.com  Wed Jun  9 11:05:15 2010
From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=)
Date: Wed, 9 Jun 2010 11:05:15 +0200
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <87zkz4fw9p.fsf@uwakimon.sk.tsukuba.ac.jp>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>
	<20100607201434.66d9bdbd@pitrou.net>
	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com>
	<20100607205207.5532b939@pitrou.net>
	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com>
	<AANLkTimO5N5xyyXXr_xmvI-IFJx2Tmcwg2NScuoz5l8K@mail.gmail.com>
	<AANLkTinWEflE9EL-CYD41h2yfPT4ocndTEyeSm3fGvuD@mail.gmail.com>
	<AANLkTikm-KJNt37PuJFNJiNxnIGw0XlsofCnB2ydZGBn@mail.gmail.com>
	<AANLkTinVyFfkTPFSkeiNMHWD4OM-jQKtXGF6b4J4G8Ri@mail.gmail.com>
	<AANLkTilpOhcCbDaZ4p4OsZNmrFDyNhyZlbRcbfD5w20a@mail.gmail.com>
	<4C0D62B0.4020000@trueblade.com> <hum5u9$fl9$1@dough.gmane.org>
	<4C0EAE83.30300@gmail.com>
	<AANLkTinTguZqJJbcL11z69PCseIcEIslnjniCsLuqcUg@mail.gmail.com>
	<AANLkTinjA4zc33_Pccv2xvy1su-L9Nqhsi6YQzsbK0Im@mail.gmail.com>
	<878w6pgi8m.fsf@uwakimon.sk.tsukuba.ac.jp>
	<AANLkTikOPJ-SqWyS1cNjmBXT12lDb0gDOarMnoW4IyjJ@mail.gmail.com>
	<87zkz4fw9p.fsf@uwakimon.sk.tsukuba.ac.jp>
Message-ID: <AANLkTikg90bU9vF1GBuv2JR-xm2InnN_feuC85RPtvTv@mail.gmail.com>

On Wed, Jun 9, 2010 at 10:25 AM, Stephen J. Turnbull <stephen at xemacs.org> wrote:
[..]
>
> I really don't understand where the big benefits are to having minor
> improvements introduced in bugfix releases.

To try to find a solution to the problems described earlier in this thread.

If I summarize the threads so far as I understood them: people don't want
to rely on stdlib packages because their release cycle is too slow for
what they want/need to do with them *today*. A package that enters the
stdlib suffers from being slowed down. That slowdown is also a huge
benefit in many ways: stability, blessing, etc.

The initial reason is that Ian doesn't want pip to depend on distutils2 if
it's in the stdlib, because (I guess) he would have to cope with various
versions of Python to make sure his users get the same set of features.

So he needs to provide his own backports of any new distutils2 features.

If we can find a way to facilitate this work, that would be great. IOW, if
we can somehow provide a backport of these features so that some projects
can use it no matter what the Python version is...

And "not putting distutils2 in the stdlib" is not the solution, because
this is a problem for all the packages in there.

That's exactly what unittest currently does (but under a new name,
"unittest2"), and as soon as Python 2.7 final is out, unittest will have
the same problem: it won't be able to backport new features under the
same namespace anymore.


[..]
> "No behavior changes in micro releases" is an easily understood,
> reasonably easily followed policy.  The policy you propose requires
> judgment calls that will differ from module maintainer to module
> maintainer, and every upgrade will involve discussion on python-dev.
> Yuck.

This idea of mine was more or less rejected earlier on Python-ideas, if you
read back the last two or three threads on the topic. I have reintroduced
it here because someone proposed allowing stdlib packages to be overridden.
If that were to happen, I'd rather have upgrades decided on python-dev than
arbitrary ones. But yes, that's too much burden...

Maybe the status quo is the best idea :)

Regards
Tarek


From konryd at gmail.com  Wed Jun  9 11:20:14 2010
From: konryd at gmail.com (Konrad Delong)
Date: Wed, 9 Jun 2010 11:20:14 +0200
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <AANLkTim0wznVxRsgURSbPQPAcIHzYTjyyJ44xyXbPPl_@mail.gmail.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com> 
	<20100607201434.66d9bdbd@pitrou.net>
	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com> 
	<20100607205207.5532b939@pitrou.net>
	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com> 
	<20100607215622.56961a58@pitrou.net>
	<AANLkTim0wznVxRsgURSbPQPAcIHzYTjyyJ44xyXbPPl_@mail.gmail.com>
Message-ID: <AANLkTik2syjva_o1Dvothtk0mU-xqQ8ahnr3BM7jtird@mail.gmail.com>

On 7 June 2010 22:09, Ian Bicking <ianb at colorstudy.com> wrote:
> On Mon, Jun 7, 2010 at 2:56 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:
>>
>> On Mon, 7 Jun 2010 14:20:41 -0500
>> Ian Bicking <ianb at colorstudy.com> wrote:
>> > On Mon, Jun 7, 2010 at 1:52 PM, Antoine Pitrou
>> > <solipsis-xNDA5Wrcr86sTnJN9+BGXg at public.gmane.org> wrote:
>> >
>> > >  > I say there is consensus because as far as I know anything
>> > > substantial
>> > > has a
>> > > > maintained version outside the standard library; argparse is
>> > > > implicitly,
>> > > > unittest is unittest2, ElementTree always has maintained a separate
>> > > > existence, simplejson implicitly.
>> > >
>> > >
>> > > "Anything substantial" is more than exaggerated. The modules you are
>> > > mentioning are exceptions, two of which may even be temporary
>> > > (argparse
>> > > and unittest2). Most sdtlib modules don't have external releases, and
>> > > many of them are still "substantial".
>> > >
>> >
>> > Most other modules are very old.
>>
>> Well, even if that's true (I haven't checked and I guess we wouldn't
>> agree on the meaning of "old"), so what?
>> I guess what I'm asking is: what is your line of reasoning?
>> You started with a contention that:
>>
>> "There is no reason any new library or functionality should be tied to a
>> Python release"
>>
>> and, in my humble opinion, you failed to demonstrate that. In
>> particular, you haven't replied to my argument that it
>> dramatically eases dependency management.
>
> It only eases dependency management in closed systems with a fixed Python
> version.  If you support more than one Python version then you still have
> dependency management, it is just tied to ad hoc workarounds when there are
> compatibility problems, or ignoring new functionality.
>
> The importance of "old" vs. "new" modules is that people tend to have a
> lowest version of Python they support, as simply a hard stop.  This is
> currently Python 2.5 for most people, 2.4 for some groups (and just a very
> few stragglers with 2.3).  So long as you never use anything beyond what 2.5
> provides then it's okay, which is most of the standard library.  I am not
> aware of anything added since 2.5 that isn't backported or previously
> available as a separate library (I'm sure there's *something*, just nothing
> I can think of).
>
> There is no clear policy about how backports are managed.

Which gives me an idea:

What if all the backports were managed within a single PyPI package
(e.g. backport26, backport27), with a clear policy on which older
Python versions are supported?

then I could write in my py2.4 script:

from backport26.os.path import relpath

Konrad


From ncoghlan at gmail.com  Wed Jun  9 13:08:13 2010
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 09 Jun 2010 21:08:13 +1000
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <AANLkTikg90bU9vF1GBuv2JR-xm2InnN_feuC85RPtvTv@mail.gmail.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com>	<20100607205207.5532b939@pitrou.net>	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com>	<AANLkTimO5N5xyyXXr_xmvI-IFJx2Tmcwg2NScuoz5l8K@mail.gmail.com>	<AANLkTinWEflE9EL-CYD41h2yfPT4ocndTEyeSm3fGvuD@mail.gmail.com>	<AANLkTikm-KJNt37PuJFNJiNxnIGw0XlsofCnB2ydZGBn@mail.gmail.com>	<AANLkTinVyFfkTPFSkeiNMHWD4OM-jQKtXGF6b4J4G8Ri@mail.gmail.com>	<AANLkTilpOhcCbDaZ4p4OsZNmrFDyNhyZlbRcbfD5w20a@mail.gmail.com>	<4C0D62B0.4020000@trueblade.com>
	<hum5u9$fl9$1@dough.gmane.org>	<4C0EAE83.30300@gmail.com>	<AANLkTinTguZqJJbcL11z69PCseIcEIslnjniCsLuqcUg@mail.gmail.com>	<AANLkTinjA4zc33_Pccv2xvy1su-L9Nqhsi6YQzsbK0Im@mail.gmail.com>	<878w6pgi8m.fsf@uwakimon.sk.tsukuba.ac.jp>	<AANLkTikOPJ-SqWyS1cNjmBXT12lDb0gDOarMnoW4IyjJ@mail.gmail.com>	<87zkz4fw9p.fsf@uwakimon.sk.tsukuba.ac.jp>
	<AANLkTikg90bU9vF1GBuv2JR-xm2InnN_feuC85RPtvTv@mail.gmail.com>
Message-ID: <4C0F761D.80600@gmail.com>

On 09/06/10 19:05, Tarek Ziadé wrote:
> And "not putting distutils2 in the stdlib" is not the solution because
> this is a problem
> for all packages in there.
>
> That's exactly what unittest currently does (but with a new name "unittest2")
> and as soon as Python 2.7 final will be out, unittest will have the
> same problem:
> it won't be able to backport new features anymore under the same namespace.

Something we may want to seriously consider is maintaining parallel 
releases of packages indefinitely when the benefits are deemed to 
justify the additional overheads.

So, for example, many of the current features of unittest2 will be 
available in unittest in 2.7 and 3.2. This makes those features 
available to those that rely almost entirely on standard library 
functionality rather than third party packages. In the meantime, users 
of previous versions of Python can already use unittest2, and that 
package will work *without name conflicts* in both 2.7 and 3.2, even 
though many of its features have been added to the standard library.
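
The usual way for code to take advantage of such a parallel release today
is the fallback import idiom (unittest2 is the real package; the pattern
itself is just a sketch of common practice, not anything mandated):

```python
try:
    # Prefer the separately released package with the newer features...
    import unittest2 as unittest
except ImportError:
    # ...and fall back to the stdlib version when it isn't installed.
    import unittest

class ExampleTest(unittest.TestCase):
    def test_truth(self):
        self.assertTrue(True)
```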

unittest2 may then even go through a few external releases before being 
synced up again with the standard library's unittest when 3.3 comes around.

Something similar may turn out to be a good idea for distutils2: rather 
than consider the anticipated merge back into distutils prior to 3.3 the 
end of the road, instead continue to use distutils2 to release faster 
updates while evolving the API design towards 3.4.

Users then have the choice - the solid, stable standard library version, 
or the distinctly named, more rapidly updated PyPI version.

As others have suggested, this namespace separation approach could be 
standardised through the use of a PEP 382 namespace package so that 
users could choose between (e.g.) "unittest" and "distutils" and 
"cutting_edge.unittest" and "cutting_edge.distutils" (with the latter 
being the regularly updated, new features and all, PyPI versions, and 
the former the traditional stable API, bugfix-only standard library 
versions). That would probably be an improvement over the current ad hoc 
approach to naming separation for standard library updates.

I don't see any way to ever resolve the two competing goal sets 
(stability vs latest features) without permanently maintaining separate 
namespaces.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------


From stephen at xemacs.org  Wed Jun  9 13:18:12 2010
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Wed, 9 Jun 2010 20:18:12 +0900
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <AANLkTik2syjva_o1Dvothtk0mU-xqQ8ahnr3BM7jtird@mail.gmail.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>
	<20100607201434.66d9bdbd@pitrou.net>
	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com>
	<20100607205207.5532b939@pitrou.net>
	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com>
	<20100607215622.56961a58@pitrou.net>
	<AANLkTim0wznVxRsgURSbPQPAcIHzYTjyyJ44xyXbPPl_@mail.gmail.com>
	<AANLkTik2syjva_o1Dvothtk0mU-xqQ8ahnr3BM7jtird@mail.gmail.com>
Message-ID: <19471.30836.817195.129782@uwakimon.sk.tsukuba.ac.jp>

Konrad Delong writes:

 > What if all the backports were managed within a single PyPI package,
 > (e.g. backport26, backport27) with clear policy on older Python
 > versions supported.

This is an interesting idea.  I thought about something similar, but I
decided that this would basically end up being the same problem as
managing the stdlib (I could be wrong, of course, but I'm reasonably
confident of that :-), while decreasing returns would have long since
set in.  I.e., even though the effort would probably be quite a bit
smaller than managing the stdlib itself, the benefits would decrease
more rapidly (that's just a wild-ass guess, and since you bring it up,
I'm curious to see what folks will say).



From stephen at xemacs.org  Wed Jun  9 13:22:14 2010
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Wed, 9 Jun 2010 20:22:14 +0900
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <AANLkTikg90bU9vF1GBuv2JR-xm2InnN_feuC85RPtvTv@mail.gmail.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>
	<20100607201434.66d9bdbd@pitrou.net>
	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com>
	<20100607205207.5532b939@pitrou.net>
	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com>
	<AANLkTimO5N5xyyXXr_xmvI-IFJx2Tmcwg2NScuoz5l8K@mail.gmail.com>
	<AANLkTinWEflE9EL-CYD41h2yfPT4ocndTEyeSm3fGvuD@mail.gmail.com>
	<AANLkTikm-KJNt37PuJFNJiNxnIGw0XlsofCnB2ydZGBn@mail.gmail.com>
	<AANLkTinVyFfkTPFSkeiNMHWD4OM-jQKtXGF6b4J4G8Ri@mail.gmail.com>
	<AANLkTilpOhcCbDaZ4p4OsZNmrFDyNhyZlbRcbfD5w20a@mail.gmail.com>
	<4C0D62B0.4020000@trueblade.com> <hum5u9$fl9$1@dough.gmane.org>
	<4C0EAE83.30300@gmail.com>
	<AANLkTinTguZqJJbcL11z69PCseIcEIslnjniCsLuqcUg@mail.gmail.com>
	<AANLkTinjA4zc33_Pccv2xvy1su-L9Nqhsi6YQzsbK0Im@mail.gmail.com>
	<878w6pgi8m.fsf@uwakimon.sk.tsukuba.ac.jp>
	<AANLkTikOPJ-SqWyS1cNjmBXT12lDb0gDOarMnoW4IyjJ@mail.gmail.com>
	<87zkz4fw9p.fsf@uwakimon.sk.tsukuba.ac.jp>
	<AANLkTikg90bU9vF1GBuv2JR-xm2InnN_feuC85RPtvTv@mail.gmail.com>
Message-ID: <19471.31078.149844.95310@uwakimon.sk.tsukuba.ac.jp>

Tarek Ziadé writes:

 > Maybe the status quo is the best idea :)

+1 to that, and I think your post stated the issues concisely and
fairly.  (I add that since I guess you tend to lean the other way; I
don't want to claim you're arguing for my position. :-)

Regards,


From dstanek at dstanek.com  Wed Jun  9 13:34:56 2010
From: dstanek at dstanek.com (David Stanek)
Date: Wed, 9 Jun 2010 07:34:56 -0400
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <4C0F761D.80600@gmail.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>
	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com>
	<20100607205207.5532b939@pitrou.net>
	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com>
	<AANLkTimO5N5xyyXXr_xmvI-IFJx2Tmcwg2NScuoz5l8K@mail.gmail.com>
	<AANLkTinWEflE9EL-CYD41h2yfPT4ocndTEyeSm3fGvuD@mail.gmail.com>
	<AANLkTikm-KJNt37PuJFNJiNxnIGw0XlsofCnB2ydZGBn@mail.gmail.com>
	<AANLkTinVyFfkTPFSkeiNMHWD4OM-jQKtXGF6b4J4G8Ri@mail.gmail.com>
	<AANLkTilpOhcCbDaZ4p4OsZNmrFDyNhyZlbRcbfD5w20a@mail.gmail.com>
	<4C0D62B0.4020000@trueblade.com> <hum5u9$fl9$1@dough.gmane.org>
	<4C0EAE83.30300@gmail.com>
	<AANLkTinTguZqJJbcL11z69PCseIcEIslnjniCsLuqcUg@mail.gmail.com>
	<AANLkTinjA4zc33_Pccv2xvy1su-L9Nqhsi6YQzsbK0Im@mail.gmail.com>
	<878w6pgi8m.fsf@uwakimon.sk.tsukuba.ac.jp>
	<AANLkTikOPJ-SqWyS1cNjmBXT12lDb0gDOarMnoW4IyjJ@mail.gmail.com>
	<87zkz4fw9p.fsf@uwakimon.sk.tsukuba.ac.jp>
	<AANLkTikg90bU9vF1GBuv2JR-xm2InnN_feuC85RPtvTv@mail.gmail.com>
	<4C0F761D.80600@gmail.com>
Message-ID: <AANLkTikcvc-1sVySF8uuNRWCJJEESnLAPiMcFUSVTFpt@mail.gmail.com>

On Wed, Jun 9, 2010 at 7:08 AM, Nick Coghlan <ncoghlan at gmail.com> wrote:
> On 09/06/10 19:05, Tarek Ziadé wrote:
>>
>> And "not putting distutils2 in the stdlib" is not the solution because
>> this is a problem
>> for all packages in there.
>>
>> That's exactly what unittest currently does (but with a new name "unittest2")
>> and as soon as Python 2.7 final will be out, unittest will have the
>> same problem:
>> it won't be able to backport new features anymore under the same
>> namespace.
>
> Something we may want to seriously consider is maintaining parallel releases
> of packages indefinitely when the benefits are deemed to justify the
> additional overheads.
>

I had a very similar thought. Why not have all the real development of
those packages happen outside of the standard library and just grab the
latest stable version when cutting a new version of Python?

Namespaces are fine, but I'd be happy enough with a way for these
packages to show up on the Python path before the stdlib.

-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek


From solipsis at pitrou.net  Wed Jun  9 13:38:43 2010
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 9 Jun 2010 13:38:43 +0200
Subject: [Python-ideas] Moving development out of the standard library
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>
	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com>
	<AANLkTimO5N5xyyXXr_xmvI-IFJx2Tmcwg2NScuoz5l8K@mail.gmail.com>
	<AANLkTinWEflE9EL-CYD41h2yfPT4ocndTEyeSm3fGvuD@mail.gmail.com>
	<AANLkTikm-KJNt37PuJFNJiNxnIGw0XlsofCnB2ydZGBn@mail.gmail.com>
	<AANLkTinVyFfkTPFSkeiNMHWD4OM-jQKtXGF6b4J4G8Ri@mail.gmail.com>
	<AANLkTilpOhcCbDaZ4p4OsZNmrFDyNhyZlbRcbfD5w20a@mail.gmail.com>
	<4C0D62B0.4020000@trueblade.com> <hum5u9$fl9$1@dough.gmane.org>
	<4C0EAE83.30300@gmail.com>
	<AANLkTinTguZqJJbcL11z69PCseIcEIslnjniCsLuqcUg@mail.gmail.com>
	<AANLkTinjA4zc33_Pccv2xvy1su-L9Nqhsi6YQzsbK0Im@mail.gmail.com>
	<878w6pgi8m.fsf@uwakimon.sk.tsukuba.ac.jp>
	<AANLkTikOPJ-SqWyS1cNjmBXT12lDb0gDOarMnoW4IyjJ@mail.gmail.com>
	<87zkz4fw9p.fsf@uwakimon.sk.tsukuba.ac.jp>
	<AANLkTikg90bU9vF1GBuv2JR-xm2InnN_feuC85RPtvTv@mail.gmail.com>
	<4C0F761D.80600@gmail.com>
	<AANLkTikcvc-1sVySF8uuNRWCJJEESnLAPiMcFUSVTFpt@mail.gmail.com>
Message-ID: <20100609133843.14c6c19f@pitrou.net>

On Wed, 9 Jun 2010 07:34:56 -0400
David Stanek <dstanek at dstanek.com> wrote:
> 
> I had a very similar thought. Why not have all the real development of
> those packages happen outside of the standard lib and just grab the
> latest stable version when cutting a new version of Python.

-1.  We have had too much trouble with externally-maintained modules
such as elementtree and json.  The Python SVN (or hg) tree should be the
primary place where development takes place.

Regards

Antoine.




From ncoghlan at gmail.com  Wed Jun  9 14:13:51 2010
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 09 Jun 2010 22:13:51 +1000
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <AANLkTikcvc-1sVySF8uuNRWCJJEESnLAPiMcFUSVTFpt@mail.gmail.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com>	<AANLkTimO5N5xyyXXr_xmvI-IFJx2Tmcwg2NScuoz5l8K@mail.gmail.com>	<AANLkTinWEflE9EL-CYD41h2yfPT4ocndTEyeSm3fGvuD@mail.gmail.com>	<AANLkTikm-KJNt37PuJFNJiNxnIGw0XlsofCnB2ydZGBn@mail.gmail.com>	<AANLkTinVyFfkTPFSkeiNMHWD4OM-jQKtXGF6b4J4G8Ri@mail.gmail.com>	<AANLkTilpOhcCbDaZ4p4OsZNmrFDyNhyZlbRcbfD5w20a@mail.gmail.com>	<4C0D62B0.4020000@trueblade.com>	<hum5u9$fl9$1@dough.gmane.org>	<4C0EAE83.30300@gmail.com>	<AANLkTinTguZqJJbcL11z69PCseIcEIslnjniCsLuqcUg@mail.gmail.com>	<AANLkTinjA4zc33_Pccv2xvy1su-L9Nqhsi6YQzsbK0Im@mail.gmail.com>	<878w6pgi8m.fsf@uwakimon.sk.tsukuba.ac.jp>	<AANLkTikOPJ-SqWyS1cNjmBXT12lDb0gDOarMnoW4IyjJ@mail.gmail.com>	<87zkz4fw9p.fsf@uwakimon.sk.tsukuba.ac.jp>	<AANLkTikg90bU9vF1GBuv2JR-xm2InnN_feuC85RPtvTv@mail.gmail.com>	<4C0F761D.80600@gmail.com>
	<AANLkTikcvc-1sVySF8uuNRWCJJEESnLAPiMcFUSVTFpt@mail.gmail.com>
Message-ID: <4C0F857F.4060509@gmail.com>

On 09/06/10 21:34, David Stanek wrote:
> On Wed, Jun 9, 2010 at 7:08 AM, Nick Coghlan<ncoghlan at gmail.com>  wrote:
>> Something we may want to seriously consider is maintaining parallel releases
>> of packages indefinitely when the benefits are deemed to justify the
>> additional overheads.
>>
>
> I had a very similar thought. Why not have all the real development of
> those packages happen outside of the standard lib and just grab the
> latest stable version when cutting a new version of Python.
>
> Namespaces are fine, but I'd be happy enough with a way for these
> packages to show up on the Python path before the stdlib.

Another parallel that occurs to me is the stable/testing/unstable 
distinction in Debian - if you want rock solid (but old) you stick with 
stable, if you want comparatively cutting edge you go with unstable and 
testing splits the difference.

Ubuntu's "normal release" vs "Long Term Support release" distinction is 
also worth thinking about. Most home desktop users will upgrade every 6 
months (modulo rocky transitions like the upgrade to KDE 4), while 
corporate users will wait for the next LTS to arrive.

To bring that back to a Python context and revisit the proposal that 
Guido rejected for biennial Python releases with annual standard library 
releases, suppose that, instead of the standard library itself having 
annual releases, there was a python-dev maintained (or even third-party 
maintained, python-dev blessed) "pybackports" project.

The idea being, that package would not only make many new standard 
library features of the current Python release available on widely used 
previous versions, but also make upcoming features of the *next* Python 
release available on the current version. The periods of support would 
be shorter, to reflect the faster release cycle.

Third party libraries and applications could then either target the 
"rock solid" market and depend solely on the standard library, or go for 
the "latest and greatest" crowd and require pybackports.

Make no mistake, something like this would be quite a bit of work, but 
it would be far from impossible and could go a long way towards fighting 
the impression that the standard library is the place where modules go 
to die.

TLDR version:

Year A: Python 3.x release, pybackports 3.x.0 release
Year B: No Python release, pybackports 3.x.5 release (with 3.y features)
Year C: Python 3.y release, pybackports 3.y.0 release
Year D: No Python release, pybackports 3.y.5 release (with 3.z features)
...etc

Unlike Python itself, Pybackports would not provide bugfix support for 
prior releases (since it would only be for the latest-and-greatest 
crowd, those that want the longer support should stick with the standard 
library).

Something to think about, anyway.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------


From p.f.moore at gmail.com  Wed Jun  9 14:40:37 2010
From: p.f.moore at gmail.com (Paul Moore)
Date: Wed, 9 Jun 2010 13:40:37 +0100
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <AANLkTik2syjva_o1Dvothtk0mU-xqQ8ahnr3BM7jtird@mail.gmail.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>
	<20100607201434.66d9bdbd@pitrou.net>
	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com>
	<20100607205207.5532b939@pitrou.net>
	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com>
	<20100607215622.56961a58@pitrou.net>
	<AANLkTim0wznVxRsgURSbPQPAcIHzYTjyyJ44xyXbPPl_@mail.gmail.com>
	<AANLkTik2syjva_o1Dvothtk0mU-xqQ8ahnr3BM7jtird@mail.gmail.com>
Message-ID: <AANLkTin0EnHxhDZD6hoCBR9f_-GPm1Rj8eRvVgPDHPUt@mail.gmail.com>

On 9 June 2010 10:20, Konrad Delong <konryd at gmail.com> wrote:
> Which gives me an idea:
>
> What if all the backports were managed within a single PyPI package,
> (e.g. backport26, backport27) with clear policy on older Python
> versions supported.
>
> then I could write in my py2.4 script:
>
> from backport26.os.path import relpath

It's not a bad idea. A key benefit seems to be that it can be done by
anyone, whether or not they are a core developer. So it can be set up
right now, without taking up any of the limited core-dev resources.

Of course, conversely, the disadvantage is that nobody's done this
already, implying that either nobody's thought of it before or there's
actually little motivation for someone to put the work into such a
solution :-) If nobody's thought of it, you may be lucky and someone
will pick up the idea and run with it.

Paul.


From solipsis at pitrou.net  Wed Jun  9 14:40:57 2010
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 9 Jun 2010 14:40:57 +0200
Subject: [Python-ideas] Moving development out of the standard library
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>
	<AANLkTimO5N5xyyXXr_xmvI-IFJx2Tmcwg2NScuoz5l8K@mail.gmail.com>
	<AANLkTinWEflE9EL-CYD41h2yfPT4ocndTEyeSm3fGvuD@mail.gmail.com>
	<AANLkTikm-KJNt37PuJFNJiNxnIGw0XlsofCnB2ydZGBn@mail.gmail.com>
	<AANLkTinVyFfkTPFSkeiNMHWD4OM-jQKtXGF6b4J4G8Ri@mail.gmail.com>
	<AANLkTilpOhcCbDaZ4p4OsZNmrFDyNhyZlbRcbfD5w20a@mail.gmail.com>
	<4C0D62B0.4020000@trueblade.com> <hum5u9$fl9$1@dough.gmane.org>
	<4C0EAE83.30300@gmail.com>
	<AANLkTinTguZqJJbcL11z69PCseIcEIslnjniCsLuqcUg@mail.gmail.com>
	<AANLkTinjA4zc33_Pccv2xvy1su-L9Nqhsi6YQzsbK0Im@mail.gmail.com>
	<878w6pgi8m.fsf@uwakimon.sk.tsukuba.ac.jp>
	<AANLkTikOPJ-SqWyS1cNjmBXT12lDb0gDOarMnoW4IyjJ@mail.gmail.com>
	<87zkz4fw9p.fsf@uwakimon.sk.tsukuba.ac.jp>
	<AANLkTikg90bU9vF1GBuv2JR-xm2InnN_feuC85RPtvTv@mail.gmail.com>
	<4C0F761D.80600@gmail.com>
	<AANLkTikcvc-1sVySF8uuNRWCJJEESnLAPiMcFUSVTFpt@mail.gmail.com>
	<4C0F857F.4060509@gmail.com>
Message-ID: <20100609144057.65e92226@pitrou.net>

On Wed, 09 Jun 2010 22:13:51 +1000
Nick Coghlan <ncoghlan at gmail.com> wrote:
> 
> Make no mistake, something like this would be quite a bit of work, but 
> it would be far from impossible and could go a long way towards fighting 
> the impression that the standard library is the place where modules go 
> to die.

Isn't that "impression" largely constructed, and propagated by a
limited number of people who apparently don't like the very idea of a
"batteries included" stdlib?  There has been an amount of anti-stdlib
activism (including in this thread) that I find both antagonizing and
unconstructive.  Outside of that vocal minority, there doesn't seem to
be that much criticism against the stdlib.

The reality is that there are regularly feature requests on the tracker,
and many of them get accepted and committed (of course, when no
patch is submitted and no core developer is interested, things have a
tendency to linger on; but it's the same for outside libraries too;
take a look at bug trackers for e.g. nose or twisted, and you'll see
many open entries that have been resting for years).

Regards

Antoine.




From konryd at gmail.com  Wed Jun  9 14:59:50 2010
From: konryd at gmail.com (Konrad Delong)
Date: Wed, 9 Jun 2010 14:59:50 +0200
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <AANLkTin0EnHxhDZD6hoCBR9f_-GPm1Rj8eRvVgPDHPUt@mail.gmail.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com> 
	<20100607201434.66d9bdbd@pitrou.net>
	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com> 
	<20100607205207.5532b939@pitrou.net>
	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com> 
	<20100607215622.56961a58@pitrou.net>
	<AANLkTim0wznVxRsgURSbPQPAcIHzYTjyyJ44xyXbPPl_@mail.gmail.com> 
	<AANLkTik2syjva_o1Dvothtk0mU-xqQ8ahnr3BM7jtird@mail.gmail.com> 
	<AANLkTin0EnHxhDZD6hoCBR9f_-GPm1Rj8eRvVgPDHPUt@mail.gmail.com>
Message-ID: <AANLkTikFvi2dzWwPvWSWwjtvFyZiftdKnSkUxSzQX6EK@mail.gmail.com>

>> then I could write in my py2.4 script:
>>
>> from backport26.os.path import relpath
>
> It's not a bad idea. A key benefit seems to be that it can be done by
> anyone, whether or not they are a core developer. So it can be set up
> right now, without taking up any of the limited core-dev resource.
>
> Of course, conversely, the disadvantage is that nobody's done this
> already, implying that either nobody's thought of it before or there's
> actually little motivation for someone to put the work into such a
> solution :-)

Yeah, I am aware of that :-)
Another question is whether such a package is going to find use. I
doubt Michael would introduce a dependency into unittest2 just to get
functools.wraps and os.path.relpath out of his code. Distutils2
contains a whole _backport module [1] which could go away, but again:
at the cost of introducing a dependency into the package.

Konrad

[1] http://bitbucket.org/tarek/distutils2/src/tip/src/distutils2/_backport/


From ncoghlan at gmail.com  Wed Jun  9 15:08:43 2010
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 09 Jun 2010 23:08:43 +1000
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <20100609144057.65e92226@pitrou.net>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>	<AANLkTimO5N5xyyXXr_xmvI-IFJx2Tmcwg2NScuoz5l8K@mail.gmail.com>	<AANLkTinWEflE9EL-CYD41h2yfPT4ocndTEyeSm3fGvuD@mail.gmail.com>	<AANLkTikm-KJNt37PuJFNJiNxnIGw0XlsofCnB2ydZGBn@mail.gmail.com>	<AANLkTinVyFfkTPFSkeiNMHWD4OM-jQKtXGF6b4J4G8Ri@mail.gmail.com>	<AANLkTilpOhcCbDaZ4p4OsZNmrFDyNhyZlbRcbfD5w20a@mail.gmail.com>	<4C0D62B0.4020000@trueblade.com>
	<hum5u9$fl9$1@dough.gmane.org>	<4C0EAE83.30300@gmail.com>	<AANLkTinTguZqJJbcL11z69PCseIcEIslnjniCsLuqcUg@mail.gmail.com>	<AANLkTinjA4zc33_Pccv2xvy1su-L9Nqhsi6YQzsbK0Im@mail.gmail.com>	<878w6pgi8m.fsf@uwakimon.sk.tsukuba.ac.jp>	<AANLkTikOPJ-SqWyS1cNjmBXT12lDb0gDOarMnoW4IyjJ@mail.gmail.com>	<87zkz4fw9p.fsf@uwakimon
	.sk.tsukuba.ac.jp>	<AANLkTikg90bU9vF1GBuv2JR-xm2InnN_feuC85RPtvTv@mail.gmail.com>	<4C0F761D.80600@gmail.com>	<AANLkTikcvc-1sVySF8uuNRWCJJEESnLAPiMcFUSVTFpt@mail.gmail.com>	<4C0F857F.4060509@gmail.com>
	<20100609144057.65e92226@pitrou.net>
Message-ID: <4C0F925B.2060607@gmail.com>

On 09/06/10 22:40, Antoine Pitrou wrote:
> Isn't that "impression" largely constructed, and propagated by a
> limited number of people who apparently don't like the very idea of a
> "batteries included" stdlib?  There has been an amount of anti-stdlib
> activism (including in this thread) that I find both antagonizing and
> unconstructive.  Outside of that vocal minority, there doesn't seem to
> be that much criticism against the stdlib.

The "where modules go to die" version of it is overstated, but the 
standard library definitely evolves more slowly than many third party 
packages.

To use numpy as an example (just going off their SF file dates):
   Dec 2007: 1.0.4
   May 2008: 1.1.0
   Sep 2008: 1.2.0
   Jul 2009: 1.3.0
   Apr 2010: 1.4.1

Faster cycle times allow developers to be much more responsive to 
feedback when changes don't turn out as well as was hoped. The 
comparatively slow evolution of the standard library is the grain of 
truth that underlies the exaggerated form. The trick is to explore 
avenues that make these faster cycle times available to those that want 
them, while still providing a stable foundation for those that need it.

It is exactly this situation that the Ubuntu release cycle is designed 
around: regular 6-monthly releases for most people, less frequent Long 
Term Support releases for those that need stability.

That said, the status quo, with ad hoc porting to PyPI by module 
maintainers who consider doing so to be worthwhile, is certainly a 
viable option going forward as well.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------


From fdrake at acm.org  Wed Jun  9 15:10:23 2010
From: fdrake at acm.org (Fred Drake)
Date: Wed, 9 Jun 2010 09:10:23 -0400
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <4C0F761D.80600@gmail.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com> 
	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com> 
	<20100607205207.5532b939@pitrou.net>
	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com> 
	<AANLkTimO5N5xyyXXr_xmvI-IFJx2Tmcwg2NScuoz5l8K@mail.gmail.com> 
	<AANLkTinWEflE9EL-CYD41h2yfPT4ocndTEyeSm3fGvuD@mail.gmail.com> 
	<AANLkTikm-KJNt37PuJFNJiNxnIGw0XlsofCnB2ydZGBn@mail.gmail.com> 
	<AANLkTinVyFfkTPFSkeiNMHWD4OM-jQKtXGF6b4J4G8Ri@mail.gmail.com> 
	<AANLkTilpOhcCbDaZ4p4OsZNmrFDyNhyZlbRcbfD5w20a@mail.gmail.com> 
	<4C0D62B0.4020000@trueblade.com> <hum5u9$fl9$1@dough.gmane.org> 
	<4C0EAE83.30300@gmail.com>
	<AANLkTinTguZqJJbcL11z69PCseIcEIslnjniCsLuqcUg@mail.gmail.com> 
	<AANLkTinjA4zc33_Pccv2xvy1su-L9Nqhsi6YQzsbK0Im@mail.gmail.com> 
	<878w6pgi8m.fsf@uwakimon.sk.tsukuba.ac.jp>
	<AANLkTikOPJ-SqWyS1cNjmBXT12lDb0gDOarMnoW4IyjJ@mail.gmail.com> 
	<87zkz4fw9p.fsf@uwakimon.sk.tsukuba.ac.jp>
	<AANLkTikg90bU9vF1GBuv2JR-xm2InnN_feuC85RPtvTv@mail.gmail.com> 
	<4C0F761D.80600@gmail.com>
Message-ID: <AANLkTilQTjDCMEPKDo54ylKk7b4D19rxPysWvHR32Hfo@mail.gmail.com>

On Wed, Jun 9, 2010 at 7:08 AM, Nick Coghlan <ncoghlan at gmail.com> wrote:
> Something we may want to seriously consider is maintaining parallel releases
> of packages indefinitely when the benefits are deemed to justify the
> additional overheads.

And this provides an opportunity to bring the workload back under
control, as well: if we continue the non-stdlib releases for these
packages, all we really need to do is remember not to add them to the
stdlib in the first place.

The one case where we clearly get some win by having the newer package
rolled in is distutils2, since that's really about a core service that
needs to be available for everyone.


  -Fred

-- 
Fred L. Drake, Jr.    <fdrake at gmail.com>
"Chaos is the score upon which reality is written." --Henry Miller


From fdrake at acm.org  Wed Jun  9 15:19:41 2010
From: fdrake at acm.org (Fred Drake)
Date: Wed, 9 Jun 2010 09:19:41 -0400
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <20100609144057.65e92226@pitrou.net>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com> 
	<AANLkTimO5N5xyyXXr_xmvI-IFJx2Tmcwg2NScuoz5l8K@mail.gmail.com> 
	<AANLkTinWEflE9EL-CYD41h2yfPT4ocndTEyeSm3fGvuD@mail.gmail.com> 
	<AANLkTikm-KJNt37PuJFNJiNxnIGw0XlsofCnB2ydZGBn@mail.gmail.com> 
	<AANLkTinVyFfkTPFSkeiNMHWD4OM-jQKtXGF6b4J4G8Ri@mail.gmail.com> 
	<AANLkTilpOhcCbDaZ4p4OsZNmrFDyNhyZlbRcbfD5w20a@mail.gmail.com> 
	<4C0D62B0.4020000@trueblade.com> <hum5u9$fl9$1@dough.gmane.org> 
	<4C0EAE83.30300@gmail.com>
	<AANLkTinTguZqJJbcL11z69PCseIcEIslnjniCsLuqcUg@mail.gmail.com> 
	<AANLkTinjA4zc33_Pccv2xvy1su-L9Nqhsi6YQzsbK0Im@mail.gmail.com> 
	<878w6pgi8m.fsf@uwakimon.sk.tsukuba.ac.jp>
	<AANLkTikOPJ-SqWyS1cNjmBXT12lDb0gDOarMnoW4IyjJ@mail.gmail.com> 
	<AANLkTikg90bU9vF1GBuv2JR-xm2InnN_feuC85RPtvTv@mail.gmail.com> 
	<4C0F761D.80600@gmail.com>
	<AANLkTikcvc-1sVySF8uuNRWCJJEESnLAPiMcFUSVTFpt@mail.gmail.com> 
	<4C0F857F.4060509@gmail.com> <20100609144057.65e92226@pitrou.net>
Message-ID: <AANLkTil6cipiRkx4bBsd08fWPjBZsH-mQb_WxqWr_K63@mail.gmail.com>

On Wed, Jun 9, 2010 at 8:40 AM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> Isn't that "impression" largely constructed, and propagated by a
> limited number of people who apparently don't like the very idea of a
> "batteries included" stdlib?

I don't think so.  I've no particular dislike of "batteries included"
per se.  What I don't like is dealing with packages that may or may not
be in the standard library (that affects the requirements for my
software), or that may have different names or update policies depending
on whether they're part of the standard library (because that affects my
code, always negatively).

> There has been an amount of anti-stdlib
> activism (including in this thread) that I find both antagonizing and
> unconstructive.  Outside of that vocal minority, there doesn't seem to
> be that much criticism against the stdlib.

Unconstructive in what way?

Writing cross-Python-version code that deals with the differences
between the stdlib and 3rd-party versions of packages is certainly
unconstructive, but that's an argument to avoid moving packages into the
standard library.

One thing that seems to be happening is that the so-called "vocal
minority" is growing.  I think that should be expected as the acceptance
of Python and applications built on it gain wider penetration.


  -Fred

-- 
Fred L. Drake, Jr.    <fdrake at gmail.com>
"Chaos is the score upon which reality is written." --Henry Miller


From solipsis at pitrou.net  Wed Jun  9 15:34:23 2010
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 09 Jun 2010 15:34:23 +0200
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <AANLkTil6cipiRkx4bBsd08fWPjBZsH-mQb_WxqWr_K63@mail.gmail.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>
	<AANLkTimO5N5xyyXXr_xmvI-IFJx2Tmcwg2NScuoz5l8K@mail.gmail.com>
	<AANLkTinWEflE9EL-CYD41h2yfPT4ocndTEyeSm3fGvuD@mail.gmail.com>
	<AANLkTikm-KJNt37PuJFNJiNxnIGw0XlsofCnB2ydZGBn@mail.gmail.com>
	<AANLkTinVyFfkTPFSkeiNMHWD4OM-jQKtXGF6b4J4G8Ri@mail.gmail.com>
	<AANLkTilpOhcCbDaZ4p4OsZNmrFDyNhyZlbRcbfD5w20a@mail.gmail.com>
	<4C0D62B0.4020000@trueblade.com> <hum5u9$fl9$1@dough.gmane.org>
	<4C0EAE83.30300@gmail.com>
	<AANLkTinTguZqJJbcL11z69PCseIcEIslnjniCsLuqcUg@mail.gmail.com>
	<AANLkTinjA4zc33_Pccv2xvy1su-L9Nqhsi6YQzsbK0Im@mail.gmail.com>
	<878w6pgi8m.fsf@uwakimon.sk.tsukuba.ac.jp>
	<AANLkTikOPJ-SqWyS1cNjmBXT12lDb0gDOarMnoW4IyjJ@mail.gmail.com>
	<AANLkTikg90bU9vF1GBuv2JR-xm2InnN_feuC85RPtvTv@mail.gmail.com>
	<4C0F761D.80600@gmail.com>
	<AANLkTikcvc-1sVySF8uuNRWCJJEESnLAPiMcFUSVTFpt@mail.gmail.com>
	<4C0F857F.4060509@gmail.com> <20100609144057.65e92226@pitrou.net>
	<AANLkTil6cipiRkx4bBsd08fWPjBZsH-mQb_WxqWr_K63@mail.gmail.com>
Message-ID: <1276090463.3143.7.camel@localhost.localdomain>

On Wednesday, June 9, 2010 at 09:19 -0400, Fred Drake wrote:
> 
> I don't think so.  I've no particular dislike of "batteries included"
> per se.  What I don't like is dealing with packages that may or may not
> be in the standard library (that affects the requirements for my
> software),

I don't understand what you mean. Are these packages in the stdlib or
aren't they? It can't be both.

> Writing cross-Python-version code that deals with the differences
> between the stdlib and 3rd-party versions of packages is certainly
> unconstructive, but that's an argument to avoid moving packages into the
> standard library.

I don't see how, really. You make it sound like you have to deal with
several versions of the stdlib, but not with several versions of
external packages. I wonder why: do you force your users to install
version X of module Y?

More generally, it seems that people are criticizing the stdlib for
things that are equally true of non-stdlib modules. This is quite
bewildering to me.




From p.f.moore at gmail.com  Wed Jun  9 15:59:55 2010
From: p.f.moore at gmail.com (Paul Moore)
Date: Wed, 9 Jun 2010 14:59:55 +0100
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <20100609144057.65e92226@pitrou.net>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>
	<AANLkTimO5N5xyyXXr_xmvI-IFJx2Tmcwg2NScuoz5l8K@mail.gmail.com>
	<AANLkTinWEflE9EL-CYD41h2yfPT4ocndTEyeSm3fGvuD@mail.gmail.com>
	<AANLkTikm-KJNt37PuJFNJiNxnIGw0XlsofCnB2ydZGBn@mail.gmail.com>
	<AANLkTinVyFfkTPFSkeiNMHWD4OM-jQKtXGF6b4J4G8Ri@mail.gmail.com>
	<AANLkTilpOhcCbDaZ4p4OsZNmrFDyNhyZlbRcbfD5w20a@mail.gmail.com>
	<4C0D62B0.4020000@trueblade.com> <hum5u9$fl9$1@dough.gmane.org>
	<4C0EAE83.30300@gmail.com>
	<AANLkTinTguZqJJbcL11z69PCseIcEIslnjniCsLuqcUg@mail.gmail.com>
	<AANLkTinjA4zc33_Pccv2xvy1su-L9Nqhsi6YQzsbK0Im@mail.gmail.com>
	<878w6pgi8m.fsf@uwakimon.sk.tsukuba.ac.jp>
	<AANLkTikOPJ-SqWyS1cNjmBXT12lDb0gDOarMnoW4IyjJ@mail.gmail.com>
	<AANLkTikg90bU9vF1GBuv2JR-xm2InnN_feuC85RPtvTv@mail.gmail.com>
	<4C0F761D.80600@gmail.com>
	<AANLkTikcvc-1sVySF8uuNRWCJJEESnLAPiMcFUSVTFpt@mail.gmail.com>
	<4C0F857F.4060509@gmail.com> <20100609144057.65e92226@pitrou.net>
Message-ID: <AANLkTilXJZg5WjnaYyZCQ9K6bhDVcnubuHDSmlv5OrU7@mail.gmail.com>

On 9 June 2010 13:40, Antoine Pitrou <solipsis at pitrou.net> wrote:
> On Wed, 09 Jun 2010 22:13:51 +1000
> Nick Coghlan <ncoghlan at gmail.com> wrote:
>>
>> Make no mistake, something like this would be quite a bit of work, but
>> it would be far from impossible and could go a long way towards fighting
>> the impression that the standard library is the place where modules go
>> to die.
>
> Isn't that "impression" largely constructed, and propagated by a
> limited number of people who apparently don't like the very idea of a
> "batteries included" stdlib?  There has been an amount of anti-stdlib
> activism (including in this thread) that I find both antagonizing and
> unconstructive.  Outside of that vocal minority, there doesn't seem to
> be that much criticism against the stdlib.

I agree - I think the "where modules go to die" argument is very
overstated (but sadly, just as a result of repetition, it seems to be
gaining traction :-(). Certainly, stdlib modules evolve at a slower
rate than 3rd party ones in many cases. But they do evolve, as Antoine
points out, and the slower evolution can just as easily be viewed as
stability.

What I don't understand is why the "activists" actually care if the
stdlib is big or small. Surely if you don't like the "fat stdlib",
just ignore it? Why inconvenience those of us who find it a benefit?

So can someone clarify (from the point of view of a "thin stdlib"
proponent) - what is the benefit to you, personally (i.e., ignoring
things like "frees up core developer time" - let them speak for
themselves if they feel that is a benefit), of actually removing items
from the stdlib, rather than just ignoring them?

Paul.


From ianb at colorstudy.com  Wed Jun  9 17:17:52 2010
From: ianb at colorstudy.com (Ian Bicking)
Date: Wed, 9 Jun 2010 10:17:52 -0500
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <20100609133843.14c6c19f@pitrou.net>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com> 
	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com> 
	<AANLkTimO5N5xyyXXr_xmvI-IFJx2Tmcwg2NScuoz5l8K@mail.gmail.com> 
	<AANLkTinWEflE9EL-CYD41h2yfPT4ocndTEyeSm3fGvuD@mail.gmail.com> 
	<AANLkTikm-KJNt37PuJFNJiNxnIGw0XlsofCnB2ydZGBn@mail.gmail.com> 
	<AANLkTinVyFfkTPFSkeiNMHWD4OM-jQKtXGF6b4J4G8Ri@mail.gmail.com> 
	<AANLkTilpOhcCbDaZ4p4OsZNmrFDyNhyZlbRcbfD5w20a@mail.gmail.com> 
	<4C0D62B0.4020000@trueblade.com> <hum5u9$fl9$1@dough.gmane.org> 
	<4C0EAE83.30300@gmail.com>
	<AANLkTinTguZqJJbcL11z69PCseIcEIslnjniCsLuqcUg@mail.gmail.com> 
	<AANLkTinjA4zc33_Pccv2xvy1su-L9Nqhsi6YQzsbK0Im@mail.gmail.com> 
	<878w6pgi8m.fsf@uwakimon.sk.tsukuba.ac.jp>
	<AANLkTikOPJ-SqWyS1cNjmBXT12lDb0gDOarMnoW4IyjJ@mail.gmail.com> 
	<87zkz4fw9p.fsf@uwakimon.sk.tsukuba.ac.jp>
	<AANLkTikg90bU9vF1GBuv2JR-xm2InnN_feuC85RPtvTv@mail.gmail.com> 
	<4C0F761D.80600@gmail.com>
	<AANLkTikcvc-1sVySF8uuNRWCJJEESnLAPiMcFUSVTFpt@mail.gmail.com> 
	<20100609133843.14c6c19f@pitrou.net>
Message-ID: <AANLkTimlGPTpD-13mvjWNY0mzMMvfpOLxu0JFkaJjjoH@mail.gmail.com>

On Wed, Jun 9, 2010 at 6:38 AM, Antoine Pitrou <solipsis at pitrou.net> wrote:

> On Wed, 9 Jun 2010 07:34:56 -0400
> David Stanek <dstanek at dstanek.com> wrote:
> >
> > I had a very similar thought. Why not have all the real development of
> > those packages happen outside of the standard lib and just grab the
> > latest stable version when cutting a new version of Python.
>
> -1.  We have had too much trouble with externally-maintained modules
> such as elementtree and json.  The Python SVN (or hg) tree should be the
> primary place where development takes place.
>

New releases could also be cut from the Python tree.  I believe everyone
here agrees that entering the standard library in any form should imply a
greater sense of collective ownership of a package.

-- 
Ian Bicking  |  http://blog.ianbicking.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20100609/69f3d20c/attachment.html>

From tjreedy at udel.edu  Wed Jun  9 20:59:45 2010
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed, 09 Jun 2010 14:59:45 -0400
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <AANLkTikg90bU9vF1GBuv2JR-xm2InnN_feuC85RPtvTv@mail.gmail.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>	<AANLkTimDrVEKn6Z8tQ4LU5NiumJQX6wTPrHZMiZMRtWH@mail.gmail.com>	<20100607205207.5532b939@pitrou.net>	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com>	<AANLkTimO5N5xyyXXr_xmvI-IFJx2Tmcwg2NScuoz5l8K@mail.gmail.com>	<AANLkTinWEflE9EL-CYD41h2yfPT4ocndTEyeSm3fGvuD@mail.gmail.com>	<AANLkTikm-KJNt37PuJFNJiNxnIGw0XlsofCnB2ydZGBn@mail.gmail.com>	<AANLkTinVyFfkTPFSkeiNMHWD4OM-jQKtXGF6b4J4G8Ri@mail.gmail.com>	<AANLkTilpOhcCbDaZ4p4OsZNmrFDyNhyZlbRcbfD5w20a@mail.gmail.com>	<4C0D62B0.4020000@trueblade.com>
	<hum5u9$fl9$1@dough.gmane.org>	<4C0EAE83.30300@gmail.com>	<AANLkTinTguZqJJbcL11z69PCseIcEIslnjniCsLuqcUg@mail.gmail.com>	<AANLkTinjA4zc33_Pccv2xvy1su-L9Nqhsi6YQzsbK0Im@mail.gmail.com>	<878w6pgi8m.fsf@uwakimon.sk.tsukuba.ac.jp>	<AANLkTikOPJ-SqWyS1cNjmBXT12lDb0gDOarMnoW4IyjJ@mail.gmail.com>	<87zkz4fw9p.fsf@uwakimon.sk.tsukuba.ac.jp>
	<AANLkTikg90bU9vF1GBuv2JR-xm2InnN_feuC85RPtvTv@mail.gmail.com>
Message-ID: <huoob0$43d$1@dough.gmane.org>

On 6/9/2010 5:05 AM, Tarek Ziadé wrote:

> To try to find a solution to the problems described in this thread earlier.
>
> If I summarize so far the threads as I understood them, people don't
> want to rely
> on stdlib packages because their release cycles are too slow for what
> they want/need to do with it *today*. A package that enters the stdlib
> suffers from being slowed down. That's also a huge benefit for many reasons:
> stability, blessing, etc.
>
> The initial reason is that Ian doesn't want Pip to depend on distutils2 if
> it's in the stdlib, because he will have to cope with various versions
> of Python to make sure his users will have the same set of features I guess.
>
> So he needs to provide his own backports of any new distutils2 features.
>
> If we can find a way to facilitate this work, that would be great. IOW, if
> we can provide somehow a backport of these features so some projects
> can use it no matter what the python version is...
>
> And "not putting distutils2 in the stdlib" is not the solution because
> this is a problem
> for all packages in there.
>
> That's exactly what unittest currently does (but with a new name, "unittest2"),
> and as soon as Python 2.7 final is out, unittest will have the
> same problem:
> it won't be able to backport new features anymore under the same namespace.

I do not see that changing the 'no new features in micro (bugfix) 
releases' policy would solve Ian's problem at all. Suppose unittest2 
were sufficiently done in September to go into 3.2 as unittest. Suppose 
the policy were changed and unittest2 were also backported into 
(3.1.final) and 2.7.1. That still would not help Ian with respect to 
2.6, 2.5, 2.4, and however far back he wants to support. Since whatever 
solution he uses for 2.6- should also work for 2.7 (and 3.1) what is the 
use?

Perhaps the reason for the policy needs to be restated: if new features 
are introduced in every x.y.z release, then 'Python x.y' has no 
particular meaning. This *was* the case somewhat during Python 1 and 
early Python 2 days, when the choice between moving from x.y.z to either 
x.y.(z+1) and x.(y+1) was somewhat arbitrary. (The move from '1.6' to 
'2.0' rather than '1.7' was for legal reasons only. Python2 really began 
with 2.2)

The precipitating event for the new policy was the introduction of 
bool() and the prebound True and False late in the 2.2 series. People who 
downloaded the latest '2.2' release and used the new feature found 
that their code would not run on the great majority of '2.2' 
installations. It caused enough problems for enough people that Guido 
decided that he should have waited for 2.3 for the official introduction, 
and perhaps first released bool as a separate module for earlier use.

People who propose to change the policy back (or part way back) to what 
it used to be should at least be aware that it is a reversion and that 
there are reasons, and not just arbitrary whim or accidental 
happenstance, for the change.

Terry Jan Reedy




From tjreedy at udel.edu  Wed Jun  9 21:10:40 2010
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed, 09 Jun 2010 15:10:40 -0400
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <AANLkTil6cipiRkx4bBsd08fWPjBZsH-mQb_WxqWr_K63@mail.gmail.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>
	<AANLkTinWEflE9EL-CYD41h2yfPT4ocndTEyeSm3fGvuD@mail.gmail.com>
	<AANLkTikm-KJNt37PuJFNJiNxnIGw0XlsofCnB2ydZGBn@mail.gmail.com>
	<AANLkTinVyFfkTPFSkeiNMHWD4OM-jQKtXGF6b4J4G8Ri@mail.gmail.com>
	<AANLkTilpOhcCbDaZ4p4OsZNmrFDyNhyZlbRcbfD5w20a@mail.gmail.com>
	<4C0D62B0.4020000@trueblade.com> <hum5u9$fl9$1@dough.gmane.org>
	<4C0EAE83.30300@gmail.com>	<AANLkTinTguZqJJbcL11z69PCseIcEIslnjniCsLuqcUg@mail.gmail.com>
	<AANLkTinjA4zc33_Pccv2xvy1su-L9Nqhsi6YQzsbK0Im@mail.gmail.com>
	<878w6pgi8m.fsf@uwakimon.sk.tsukuba.ac.jp>	<AANLkTikOPJ-SqWyS1cNjmBXT12lDb0gDOarMnoW4IyjJ@mail.gmail.com>
	<AANLkTikg90bU9vF1GBuv2JR-xm2InnN_feuC85RPtvTv@mail.gmail.com>
	<4C0F761D.80600@gmail.com>	<AANLkTikcvc-1sVySF8uuNRWCJJEESnLAPiMcFUSVTFpt@mail.gmail.com>
	<4C0F857F.4060509@gmail.com> <20100609144057.65e92226@pitrou.net>
	<AANLkTil6cipiRkx4bBsd08fWPjBZsH-mQb_WxqWr_K63@mail.gmail.com>
Message-ID: <huoove$7vo$1@dough.gmane.org>

On 6/9/2010 9:19 AM, Fred Drake wrote:

> One thing that seems to be happening is that the so-called "vocal
> minority" is growing.  I think that should be expected as the acceptance
> of Python and applications built on it gain wider penetration.

Yes, as N grows, any constant fraction f times N will grow. In fact, f*N 
can grow even if f shrinks, but just more slowly than N grows.

Terry Jan Reedy



From fdrake at acm.org  Wed Jun  9 21:20:23 2010
From: fdrake at acm.org (Fred Drake)
Date: Wed, 9 Jun 2010 15:20:23 -0400
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <huoove$7vo$1@dough.gmane.org>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com> 
	<AANLkTinWEflE9EL-CYD41h2yfPT4ocndTEyeSm3fGvuD@mail.gmail.com> 
	<AANLkTikm-KJNt37PuJFNJiNxnIGw0XlsofCnB2ydZGBn@mail.gmail.com> 
	<AANLkTinVyFfkTPFSkeiNMHWD4OM-jQKtXGF6b4J4G8Ri@mail.gmail.com> 
	<AANLkTilpOhcCbDaZ4p4OsZNmrFDyNhyZlbRcbfD5w20a@mail.gmail.com> 
	<4C0D62B0.4020000@trueblade.com> <hum5u9$fl9$1@dough.gmane.org> 
	<4C0EAE83.30300@gmail.com>
	<AANLkTinTguZqJJbcL11z69PCseIcEIslnjniCsLuqcUg@mail.gmail.com> 
	<AANLkTinjA4zc33_Pccv2xvy1su-L9Nqhsi6YQzsbK0Im@mail.gmail.com> 
	<878w6pgi8m.fsf@uwakimon.sk.tsukuba.ac.jp>
	<AANLkTikOPJ-SqWyS1cNjmBXT12lDb0gDOarMnoW4IyjJ@mail.gmail.com> 
	<AANLkTikg90bU9vF1GBuv2JR-xm2InnN_feuC85RPtvTv@mail.gmail.com> 
	<4C0F761D.80600@gmail.com>
	<AANLkTikcvc-1sVySF8uuNRWCJJEESnLAPiMcFUSVTFpt@mail.gmail.com> 
	<4C0F857F.4060509@gmail.com> <20100609144057.65e92226@pitrou.net> 
	<AANLkTil6cipiRkx4bBsd08fWPjBZsH-mQb_WxqWr_K63@mail.gmail.com> 
	<huoove$7vo$1@dough.gmane.org>
Message-ID: <AANLkTilBCEXJXDlgEVV66T87mjz1ubarx5HmkR3M0a5K@mail.gmail.com>

On Wed, Jun 9, 2010 at 3:10 PM, Terry Reedy <tjreedy at udel.edu> wrote:
> Yes, as N grows, any constant fraction f times N will grow. In fact, f*N can
> grow even if f shrinks. but just more slowly than N grows.

True.  Since there's no empirical measure of either N (number of
Python users) or VM (the Vocal Minority), it's hard to tell if f is a
constant fraction or something more (or less) interesting.

I also suspect that VM is less likely to be readily measurable than N.
For those in the VM, the severity of their objections only increases
over time, until they exit both N and VM.


  -Fred

-- 
Fred L. Drake, Jr.    <fdrake at gmail.com>
"Chaos is the score upon which reality is written." --Henry Miller


From brett at python.org  Wed Jun  9 21:52:55 2010
From: brett at python.org (Brett Cannon)
Date: Wed, 9 Jun 2010 12:52:55 -0700
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <AANLkTimlGPTpD-13mvjWNY0mzMMvfpOLxu0JFkaJjjoH@mail.gmail.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com> 
	<AANLkTinTM9KLR2yyXlNtI01JER-haiwdcIig4DwIqyc8@mail.gmail.com> 
	<AANLkTimO5N5xyyXXr_xmvI-IFJx2Tmcwg2NScuoz5l8K@mail.gmail.com> 
	<AANLkTinWEflE9EL-CYD41h2yfPT4ocndTEyeSm3fGvuD@mail.gmail.com> 
	<AANLkTikm-KJNt37PuJFNJiNxnIGw0XlsofCnB2ydZGBn@mail.gmail.com> 
	<AANLkTinVyFfkTPFSkeiNMHWD4OM-jQKtXGF6b4J4G8Ri@mail.gmail.com> 
	<AANLkTilpOhcCbDaZ4p4OsZNmrFDyNhyZlbRcbfD5w20a@mail.gmail.com> 
	<4C0D62B0.4020000@trueblade.com> <hum5u9$fl9$1@dough.gmane.org> 
	<4C0EAE83.30300@gmail.com>
	<AANLkTinTguZqJJbcL11z69PCseIcEIslnjniCsLuqcUg@mail.gmail.com> 
	<AANLkTinjA4zc33_Pccv2xvy1su-L9Nqhsi6YQzsbK0Im@mail.gmail.com> 
	<878w6pgi8m.fsf@uwakimon.sk.tsukuba.ac.jp>
	<AANLkTikOPJ-SqWyS1cNjmBXT12lDb0gDOarMnoW4IyjJ@mail.gmail.com> 
	<87zkz4fw9p.fsf@uwakimon.sk.tsukuba.ac.jp>
	<AANLkTikg90bU9vF1GBuv2JR-xm2InnN_feuC85RPtvTv@mail.gmail.com> 
	<4C0F761D.80600@gmail.com>
	<AANLkTikcvc-1sVySF8uuNRWCJJEESnLAPiMcFUSVTFpt@mail.gmail.com> 
	<20100609133843.14c6c19f@pitrou.net>
	<AANLkTimlGPTpD-13mvjWNY0mzMMvfpOLxu0JFkaJjjoH@mail.gmail.com>
Message-ID: <AANLkTikevcgZcAEKADMM4KbS0utmV7XUA1VJcPt0pgJz@mail.gmail.com>

On Wed, Jun 9, 2010 at 08:17, Ian Bicking <ianb at colorstudy.com> wrote:
> On Wed, Jun 9, 2010 at 6:38 AM, Antoine Pitrou <solipsis at pitrou.net> wrote:
>>
>> On Wed, 9 Jun 2010 07:34:56 -0400
>> David Stanek <dstanek at dstanek.com> wrote:
>> >
>> > I had a very similar thought. Why not have all the real development of
>> > those packages happen outside of the standard lib and just grab the
>> > latest stable version when cutting a new version of Python.
>>
>> -1.  We have had too much trouble with externally-maintained modules
>> such as elementtree and json.  The Python SVN (or hg) tree should be the
>> primary place where development takes place.
>
> New releases could also be cut from the Python tree.  I believe everyone
> here agrees that entering the standard library in any form should imply a
> greater sense of collective ownership of a package.

But this makes the assumption that core developers are going to choose
to develop modules such that they can be released before the next
release of Python goes out. There seems to be a lot of extrapolation
from the fact that Michael takes the time to do unittest2 (which, now
that I think about it, should probably have been named
unittest_external or something as it really isn't a sequel to
unittest, just an external release) and that Tarek is doing the
initial development of distutils2 externally. Out of all the core
developers that is not that many. Sure you could maybe toss in
ElementTree, but that probably is it (e.g. simplejson only gets new
releases when Bob fixes stuff for a new Python release or simply makes
an external release of stdlib fixes).

For me, importlib will never work in this environment. When new
features in modules or the language come in I try to update code to
use it when I can. That means that importlib is not going to have a
stable release cycle outside of Python's. Nor am I going to be willing
to change that practice as that makes development harder for me -- I
don't want to have to check if some new feature has been made
available externally -- and honestly more boring -- I like using the
new features of the language as that helps make core development fun.

The only way I see this working is for individual core developers to
decide they want to keep a module dependent only on the last minor
release of Python (or older). At that point you can try to convince
the developers to use a common package name (stdlib, py, stdlib_ext,
etc.) and then use namespace packages or pkgutil.extend_path to pull
them in under the common package name to signify that they are "early"
releases of the modules from the stdlib. That gives you an easier way
to gauge usage as you can look at use of the package as a whole to see
what kind of community pickup there is.
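
The pkgutil.extend_path half of that suggestion can be sketched end to
end. The distribution and module names below are invented for
illustration; the point is that two independently installed copies of a
package named stdlib_ext merge into one importable namespace:

```python
import os
import sys
import tempfile

root = tempfile.mkdtemp()
for dist, module in [("dist_a", "mod_a"), ("dist_b", "mod_b")]:
    pkg_dir = os.path.join(root, dist, "stdlib_ext")
    os.makedirs(pkg_dir)
    # Every copy of the package carries the same two-line __init__:
    # extend_path scans sys.path for other "stdlib_ext" directories
    # and appends them to this package's __path__.
    with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
        f.write("from pkgutil import extend_path\n"
                "__path__ = extend_path(__path__, __name__)\n")
    with open(os.path.join(pkg_dir, module + ".py"), "w") as f:
        f.write("marker = %r\n" % dist)
    sys.path.append(os.path.join(root, dist))

# Both submodules resolve under the shared top-level name, even though
# they were installed by different "distributions".
from stdlib_ext import mod_a, mod_b
print(mod_a.marker, mod_b.marker)
```

In modern Python the same merging also happens automatically if the
stdlib_ext directories simply omit __init__.py (namespace packages),
the other mechanism mentioned above.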

If you can do that *and* show that there was a clear benefit to the
community then you might have a chance to get more developers to
participate in this development scheme. But in terms of a general
python-dev policy, it simply won't happen as it is just too much extra
work to force upon everyone (Nick seems to be the only one I can think
of still throwing ideas out there, but even with that I don't know how
much work he wants to put in).


From ncoghlan at gmail.com  Thu Jun 10 00:31:23 2010
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 10 Jun 2010 08:31:23 +1000
Subject: [Python-ideas] Moving development out of the standard library
In-Reply-To: <AANLkTikevcgZcAEKADMM4KbS0utmV7XUA1VJcPt0pgJz@mail.gmail.com>
References: <AANLkTimv2x6W2l8bcjjafIM1hBdxIvfPwWxAAQ7pnSM_@mail.gmail.com>
	<AANLkTikm-KJNt37PuJFNJiNxnIGw0XlsofCnB2ydZGBn@mail.gmail.com>
	<AANLkTinVyFfkTPFSkeiNMHWD4OM-jQKtXGF6b4J4G8Ri@mail.gmail.com>
	<AANLkTilpOhcCbDaZ4p4OsZNmrFDyNhyZlbRcbfD5w20a@mail.gmail.com>
	<4C0D62B0.4020000@trueblade.com> <hum5u9$fl9$1@dough.gmane.org>
	<4C0EAE83.30300@gmail.com>	<AANLkTinTguZqJJbcL11z69PCseIcEIslnjniCsLuqcUg@mail.gmail.com>
	<AANLkTinjA4zc33_Pccv2xvy1su-L9Nqhsi6YQzsbK0Im@mail.gmail.com>
	<878w6pgi8m.fsf@uwakimon.sk.tsukuba.ac.jp>	<AANLkTikOPJ-SqWyS1cNjmBXT12lDb0gDOarMnoW4IyjJ@mail.gmail.com>
	<87zkz4fw9p.fsf@uwakimon.sk.tsukuba.ac.jp>	<AANLkTikg90bU9vF1GBuv2JR-xm2InnN_feuC85RPtvTv@mail.gmail.com>
	<4C0F761D.80600@gmail.com>	<AANLkTikcvc-1sVySF8uuNRWCJJEESnLAPiMcFUSVTFpt@mail.gmail.com>
	<20100609133843.14c6c19f@pitrou.net>	<AANLkTimlGPTpD-13mvjWNY0mzMMvfpOLxu0JFkaJjjoH@mail.gmail.com>
	<AANLkTikevcgZcAEKADMM4KbS0utmV7XUA1VJcPt0pgJz@mail.gmail.com>
Message-ID: <4C10163B.9070406@gmail.com>

On 10/06/10 05:52, Brett Cannon wrote:
> But in terms of a general
> python-dev policy, it simply won't happen as it is just too much extra
> work to force upon everyone (Nick seems to be the only one I can think
> of still throwing ideas out there, but even with that I don't know how
> much work he wants to put in).

Those were pure speculation about ways this could be done such that it 
produces a net benefit for the Python ecosystem as a whole.

I have close to zero interest in actually doing the work involved 
myself, since I'm one of those that just uses the standard library for 
stuff and doesn't worry if it isn't the latest and greatest.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------


From junkmute at hotmail.com  Fri Jun 11 23:06:42 2010
From: junkmute at hotmail.com (Fake Name)
Date: Fri, 11 Jun 2010 17:06:42 -0400
Subject: [Python-ideas] Globalize lonely augmented assignment
Message-ID: <SNT109-W59C6E27068C7748BEB3045DCD90@phx.gbl>


Mark Dickinson suggested discussion take place here, rather than at bugs.python

http://bugs.python.org/issue8977

To note, his counter example would currently raise an UnboundLocalError
 		 	   		  

From merwok at netwok.org  Fri Jun 11 23:37:45 2010
From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=)
Date: Fri, 11 Jun 2010 23:37:45 +0200
Subject: [Python-ideas] Globalize lonely augmented assignment
In-Reply-To: <SNT109-W59C6E27068C7748BEB3045DCD90@phx.gbl>
References: <SNT109-W59C6E27068C7748BEB3045DCD90@phx.gbl>
Message-ID: <4C12ACA9.6010205@netwok.org>

Hello list

I'm really uncomfortable with your proposal too. FTR, let me copy it:
You want this code to work:


  A = [1, 2, 3]

  def f(x):
       A += [x]

  f(4) # appends 4 to A


Today it fails, for consistency with this equivalent code:


  def f(x):
      B = A + [x]
      A = B


I can't see one way working but not the other.
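
For concreteness, here is a minimal sketch of the status quo being defended (variable names as in the copied proposal): the augmented assignment compiles A as a local of f, so calling f raises UnboundLocalError today, just like the spelled-out version would.

```python
A = [1, 2, 3]

def f(x):
    A += [x]   # the compiler sees a binding here, so A is local and still unbound

try:
    f(4)
except UnboundLocalError as exc:
    print(exc)
```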


> To note, his counter example would currently raise an
> UnboundLocalError

It's not an alternate example that would work today, but a consequence
of your proposal that we definitely don't want (mutating immutable objects).


To add a third reason to reject your patch:

  f = A.append  # :)


Regards



From ncoghlan at gmail.com  Sat Jun 12 02:39:20 2010
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 12 Jun 2010 10:39:20 +1000
Subject: [Python-ideas] Globalize lonely augmented assignment
In-Reply-To: <SNT109-W59C6E27068C7748BEB3045DCD90@phx.gbl>
References: <SNT109-W59C6E27068C7748BEB3045DCD90@phx.gbl>
Message-ID: <4C12D738.4070906@gmail.com>

On 12/06/10 07:06, Fake Name wrote:
> Mark Dickinson suggested discussion take place here, rather than at
> bugs.python
>
> http://bugs.python.org/issue8977
>
> To note, his counter example would currently raise an UnboundLocalError

For those not checking the issue discussion, I'll note that Guido's 
opinion is that this is a borderline case. Either behaviour 
(mutate/rebind the global or raise UnboundLocalError) is going to be 
confusing in some cases and intuitive in others.

To quote Mark:
""" I guess there's a mismatch either way around:  currently,
"A += [4]" and "A.append(4)" behave differently for (e.g.,) a list A. 
With the proposed change, "n += 3" and "n = n + 3" behave differently 
for an integer n."""

I agree with both of those points.

However, I believe this is a case where the cognitive cost of changing 
the status quo isn't worthwhile. New Python users can be taught very 
quickly that assigning to a variable from a different scope requires a 
global or nonlocal declaration.
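
That teaching point, applied to the example from earlier in the thread, is one line:

```python
A = [1, 2, 3]

def f(x):
    global A   # the explicit declaration that current rules require
    A += [x]

f(4)
print(A)   # [1, 2, 3, 4]
```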

While existing users could be taught easily enough that that was no 
longer necessary for augmented assignment, how much real world code 
would actually benefit? (now, Paul Graham's accumulator hobby horse 
doesn't count as real world code)

Call it a -0. I'm not implacably opposed, I just don't see it as a good 
use of developer (and documentation author!) time.

Cheers,
Nick.

P.S. Any such change would have to wait until after the moratorium anyway.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------


From junkmute at hotmail.com  Sat Jun 12 03:18:48 2010
From: junkmute at hotmail.com (Demur Rumed)
Date: Fri, 11 Jun 2010 21:18:48 -0400
Subject: [Python-ideas] Globalize lonely augmented assignment
In-Reply-To: <4C12D738.4070906@gmail.com>
References: <SNT109-W59C6E27068C7748BEB3045DCD90@phx.gbl>,
	<4C12D738.4070906@gmail.com>
Message-ID: <SNT109-W538942AFD15280FEF79DA9DCDA0@phx.gbl>


> However, I believe this is a case where the cognitive cost of changing 
> the status quo isn't worthwhile. New Python users can be taught very 
> quickly that assigning to a variable from a different scope requires a 
> global or nonlocal declaration.
 >
> While existing users could be taught easily enough that that was no 
> longer necessary for augmented assignment, how much real world code 
> would actually benefit? (now, Paul Graham's accumulator hobby horse 
> doesn't count as real world code)

I believe it would be simpler to learn that variables are _only_ local if bound with the assignment operator. I view the augmented assignment operators as different beasts. This patch doesn't quite meet its goals in that respect. I'd like to further the locality of a variable to "A variable is local if, and only if, it is first referenced as the left hand side of an assignment on all code paths." This patch fails to set that rule

For example,

def f(x):
    a.append(x)
    if len(a) > 5: a = [5]

If a is bound as a local, this throws an UnboundLocalError. Why then does it not fall back to the global namespace, where we cannot be so certain an exception would occur?

It comes down to the view of UnboundLocalError as a feature or a bug
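
Run as-is (with a hypothetical module-level a added to make the intent visible), the example above fails on its very first statement, because the binding further down makes a local for the whole function:

```python
a = []   # hypothetical global the function is presumably meant to use

def f(x):
    a.append(x)          # raises: 'a' is local because of the binding below
    if len(a) > 5:
        a = [5]

try:
    f(1)
except UnboundLocalError:
    print("'a' is local on every path, even before the binding line runs")
```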
 		 	   		  

From greg.ewing at canterbury.ac.nz  Sat Jun 12 03:59:02 2010
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Sat, 12 Jun 2010 13:59:02 +1200
Subject: [Python-ideas] Globalize lonely augmented assignment
In-Reply-To: <SNT109-W538942AFD15280FEF79DA9DCDA0@phx.gbl>
References: <SNT109-W59C6E27068C7748BEB3045DCD90@phx.gbl>
	<4C12D738.4070906@gmail.com>
	<SNT109-W538942AFD15280FEF79DA9DCDA0@phx.gbl>
Message-ID: <4C12E9E6.5020606@canterbury.ac.nz>

Demur Rumed wrote:

> I believe it would be simpler to learn that variables are _only_ local 
> if bound with the assignment operator.

I'd be happy with that, but

> I'd like to further the locality of a variable to "A 
> variable is local if, and only if, it is first referenced as the left 
> hand side of an assignment on all code paths."

is a *much* more complicated rule both to learn and comply
with. What happens if this is true for some code paths but
not others? That would probably indicate a mistake, and
having the scope default to global in that case would be
quite dangerous.

For the matter at hand, a compromise might be to raise a
compile-time error if a variable is referenced by an
augmented assignment without either a global declaration
or a plain assignment to the same variable somewhere in
the function.

That would help newbies by explaining more clearly what
the problem is, while not affecting any existing correct
code.
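
Greg's proposed check could be prototyped outside the compiler. Here is a rough sketch using the ast module (the function name is mine, and it deliberately ignores parameters, for-loop targets, comprehensions and nested-scope subtleties, so it is an illustration rather than a complete analysis):

```python
import ast

def lonely_augassigns(source):
    """Report (function, name) pairs where a name is only ever bound
    by augmented assignment, with no plain assignment or declaration."""
    findings = []
    for func in ast.walk(ast.parse(source)):
        if not isinstance(func, ast.FunctionDef):
            continue
        plain, aug, declared = set(), set(), set()
        for node in ast.walk(func):
            if isinstance(node, ast.Assign):
                # only bare-name targets; tuple targets etc. are ignored here
                plain.update(t.id for t in node.targets
                             if isinstance(t, ast.Name))
            elif isinstance(node, ast.AugAssign):
                if isinstance(node.target, ast.Name):
                    aug.add(node.target.id)
            elif isinstance(node, (ast.Global, ast.Nonlocal)):
                declared.update(node.names)
        findings.extend((func.name, name)
                        for name in sorted(aug - plain - declared))
    return findings

print(lonely_augassigns("def f():\n    x += 1\n"))   # [('f', 'x')]
```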

-- 
Greg


From tjreedy at udel.edu  Sat Jun 12 09:10:11 2010
From: tjreedy at udel.edu (Terry Reedy)
Date: Sat, 12 Jun 2010 03:10:11 -0400
Subject: [Python-ideas] Globalize lonely augmented assignment
In-Reply-To: <SNT109-W538942AFD15280FEF79DA9DCDA0@phx.gbl>
References: <SNT109-W59C6E27068C7748BEB3045DCD90@phx.gbl>,
	<4C12D738.4070906@gmail.com>
	<SNT109-W538942AFD15280FEF79DA9DCDA0@phx.gbl>
Message-ID: <huvbsj$vng$1@dough.gmane.org>

On 6/11/2010 9:18 PM, Demur Rumed wrote:

> I believe it would be simpler to learn that variables are _only_ local
> if bound with the assignment operator.

Assignment is NOT NOT NOT an operator. It is a type of statement.

> I view the augmented assignment operators as different beasts.

Your view is one that leads to buggy code. It is wrong in that respect.
An augmented assignment STATEMENT is both a STATEMENT, not an operator, 
and an ASSIGNMENT statement. Misunderstanding this leads to buggy code 
and posts on python-list asking "why doesn't my code work right?"

-1+ on the proposal as it will lead to confusion and bugs.

Terry Jan Reedy



From ncoghlan at gmail.com  Sat Jun 12 10:12:52 2010
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 12 Jun 2010 18:12:52 +1000
Subject: [Python-ideas] Globalize lonely augmented assignment
In-Reply-To: <SNT109-W538942AFD15280FEF79DA9DCDA0@phx.gbl>
References: <SNT109-W59C6E27068C7748BEB3045DCD90@phx.gbl>,
	<4C12D738.4070906@gmail.com>
	<SNT109-W538942AFD15280FEF79DA9DCDA0@phx.gbl>
Message-ID: <4C134184.8060104@gmail.com>

On 12/06/10 11:18, Demur Rumed wrote:
> I believe it would be simpler to learn that variables are _only_ local
> if bound with the assignment operator. I view the augmented assignment
> operators as different beasts. This patch doesn't quite meet its goals
> in that respect. I'd like to further the locality of a variable to "A
> variable is local if, and only if, it is first referenced as the left
> hand side of an assignment on all code paths." This patch fails to set
> that rule

The only thing even *remotely* on the table here is to take augmented 
assignment out of the list of statements that will create a new local 
variable.

For 3.x, that list is currently:
   - assignment (i.e. '=')
   - augmented assignment (i.e. '+=', '*=', etc)
   - function/generator definitions (i.e. def)*
   - class definitions
   - for loops
   - try/except statements
   - import statements
   - with statements

*Unlike other statements in this list, def statements can affect two 
different scopes. The defined name becomes a local in the scope 
containing the statement, while the names of any declared parameters 
become local inside the statement's own scope.

The compiler identifies local variables via static analysis of the 
function as a whole to see if they are used as name binding targets in 
any of the above statements *anywhere* in the function. We are *not* 
going to change that, not just because doing anything else would be far 
too error-prone, but also because any other interpretation would make 
compilation far too difficult.

For example, consider the following example:

   def f(x):
     if randrange(2):
       a = [5]
     return a[x] # Emit code for global or local lookup?

The compiler has to choose to emit a global or local lookup opcode at 
compile time - it doesn't have the luxury of knowing whether or not the 
name binding statement will actually be executed at runtime, so it 
ignores any conditional execution when deciding whether or not a name is 
bound locally. UnboundLocalError then covers all cases where you attempt 
to use a local variable name before you have bound it to something.
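
A deterministic variant of that example (with the random call replaced by a hypothetical flag argument) makes the compile-time decision visible:

```python
a = ["global value"]

def f(x, bind):
    if bind:
        a = [5]
    return a[x]      # always compiled as a local lookup

print(f(0, True))    # 5
try:
    f(0, False)      # the binding branch didn't run, so 'a' is unbound
except UnboundLocalError:
    print("'a' is local even on the path that skips the binding")
```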

Now, as to the reason we can even consider taking augmented assignment 
out of the list above: of the current name binding statements, it is the 
*only* one which requires that a referenced name already be bound using 
one of the *other* statements in the list.

If augmented assignment is currently used in a function *without* 
raising UnboundLocalError at runtime, then that can only be because the 
target has been bound by other means, either in the current scope, or 
else in a different scope and then explicitly declared as coming from 
another scope via a global or nonlocal statement.

So, without breaking existing code (that wasn't already broken), we 
could change the default scope for augmented assignment from "always use 
the local scope" to be:
- if a name is declared local by other means, treat it as local
- if the name exists in a surrounding scope, treat it as nonlocal
- otherwise treat it as global

That would almost certainly be more useful than the current behaviour. 
The question is whether it is *sufficiently* useful to justify the 
effort in updating the documentation and implementing this not just for 
CPython, but for other VMs such as Jython, IronPython and PyPy.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------


From dangyogi at gmail.com  Sat Jun 12 17:43:25 2010
From: dangyogi at gmail.com (Bruce Frederiksen)
Date: Sat, 12 Jun 2010 11:43:25 -0400
Subject: [Python-ideas] Globalize lonely augmented assignment
In-Reply-To: <huvbsj$vng$1@dough.gmane.org>
References: <SNT109-W59C6E27068C7748BEB3045DCD90@phx.gbl>
	<4C12D738.4070906@gmail.com>
	<SNT109-W538942AFD15280FEF79DA9DCDA0@phx.gbl>
	<huvbsj$vng$1@dough.gmane.org>
Message-ID: <AANLkTimDY0P0y5fBJ8TTDfvQdADDEnAVTpxSg_fNosPM@mail.gmail.com>

On Sat, Jun 12, 2010 at 3:10 AM, Terry Reedy <tjreedy at udel.edu> wrote:
>
> On 6/11/2010 9:18 PM, Demur Rumed wrote:
>
>> I view the augmented assignment operators as different beasts.
>
> Your view is one that leads to buggy code. It is wrong in that respect.
> An augmented assignment STATEMENT is both a STATEMENT, not an operator, and an ASSIGNMENT statement. Misunderstanding this leads to buggy code and posts on python-list asking "why doesn't my code work right?"

I am curious about these buggy code examples.  Do you have any?

The standard assignment statement _binds_ the local variable.  But the
augmented assignment only _rebinds_ it.  The augmented assignment does
not give the variable a value if it doesn't already have one.

I think that we all agree that if the function has an assignment to
the variable some place else, the variable is a local variable.

So we are considering the case where no assignment to the variable
exists within the function, but there is an augmented assignment.  But
in this case, if we say that the variable is a local variable, how did
this local variable get a value that the augmented assignment can then
use?

The only way that I can think of is:

def foo():
    def bar():
        nonlocal A
        A = []
    bar()
    A += [2]

What am I missing?
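
For the record, that example does run under current rules: the augmented assignment makes A local to foo, which is exactly what lets bar's nonlocal declaration bind it. A checked version:

```python
def foo():
    def bar():
        nonlocal A   # legal only because 'A += [2]' below makes A local to foo
        A = []
    bar()
    A += [2]
    return A

print(foo())   # [2]
```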

-Bruce


From g.brandl at gmx.net  Sat Jun 12 20:28:03 2010
From: g.brandl at gmx.net (Georg Brandl)
Date: Sat, 12 Jun 2010 20:28:03 +0200
Subject: [Python-ideas] Globalize lonely augmented assignment
In-Reply-To: <AANLkTimDY0P0y5fBJ8TTDfvQdADDEnAVTpxSg_fNosPM@mail.gmail.com>
References: <SNT109-W59C6E27068C7748BEB3045DCD90@phx.gbl>	<4C12D738.4070906@gmail.com>	<SNT109-W538942AFD15280FEF79DA9DCDA0@phx.gbl>	<huvbsj$vng$1@dough.gmane.org>
	<AANLkTimDY0P0y5fBJ8TTDfvQdADDEnAVTpxSg_fNosPM@mail.gmail.com>
Message-ID: <hv0jm3$hrq$1@dough.gmane.org>

Am 12.06.2010 17:43, schrieb Bruce Frederiksen:

> So we are considering the case where no assignment to the variable
> exists within the function, but there is an augmented assignment.  But
> in this case, if we say that the variable is a local variable, how did
> this local variable get a value that the augmented assignment can then
> use?
> 
> The only way that I can think of is:
> 
> def foo():
>     def bar():
>         nonlocal A
>         A = []
>     bar()
>     A += [2]

(I assume you intended a global A somewhere outside of foo().)
With the proposed semantics, this would actually be a compilation error,
since there is no nonlocal binding of A that the "nonlocal" statement
could bring into scope.

> What am I missing?

Currently, augmented assignment has a very straightforward translation to
plain assignment::

   a += b   is equivalent to
   a = a.__iadd__(b)

(Not just ``a.__iadd__(b)``, as some people think at first.  That's a
problem; see below.)
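
A quick illustration of why the trailing assignment matters: for a list, __iadd__ mutates in place and returns self, so the rebinding is invisible; an int has no __iadd__ at all, so += falls back to building a new object:

```python
a = [1, 2]
alias = a
a += [3]                  # list.__iadd__ mutates in place, then rebinds 'a'
print(a is alias, alias)  # True [1, 2, 3]

n = 5
m = n
n += 3                    # no int.__iadd__, so effectively n = n + 3
print(n, m)               # 8 5
```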

With the proposal, it would be much more complicated and dependent on
the context: "... it's the same as <code>, but if the name would only
be made a local by augmented assignment statements, it's automatically
made nonlocal if there's a matching non-local binding, or global
otherwise."  Pretty scary.  And while I think about it, it's pretty
implicit as well, since it basically makes a global or nonlocal binding
(this would be different if the translation didn't include the actual
assignment).

Georg

-- 
Thus spake the Lord: Thou shalt indent with four spaces. No more, no less.
Four shall be the number of spaces thou shalt indent, and the number of thy
indenting shall be four. Eight shalt thou not indent, nor either indent thou
two, excepting that thou then proceed to four. Tabs are right out.



From dangyogi at gmail.com  Sat Jun 12 21:40:16 2010
From: dangyogi at gmail.com (Bruce Frederiksen)
Date: Sat, 12 Jun 2010 15:40:16 -0400
Subject: [Python-ideas] Globalize lonely augmented assignment
In-Reply-To: <hv0jm3$hrq$1@dough.gmane.org>
References: <SNT109-W59C6E27068C7748BEB3045DCD90@phx.gbl>
	<4C12D738.4070906@gmail.com>
	<SNT109-W538942AFD15280FEF79DA9DCDA0@phx.gbl>
	<huvbsj$vng$1@dough.gmane.org>
	<AANLkTimDY0P0y5fBJ8TTDfvQdADDEnAVTpxSg_fNosPM@mail.gmail.com>
	<hv0jm3$hrq$1@dough.gmane.org>
Message-ID: <AANLkTiljHHBwNejNSCJIeS_bbPcIA6f75brvnt87XyLu@mail.gmail.com>

On Sat, Jun 12, 2010 at 2:28 PM, Georg Brandl <g.brandl at gmx.net> wrote:
> Am 12.06.2010 17:43, schrieb Bruce Frederiksen:
>
>> So we are considering the case where no assignment to the variable
>> exists within the function, but there is an augmented assignment.  But
>> in this case, if we say that the variable is a local variable, how did
>> this local variable get a value that the augmented assignment can then
>> use?
>>
>> The only way that I can think of is:
>>
>> def foo():
>>     def bar():
>>         nonlocal A
>>         A = []
>>     bar()
>>     A += [2]
>
> (I assume you intended a global A somewhere outside of foo().)
> With the proposed semantics, this would actually be a compilation error,
> since there is no nonlocal binding of A that the "nonlocal" statement
> could bring into scope.

No, I didn't intend a global A, and yes, this would be a compilation
error under the proposed semantics.

What I meant was that the current semantics are broken.  I think that
it could be argued that it doesn't make sense to have augmented
assignment cause a variable to be made local; because, outside of my
example above, execution of the augmented assignment would _always_
produce an UnboundLocalError.  And that's because the augmented
assignment also refers to the variable _before_ setting it.

So, to be a legal program (ie, one that doesn't raise
UnboundLocalError), there _must_ be some other assignment to the
variable in the function.  And if so, this other assignment would
cause the variable to be made local, and the augmented assignment is
immaterial to this decision.

So my challenge (to see if I'm overlooking something) is to show me a
current Python program that only uses augmented assignment to cause a
variable to be made local, but does not raise UnboundLocalError when
the augmented assignment is run.  If these examples don't exist, it
sounds like this is a bug in the current language design.

    def foo():
        # this causes 'a' to be made local; both current and proposed.
        a = 5

        # so 'a' here is local; both current and proposed.
        a += 7

    def foo():
        # without an assignment to 'a', this is currently always an error!
        # it can only make sense if 'a' is global!
        a += 7

If you can't do that, then this is a bug!

>
>> What am I missing?
>
> Currently, augmented assignment has a very straightforward translation to
> plain assignment::
>
>   a += b   is equivalent to
>   a = a.__iadd__(b)
>
> (Not just ``a.__iadd__(b)``, as some people think at first.  That's a
> problem, see below.)

But this can't be treated as a simple macro expansion inside the
compiler, as that would cause the lhs to be evaluated twice.  For
example in:

    a[fn_with_side_effects()] += b
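
This single evaluation is easy to observe with a hypothetical side-effecting index function:

```python
calls = []

def idx():
    calls.append(1)   # record each evaluation of the index expression
    return 0

a = [[10]]
a[idx()] += [20]      # the index expression is evaluated only once
print(a, len(calls))  # [[10, 20]] 1
```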

>
> With the proposal, it would be much more complicated and dependent on
> the context: "... it's the same as <code>, but if the name would only
> be made a local by augmented assignment statements, it's automatically
> made nonlocal if there's a matching non-local binding, or global
> otherwise."  Pretty scary.

I agree completely that anything more complicated than striking the
augmented assignment from the list of statements that cause a variable
to be made local is scary.  It should not depend on what global
bindings are present at this or that time, or whether some other local
assignment has or has not been executed prior to the augmented
assignment.  The augmented assignment statement should simply not be a
factor in the decision to make a variable local, and having it be a
factor is a bug in the current language design as it can never lead to
legal programs (at least before the addition of the "nonlocal"
statement), only misbehaving ones.

-Bruce


From g.brandl at gmx.net  Sat Jun 12 23:08:50 2010
From: g.brandl at gmx.net (Georg Brandl)
Date: Sat, 12 Jun 2010 23:08:50 +0200
Subject: [Python-ideas] Globalize lonely augmented assignment
In-Reply-To: <AANLkTiljHHBwNejNSCJIeS_bbPcIA6f75brvnt87XyLu@mail.gmail.com>
References: <SNT109-W59C6E27068C7748BEB3045DCD90@phx.gbl>	<4C12D738.4070906@gmail.com>	<SNT109-W538942AFD15280FEF79DA9DCDA0@phx.gbl>	<huvbsj$vng$1@dough.gmane.org>	<AANLkTimDY0P0y5fBJ8TTDfvQdADDEnAVTpxSg_fNosPM@mail.gmail.com>	<hv0jm3$hrq$1@dough.gmane.org>
	<AANLkTiljHHBwNejNSCJIeS_bbPcIA6f75brvnt87XyLu@mail.gmail.com>
Message-ID: <hv0t3j$cnl$1@dough.gmane.org>

Am 12.06.2010 21:40, schrieb Bruce Frederiksen:
> On Sat, Jun 12, 2010 at 2:28 PM, Georg Brandl <g.brandl at gmx.net> wrote:
>> Am 12.06.2010 17:43, schrieb Bruce Frederiksen:
>>
>>> So we are considering the case where no assignment to the variable
>>> exists within the function, but there is an augmented assignment.  But
>>> in this case, if we say that the variable is a local variable, how did
>>> this local variable get a value that the augmented assignment can then
>>> use?
>>>
>>> The only way that I can think of is:
>>>
>>> def foo():
>>>     def bar():
>>>         nonlocal A
>>>         A = []
>>>     bar()
>>>     A += [2]
>>
>> (I assume you intended a global A somewhere outside of foo().)
>> With the proposed semantics, this would actually be a compilation error,
>> since there is no nonlocal binding of A that the "nonlocal" statement
>> could bring into scope.
> 
> No, I didn't intend a global A, and yes, this would be a compilation
> error under the proposed semantics.
> 
> What I meant was that the current semantics are broken.  I think that
> it could be argued that it doesn't make sense to have augmented
> assignment cause a variable to be made local; because, outside of my
> example above, execution of the augmented assignment would _always_
> produce an UnboundLocalError.  And that's because the augmented
> assignment also refers to the variable _before_ setting it.
>
> So, to be a legal program (ie, one that doesn't raise
> UnboundLocalError), there _must_ be some other assignment to the
> variable in the function.  And if so, this other assignment would
> cause the variable to be made local, and the augmented assignment is
> immaterial to this decision.

Yes, but why does that make the current semantics broken?  Is it broken
if you do this::

   def foo():
       print(a)
       a = 1

I would rather say it's a programming error.  (It could be argued that
the compiler should warn about it; I would be in favor of that, if it
weren't impossible to implement correctly for all cases.)

> So my challenge (to see if I'm overlooking something) is to show me a
> current Python program that only uses augmented assignment to cause a
> variable to be made local, but does not raise UnboundLocalError when
> the augmented assignment is run.  If these examples don't exist, it
> sounds like this is a bug in the current language design.
> 
>     def foo():
>         # this causes 'a' to be made local; both current and proposed.
>         a = 5
> 
>         # so 'a' here is local; both current and proposed.
>         a += 7
> 
>     def foo():
>         # without an assignment to 'a', this is currently always an error!
>         # it can only make sense if 'a' is global!
>         a += 7
> 
> If you can't do that, then this is a bug!

Yes, it is a bug -- a bug in your code.  I don't understand your reasoning
here.  Just because you can't use a construct under some circumstances, its
semantics are buggy?  Does division have a bug because you can't divide by
zero?

>>> What am I missing?
>>
>> Currently, augmented assignment has a very straightforward translation to
>> plain assignment::
>>
>>   a += b   is equivalent to
>>   a = a.__iadd__(b)
>>
>> (Not just ``a.__iadd__(b)``, as some people think at first.  That's a
>> problem, see below.)
> 
> But this can't be treated as a simple macro expansion inside the
> compiler, as that would cause the lhs to be evaluated twice.  For
> example in:
> 
>     a[fn_with_side_effects()] += b

That's true, but it's also unimportant and a special case.

>> With the proposal, it would be much more complicated and dependent on
>> the context: "... it's the same as <code>, but if the name would only
>> be made a local by augmented assignment statements, it's automatically
>> made nonlocal if there's a matching non-local binding, or global
>> otherwise."  Pretty scary.
> 
> I agree completely that anything more complicated than striking the
> augmented assignment from the list of statements that cause a variable
> to be made local is scary.  It should not depend on what global
> bindings are present at this or that time, or whether some other local
> assignment has or has not been executed prior to the augmented
> assignment.

But by "striking" the augassign from that list, you *are* making it that
complicated.  As I explained, augmented assignment *does* contain an
assignment, so some namespace must be determined where to assign.  That
gives you the complication, because assignment is always local in Python,
except if you explicitly put a global or nonlocal statement.

> The augmented assignment statement should simply not be a
> factor in the decision to make a variable local, and having it be a
> factor is a bug in the current language design as it can never lead to
> legal programs (at least before the addition of the "nonlocal"
> statement), only misbehaving ones.

Again, I can see no bug in language design just because you can't use every
construct in every context.  Nobody forces you to write functions with only
an augassign in them.

Georg

-- 
Thus spake the Lord: Thou shalt indent with four spaces. No more, no less.
Four shall be the number of spaces thou shalt indent, and the number of thy
indenting shall be four. Eight shalt thou not indent, nor either indent thou
two, excepting that thou then proceed to four. Tabs are right out.



From greg.ewing at canterbury.ac.nz  Sun Jun 13 02:07:25 2010
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Sun, 13 Jun 2010 12:07:25 +1200
Subject: [Python-ideas] Globalize lonely augmented assignment
In-Reply-To: <huvbsj$vng$1@dough.gmane.org>
References: <SNT109-W59C6E27068C7748BEB3045DCD90@phx.gbl>
	<4C12D738.4070906@gmail.com>
	<SNT109-W538942AFD15280FEF79DA9DCDA0@phx.gbl>
	<huvbsj$vng$1@dough.gmane.org>
Message-ID: <4C14213D.4040701@canterbury.ac.nz>

Terry Reedy wrote:

> An augmented assignment STATEMEMT is both a STATEMENT, not an operator, 
> and an ASSIGNMENT statement.

This is just a statement of the way things are, not an
argument for keeping them that way.

> -1+ on the proposal as it will lead to confusion and bugs.

I don't see how it would lead to substantially greater
chance of bugs than there currently is for other cases
where you reference an intended local without assigning
to it. If you do that in any other way, it gets looked
up as a global, which almost certainly results in a
NameError. If the name happens to clash with an existing
global, then something more obscure happens, but that's
relatively rare.

It seems to me that getting an error message about a
global name in this case would be less confusing. The
thought process would then be "Global? Eh? But it's
supposed to be local! Oh, I see, I haven't initialised
it, how silly of me."

Whereas currently it's "Local? It's supposed to be
global, why the heck does the stupid interpreter think
it's local?" <Stares at code for 3.7 hours and then
posts a message on c.l.py.>

-- 
Greg


From greg.ewing at canterbury.ac.nz  Sun Jun 13 02:19:38 2010
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Sun, 13 Jun 2010 12:19:38 +1200
Subject: [Python-ideas] Globalize lonely augmented assignment
In-Reply-To: <hv0jm3$hrq$1@dough.gmane.org>
References: <SNT109-W59C6E27068C7748BEB3045DCD90@phx.gbl>
	<4C12D738.4070906@gmail.com>
	<SNT109-W538942AFD15280FEF79DA9DCDA0@phx.gbl>
	<huvbsj$vng$1@dough.gmane.org>
	<AANLkTimDY0P0y5fBJ8TTDfvQdADDEnAVTpxSg_fNosPM@mail.gmail.com>
	<hv0jm3$hrq$1@dough.gmane.org>
Message-ID: <4C14241A.709@canterbury.ac.nz>

Georg Brandl wrote:

>    a += b   is equivalent to
>    a = a.__iadd__(b)

It's not quite the same as that, because if a stands for an
indexed expression, the index is only evaluated once.

> With the proposal, it would be much more complicated and dependent on
> the context: "... it's the same as <code>, but if the name would only
> be made a local by augmented assignment statements, it's automatically
> made nonlocal if there's a matching non-local binding, or global
> otherwise."

It doesn't have to be as complicated as that -- you only need
to add something like "except that if the LHS is a bare name,
it does not imply that the name is local." Any context-dependent
effects then follow from the existing scope rules.

-- 
Greg



From tjreedy at udel.edu  Sun Jun 13 02:25:37 2010
From: tjreedy at udel.edu (Terry Reedy)
Date: Sat, 12 Jun 2010 20:25:37 -0400
Subject: [Python-ideas] Globalize lonely augmented assignment
In-Reply-To: <AANLkTimDY0P0y5fBJ8TTDfvQdADDEnAVTpxSg_fNosPM@mail.gmail.com>
References: <SNT109-W59C6E27068C7748BEB3045DCD90@phx.gbl>	<4C12D738.4070906@gmail.com>	<SNT109-W538942AFD15280FEF79DA9DCDA0@phx.gbl>	<huvbsj$vng$1@dough.gmane.org>
	<AANLkTimDY0P0y5fBJ8TTDfvQdADDEnAVTpxSg_fNosPM@mail.gmail.com>
Message-ID: <hv18i3$8qm$1@dough.gmane.org>

On 6/12/2010 11:43 AM, Bruce Frederiksen wrote:
> On Sat, Jun 12, 2010 at 3:10 AM, Terry Reedy<tjreedy at udel.edu>  wrote:
>>
>> On 6/11/2010 9:18 PM, Demur Rumed wrote:
>>
>>> I view the augmented assignment operators as different beasts.
>>
>> Your view is one that leads to buggy code. It is wrong in that respect.
>> An augmented assignment STATEMENT is both a STATEMENT, not an operator, and an ASSIGNMENT statement. Misunderstanding this leads to buggy code and posts on python-list asking "why doesn't my code work right?"
>
> I am curious about these buggy code examples.  Do you have any?

Yes. Think a bit, or search the python-list archives, where I have been 
answering newbie questions and confusions for a decade.

> The standard assignment statement _binds_ the local variable.  But the
> augmented assignment only _rebinds_ it.  The augmented assignment does
> not give the variable a value if it doesn't already have one.
>
> I think that we all agree that if the function has an assignment to
> the variable some place else, the variable is a local variable.

And this proposal would break that simple rule.
It would also break the simple rule that one can only rebind a global or 
nonlocal name if one explicitly declares it. Namespaces are complex 
enough that any simplicity is a virtue.

> So we are considering the case where no assignment to the variable
> exists within the function, but there is an augmented assignment.

This *is* an assignment, documented in 6.2.1, "Augmented assignment 
statements", a subsection of 6.2, "Assignment statements". There it says 
"the assignment done by augmented assignment statements is handled the 
same way as normal assignments." This proposal would add a fiddly exception.

If there is no previous assignment, it is a bug and should be flagged.
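A minimal sketch of the current behaviour (the names are invented for illustration):

```python
x = 1

def f():
    # Because the compiler sees an assignment to x in this body,
    # x is local throughout f; the lone augmented assignment
    # therefore fails loudly instead of silently rebinding the global.
    x += 1

# Calling f() raises UnboundLocalError, flagging the missing
# 'global x' declaration (or the missing prior assignment).
```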

To expand on what Georg said,

x op= y

is equivalent to something like

<ref> '=' target('x')
<ref> = *<ref> iop y

where <ref> and target('x') are interpreter-level reference constructs,
'=' is internal, interpreter-level binding, and *<ref> is the Python 
object <ref> references.

This proposal would break that equivalence.

It would also make the meaning of x op= y depend on what other 
statements (other than the exceptional global/nonlocal declarations) are 
present in the same block.

Consider:
x = 1
...
def f(y, z):
   x = y+z
   ...
   x /= 2.0

runs fine. Now, during editing/refactoring, the x = y+z line is removed or 
x is changed to something else. The program has a bug and an error should 
be raised. This proposal would mask the bug and have the x /= 2.0 
statement silently change its meaning.

Now, one can tell which global names a function rebinds by looking for a 
global statement, which sensible programmers always put at the top of 
the function body, after any doc string. (I think that someone argued 
that "well, globals can be mutated without declaration", as if 
compounding a somewhat bad thing were a good thing. As a matter of style, 
I think adding a global declaration for non-argument names mutated by a 
function, when possible, would also be a good thing. Or the doc string 
should mention such mutation.)

In summary, this proposal creates several problems, all for the sake of 
a programmer who does not want to type 'global x'.

-10.



From greg.ewing at canterbury.ac.nz  Sun Jun 13 02:49:46 2010
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Sun, 13 Jun 2010 12:49:46 +1200
Subject: [Python-ideas] Globalize lonely augmented assignment
In-Reply-To: <hv0t3j$cnl$1@dough.gmane.org>
References: <SNT109-W59C6E27068C7748BEB3045DCD90@phx.gbl>
	<4C12D738.4070906@gmail.com>
	<SNT109-W538942AFD15280FEF79DA9DCDA0@phx.gbl>
	<huvbsj$vng$1@dough.gmane.org>
	<AANLkTimDY0P0y5fBJ8TTDfvQdADDEnAVTpxSg_fNosPM@mail.gmail.com>
	<hv0jm3$hrq$1@dough.gmane.org>
	<AANLkTiljHHBwNejNSCJIeS_bbPcIA6f75brvnt87XyLu@mail.gmail.com>
	<hv0t3j$cnl$1@dough.gmane.org>
Message-ID: <4C142B2A.4060001@canterbury.ac.nz>

Georg Brandl wrote:

> Yes, but why does that make the current semantics broken?

The current semantics perversely make certain code useless
that would otherwise have an obvious and useful interpretation.
Maybe "suboptimal" would be a better adjective.

 >Am 12.06.2010 21:40, schrieb Bruce Frederiksen:
 >>
>>    def foo():
>>        # without an assignment to 'a', this is currently always an error!
>>        # it can only make sense if 'a' is global!
>>        a += 7
>>
>>If you can't do that, then this is a bug!
> 
> Yes, it is a bug -- a bug in your code.

But if the programmer intended a to be global, the *only*
reason it's a bug is the current somewhat arbitrary
interpretation placed on the augmented assignment.

-- 
Greg


From g.brandl at gmx.net  Sun Jun 13 12:12:52 2010
From: g.brandl at gmx.net (Georg Brandl)
Date: Sun, 13 Jun 2010 12:12:52 +0200
Subject: [Python-ideas] Globalize lonely augmented assignment
In-Reply-To: <4C14241A.709@canterbury.ac.nz>
References: <SNT109-W59C6E27068C7748BEB3045DCD90@phx.gbl>	<4C12D738.4070906@gmail.com>	<SNT109-W538942AFD15280FEF79DA9DCDA0@phx.gbl>	<huvbsj$vng$1@dough.gmane.org>	<AANLkTimDY0P0y5fBJ8TTDfvQdADDEnAVTpxSg_fNosPM@mail.gmail.com>	<hv0jm3$hrq$1@dough.gmane.org>
	<4C14241A.709@canterbury.ac.nz>
Message-ID: <hv2b1m$n45$2@dough.gmane.org>

Am 13.06.2010 02:19, schrieb Greg Ewing:
> Georg Brandl wrote:
> 
>>    a += b   is equivalent to
>>    a = a.__iadd__(b)
> 
> It's not quite the same as that, because if a stands for an
> indexed expression, the index is only evaluated once.
> 
>> With the proposal, it would be much more complicated and dependent on
>> the context: "... it's the same as <code>, but if the name would only
>> be made a local by augmented assignment statements, it's automatically
>> made nonlocal if there's a matching non-local binding, or global
>> otherwise."
> 
> It doesn't have to be as complicated as that -- you only need
> to add something like "except that if the LHS is a bare name,
> it does not imply that the name is local." Any context-dependent
> effects then follow from the existing scope rules.

They don't -- as I said in the other mail, there is no "existing scope
rule" that covers assignments that are implicitly global or nonlocal.

Georg

-- 
Thus spake the Lord: Thou shalt indent with four spaces. No more, no less.
Four shall be the number of spaces thou shalt indent, and the number of thy
indenting shall be four. Eight shalt thou not indent, nor either indent thou
two, excepting that thou then proceed to four. Tabs are right out.



From g.brandl at gmx.net  Sun Jun 13 12:16:08 2010
From: g.brandl at gmx.net (Georg Brandl)
Date: Sun, 13 Jun 2010 12:16:08 +0200
Subject: [Python-ideas] Globalize lonely augmented assignment
In-Reply-To: <4C142B2A.4060001@canterbury.ac.nz>
References: <SNT109-W59C6E27068C7748BEB3045DCD90@phx.gbl>	<4C12D738.4070906@gmail.com>	<SNT109-W538942AFD15280FEF79DA9DCDA0@phx.gbl>	<huvbsj$vng$1@dough.gmane.org>	<AANLkTimDY0P0y5fBJ8TTDfvQdADDEnAVTpxSg_fNosPM@mail.gmail.com>	<hv0jm3$hrq$1@dough.gmane.org>	<AANLkTiljHHBwNejNSCJIeS_bbPcIA6f75brvnt87XyLu@mail.gmail.com>	<hv0t3j$cnl$1@dough.gmane.org>
	<4C142B2A.4060001@canterbury.ac.nz>
Message-ID: <hv2b7r$o6s$1@dough.gmane.org>

Am 13.06.2010 02:49, schrieb Greg Ewing:
> Georg Brandl wrote:
> 
>> Yes, but why does that make the current semantics broken?
> 
> The current semantics perversely make certain code useless
> that would otherwise have an obvious and useful interpretation.

While introducing a special case.

> Maybe "suboptimal" would be a better adjective.

Yes, I wouldn't argue against that, since it allows for subjectiveness :)

>  >Am 12.06.2010 21:40, schrieb Bruce Frederiksen:
>  >>
>>>    def foo():
>>>        # without an assignment to 'a', this is currently always an error!
>>>        # it can only make sense if 'a' is global!
>>>        a += 7
>>>
>>>If you can't do that, then this is a bug!
>> 
>> Yes, it is a bug -- a bug in your code.
> 
> But if the programmer intended a to be global, the *only*
> reason it's a bug is the current somewhat arbitrary
> interpretation placed on the augmented assignment.

Hmm, I would call it consistent rather than arbitrary.

Georg


-- 
Thus spake the Lord: Thou shalt indent with four spaces. No more, no less.
Four shall be the number of spaces thou shalt indent, and the number of thy
indenting shall be four. Eight shalt thou not indent, nor either indent thou
two, excepting that thou then proceed to four. Tabs are right out.



From junkmute at hotmail.com  Sun Jun 13 14:53:44 2010
From: junkmute at hotmail.com (Demur Rumed)
Date: Sun, 13 Jun 2010 08:53:44 -0400
Subject: [Python-ideas] Globalize lonely augmented assignment
In-Reply-To: <hv2b7r$o6s$1@dough.gmane.org>
References: <SNT109-W59C6E27068C7748BEB3045DCD90@phx.gbl>
	<4C12D738.4070906@gmail.com>	<SNT109-W538942AFD15280FEF79DA9DCDA0@phx.gbl>
	<huvbsj$vng$1@dough.gmane.org>
	<AANLkTimDY0P0y5fBJ8TTDfvQdADDEnAVTpxSg_fNosPM@mail.gmail.com>
	<hv0jm3$hrq$1@dough.gmane.org>
	<AANLkTiljHHBwNejNSCJIeS_bbPcIA6f75brvnt87XyLu@mail.gmail.com>
	<hv0t3j$cnl$1@dough.gmane.org>, <4C142B2A.4060001@canterbury.ac.nz>,
	<hv2b7r$o6s$1@dough.gmane.org>
Message-ID: <SNT109-W572002D063EF464B91A8F5DCDB0@phx.gbl>


> > But if the programmer intended a to be global, the *only*
> > reason it's a bug is the current somewhat arbitrary
> > interpretation placed on the augmented assignment.
>
> Hmm, I would call it consistent rather than arbitrary.
>
> Georg

a=[1,2,3]
def f(x):a[x]=x
f(0)

Some like to think of []= as a form of augmented assignment

Currently, []= doesn't align with other augmenteds on this point

That doesn't seem very consistent. Add on that augmented
assignment is the only globalizing store statement which also
dereferences, and consistency doesn't seem to be a strong point
against this proposal
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20100613/1334fde0/attachment.html>

From solipsis at pitrou.net  Sun Jun 13 15:08:07 2010
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sun, 13 Jun 2010 15:08:07 +0200
Subject: [Python-ideas] Globalize lonely augmented assignment
References: <SNT109-W59C6E27068C7748BEB3045DCD90@phx.gbl>
	<4C12D738.4070906@gmail.com>
	<SNT109-W538942AFD15280FEF79DA9DCDA0@phx.gbl>
	<huvbsj$vng$1@dough.gmane.org>
	<AANLkTimDY0P0y5fBJ8TTDfvQdADDEnAVTpxSg_fNosPM@mail.gmail.com>
	<hv0jm3$hrq$1@dough.gmane.org>
	<AANLkTiljHHBwNejNSCJIeS_bbPcIA6f75brvnt87XyLu@mail.gmail.com>
	<hv0t3j$cnl$1@dough.gmane.org> <4C142B2A.4060001@canterbury.ac.nz>
	<hv2b7r$o6s$1@dough.gmane.org>
	<SNT109-W572002D063EF464B91A8F5DCDB0@phx.gbl>
Message-ID: <20100613150807.2361888d@pitrou.net>

On Sun, 13 Jun 2010 08:53:44 -0400
Demur Rumed <junkmute at hotmail.com> wrote:
> 
> Some like to think of []= as a form of augmented assignment

"x[...] = ..." calls x.__setitem__; it has nothing to do with assignment.
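A small sketch of the distinction (the Spy class is invented for demonstration): subscript assignment invokes a method on the object bound to the name; the name itself is never rebound, so no local/global question arises.

```python
class Spy(dict):
    def __setitem__(self, key, value):
        # 'a[x] = x' dispatches here; the name 'a' is only *read*
        print("setting", key, "->", value)
        super().__setitem__(key, value)

a = Spy()

def f(x):
    a[x] = x  # calls a.__setitem__(x, x); 'a' stays a plain global lookup

f(0)
```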

Regards

Antoine.




From bruce at leapyear.org  Sun Jun 13 15:15:33 2010
From: bruce at leapyear.org (Bruce Leban)
Date: Sun, 13 Jun 2010 06:15:33 -0700
Subject: [Python-ideas] Globalize lonely augmented assignment
In-Reply-To: <AANLkTim0susvPsUTtEBk-G4oghbkfFRtf3uxbQYKOlS7@mail.gmail.com>
References: <SNT109-W59C6E27068C7748BEB3045DCD90@phx.gbl>
	<4C12D738.4070906@gmail.com>
	<SNT109-W538942AFD15280FEF79DA9DCDA0@phx.gbl>
	<huvbsj$vng$1@dough.gmane.org>
	<AANLkTimDY0P0y5fBJ8TTDfvQdADDEnAVTpxSg_fNosPM@mail.gmail.com>
	<hv0jm3$hrq$1@dough.gmane.org>
	<AANLkTiljHHBwNejNSCJIeS_bbPcIA6f75brvnt87XyLu@mail.gmail.com>
	<hv0t3j$cnl$1@dough.gmane.org> <4C142B2A.4060001@canterbury.ac.nz>
	<hv2b7r$o6s$1@dough.gmane.org>
	<SNT109-W572002D063EF464B91A8F5DCDB0@phx.gbl>
	<AANLkTim0susvPsUTtEBk-G4oghbkfFRtf3uxbQYKOlS7@mail.gmail.com>
Message-ID: <AANLkTimyx6FVfYwh4qNIA_-FOuzk6TC3qmIijsOTEatc@mail.gmail.com>

Huh? That makes no sense.
   a[x]=x
is not
    a = a [] x
or anything like that. Language decisions shouldn't be made based on wrong
understandings of how the language works.

As to the idea of turning a guaranteed run-time error into a compile-time
error: I'm usually in favor of that, if it doesn't muck up the compiler.

--- Bruce
(via android)

On Jun 13, 2010 5:54 AM, "Demur Rumed" <junkmute at hotmail.com> wrote:

> > But if the programmer intended a to be global, the *only*
> > reason it's a bug is the current s...
a=[1,2,3]
def f(x):a[x]=x
f(0)

Some like to think of []= as a form of augmented assignment
Currently, []= doesn't align with other augmenteds on this point
That doesn't seem very consistent. Add on that augmented
assignment is the only globalizing store statement which also
dereferences, and consistency doesn't seem to be a strong point
against this proposal


_______________________________________________
Python-ideas mailing list
Python-ideas at python.org
http://mail.python.org/mailman/listinfo/python-ideas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20100613/ef7b4405/attachment.html>

From solipsis at pitrou.net  Sun Jun 13 15:36:34 2010
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sun, 13 Jun 2010 15:36:34 +0200
Subject: [Python-ideas] local is safer than global
References: <SNT109-W59C6E27068C7748BEB3045DCD90@phx.gbl>
	<4C12D738.4070906@gmail.com>
	<SNT109-W538942AFD15280FEF79DA9DCDA0@phx.gbl>
Message-ID: <20100613153634.6c01f914@pitrou.net>

On Fri, 11 Jun 2010 21:18:48 -0400
Demur Rumed <junkmute at hotmail.com> wrote:
> 
> For example,
> 
> def f(x):
>     a.append(x)
>     if len(a)>5:a=[5]
> 
> If a is bound as a local, this throws an UnboundLocalError. Why then is it not set to try the global namespace, that place we cannot be so certain of the exception in?
> 
> It comes down to the view of UnboundLocalError as a feature or a bug

Certainly a feature. In case of ambiguity, a variable should be
considered local rather than global. It makes the language much safer.

It's also why I'm -1 on your proposal.
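Demur's own example from earlier in the thread illustrates the safety argument (a minimal sketch of the current behaviour):

```python
a = []

def f(x):
    # 'a' is local to f because of the assignment two lines below,
    # even on a call where that branch is never reached...
    a.append(x)
    if len(a) > 5:
        a = [5]

# ...so f(1) raises UnboundLocalError instead of guessing that the
# global list was meant.
```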

Regards

Antoine.




From cesare.di.mauro at gmail.com  Sun Jun 13 15:39:55 2010
From: cesare.di.mauro at gmail.com (Cesare Di Mauro)
Date: Sun, 13 Jun 2010 15:39:55 +0200
Subject: [Python-ideas] local is safer than global
In-Reply-To: <20100613153634.6c01f914@pitrou.net>
References: <SNT109-W59C6E27068C7748BEB3045DCD90@phx.gbl>
	<4C12D738.4070906@gmail.com>
	<SNT109-W538942AFD15280FEF79DA9DCDA0@phx.gbl>
	<20100613153634.6c01f914@pitrou.net>
Message-ID: <AANLkTilRMK_8Xc0TaVNKXNml4dZniOc8-ri_BAR0ml11@mail.gmail.com>

2010/6/13 Antoine Pitrou <solipsis at pitrou.net>

> On Fri, 11 Jun 2010 21:18:48 -0400
> Demur Rumed <junkmute at hotmail.com> wrote:
> >
> > For example,
> >
> > def f(x):
> >     a.append(x)
> >     if len(a)>5:a=[5]
> >
> > If a is bound as a local, this throws an UnboundLocalError. Why then is
> it not set to try the global namespace, that place we cannot be so certain
> of the exception in?
> >
> > It comes down to the view of UnboundLocalError as a feature or a bug
>
> Certainly a feature. In case of ambiguity, a variable should be
> considered local rather than global. It makes the language much safer.
>
> It's also why I'm -1 on your proposal.
>
> Regards
>
> Antoine.
>
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> http://mail.python.org/mailman/listinfo/python-ideas


Locals are also MUCH faster...
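A rough illustration of why (the benchmark functions are invented for this sketch): local name lookups compile to array-indexed LOAD_FAST operations, while global lookups go through a dictionary.

```python
import timeit

g = 1

def add_global(n=100000):
    total = 0
    for _ in range(n):
        total += g          # each read of g is a dict lookup
    return total

def add_local(n=100000):
    local_g = g             # hoisted once into a local
    total = 0
    for _ in range(n):
        total += local_g    # array-indexed LOAD_FAST
    return total

t_global = timeit.timeit(add_global, number=20)
t_local = timeit.timeit(add_local, number=20)
# t_local is typically noticeably smaller than t_global
```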

Cesare
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20100613/bffdf3db/attachment.html>

From merwok at netwok.org  Sun Jun 13 20:39:28 2010
From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=)
Date: Sun, 13 Jun 2010 20:39:28 +0200
Subject: [Python-ideas] reiter: decorator to make generators reiterable
In-Reply-To: <loom.20100604T011428-696@post.gmane.org>
References: <4C02857B.2030502@gmx.net>
	<loom.20100604T011428-696@post.gmane.org>
Message-ID: <4C1525E0.3050300@netwok.org>

> It looks pretty cool to me. I'll probably include it in my personal project, and 
> it would be cool if it'll be added to itertools. (Along with many other things 
> that should be added to itertools.)
> 
> Ram.

There is a collection of recipes in the docs. I think there has been a
discussion about making them available in the stdlib, but I'm not sure.
If you want to add things to itertools, open feature requests on the bug
tracker or launch a discussion thread here first.

Regards



From debatem1 at gmail.com  Sun Jun 13 23:15:20 2010
From: debatem1 at gmail.com (geremy condra)
Date: Sun, 13 Jun 2010 14:15:20 -0700
Subject: [Python-ideas] reiter: decorator to make generators reiterable
In-Reply-To: <4C1525E0.3050300@netwok.org>
References: <4C02857B.2030502@gmx.net>
	<loom.20100604T011428-696@post.gmane.org>
	<4C1525E0.3050300@netwok.org>
Message-ID: <AANLkTik2bO9jGsg2Ido12Xw-ewwM0sKTUPp9isDrWhk8@mail.gmail.com>

On Sun, Jun 13, 2010 at 11:39 AM, Éric Araujo <merwok at netwok.org> wrote:
>> It looks pretty cool to me. I'll probably include it in my personal project, and
>> it would be cool if it'll be added to itertools. (Along with many other things
>> that should be added to itertools.)
>>
>> Ram.
>
> There is a collection of recipes in the docs. I think there has been a
> discussion about making them available in the stdlib, but I'm not sure.
> If you want to add things to itertools, open feature requests on the bug
> tracker or launch a discussion thread here first.

This would be much appreciated.

Geremy Condra


From greg.ewing at canterbury.ac.nz  Mon Jun 14 02:43:24 2010
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Mon, 14 Jun 2010 12:43:24 +1200
Subject: [Python-ideas] Globalize lonely augmented assignment
In-Reply-To: <hv2b1m$n45$2@dough.gmane.org>
References: <SNT109-W59C6E27068C7748BEB3045DCD90@phx.gbl>
	<4C12D738.4070906@gmail.com>
	<SNT109-W538942AFD15280FEF79DA9DCDA0@phx.gbl>
	<huvbsj$vng$1@dough.gmane.org>
	<AANLkTimDY0P0y5fBJ8TTDfvQdADDEnAVTpxSg_fNosPM@mail.gmail.com>
	<hv0jm3$hrq$1@dough.gmane.org> <4C14241A.709@canterbury.ac.nz>
	<hv2b1m$n45$2@dough.gmane.org>
Message-ID: <4C157B2C.1070006@canterbury.ac.nz>

Georg Brandl wrote:

> They don't -- as I said in the other mail, there is no "existing scope
> rule" that covers assignments that are implicitly global or nonlocal.

I don't see how you come to that conclusion. You just
need to disregard the augmented assignment and follow
the normal rules based on the presence of plain
assignments and global and nonlocal declarations.

-- 
Greg


From greg.ewing at canterbury.ac.nz  Mon Jun 14 02:43:32 2010
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Mon, 14 Jun 2010 12:43:32 +1200
Subject: [Python-ideas] Globalize lonely augmented assignment
In-Reply-To: <hv2b7r$o6s$1@dough.gmane.org>
References: <SNT109-W59C6E27068C7748BEB3045DCD90@phx.gbl>
	<4C12D738.4070906@gmail.com>
	<SNT109-W538942AFD15280FEF79DA9DCDA0@phx.gbl>
	<huvbsj$vng$1@dough.gmane.org>
	<AANLkTimDY0P0y5fBJ8TTDfvQdADDEnAVTpxSg_fNosPM@mail.gmail.com>
	<hv0jm3$hrq$1@dough.gmane.org>
	<AANLkTiljHHBwNejNSCJIeS_bbPcIA6f75brvnt87XyLu@mail.gmail.com>
	<hv0t3j$cnl$1@dough.gmane.org> <4C142B2A.4060001@canterbury.ac.nz>
	<hv2b7r$o6s$1@dough.gmane.org>
Message-ID: <4C157B34.5000802@canterbury.ac.nz>

Georg Brandl wrote:
> Am 13.06.2010 02:49, schrieb Greg Ewing:
> 
>>Maybe "suboptimal" would be a better adjective.
> 
> Yes, I wouldn't argue against that, since it allows for subjectiveness :)

All language design decisions are subjective. (If they
weren't, there would be no decision to make.)

> Hmm, I would call it consistent rather than arbitrary.

But it's a foolish consistency, IMO. It makes the description
in the manual about half a sentence shorter, at the expense of
semantics that are unintuitive and useless.

-- 
Greg



From ncoghlan at gmail.com  Mon Jun 14 04:03:58 2010
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 14 Jun 2010 12:03:58 +1000
Subject: [Python-ideas] Globalize lonely augmented assignment
In-Reply-To: <4C157B2C.1070006@canterbury.ac.nz>
References: <SNT109-W59C6E27068C7748BEB3045DCD90@phx.gbl>	<4C12D738.4070906@gmail.com>	<SNT109-W538942AFD15280FEF79DA9DCDA0@phx.gbl>	<huvbsj$vng$1@dough.gmane.org>	<AANLkTimDY0P0y5fBJ8TTDfvQdADDEnAVTpxSg_fNosPM@mail.gmail.com>	<hv0jm3$hrq$1@dough.gmane.org>
	<4C14241A.709@canterbury.ac.nz>	<hv2b1m$n45$2@dough.gmane.org>
	<4C157B2C.1070006@canterbury.ac.nz>
Message-ID: <4C158E0E.7080600@gmail.com>

On 14/06/10 10:43, Greg Ewing wrote:
> Georg Brandl wrote:
>
>> They don't -- as I said in the other mail, there is no "existing scope
>> rule" that covers assignments that are implicitly global or nonlocal.
>
> I don't see how you come to that conclusion. You just
> need to disregard the augmented assignment and follow
> the normal rules based on the presence of plain
> assignments and global and nonlocal declarations.

What we would actually be doing is going from "augmented assignment 
determines the scope to assign to based on the same rules as normal 
assignment" to "augmented assignment determines the scope to assign to 
based on the same rules as variable referencing" (i.e. wherever we find 
the value when looking it up on the right hand side, we would put it 
back in the same place).
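For comparison, under the current rules the rebinding scope must be declared explicitly (a minimal sketch):

```python
def counter():
    n = 0
    def inc():
        nonlocal n  # required today: without it, 'n += 1' would make
        n += 1      # n local to inc and raise UnboundLocalError
        return n
    return inc

c = counter()
c()
c()  # the closure cell is rebound on each call
```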

The semantics aren't the problem here - they can be made perfectly clear 
and reasonable. The only question is whether they are *sufficiently* 
useful to justify the effort involved in getting from the status quo to 
new (more sensible) semantics.

Keep in mind, that effort is a lot more than just a patch to CPython to 
fix our implementation, unit tests and documentation. There's also:
- doing the same thing for other implementations (e.g. Jython, 
IronPython, PyPy)
- impact on CPython branches/forks (e.g. Unladen Swallow)
- impact on Python-like languages (e.g. Cython)
- updating assorted non-PSF documentation (including books)
- updating training materials

It's for exactly these reasons that the language moratorium has been put 
in place: so we can't even be *tempted* by this kind of change until 
everyone has had a chance to at least catch up to the 3.2 state of the 
world.

So, for this suggestion to go any further, it will need:
- a PEP (one that acknowledges this is a post-moratorium change)
- solid examples of real-world code that would be improved by this (e.g. 
from the standard library, or from major third party Python applications)

Since the most this will save anyone is the occasional global or 
nonlocal statement, I suspect the second point is going to be a 
difficult bar to clear.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------


From alexander.belopolsky at gmail.com  Tue Jun 15 02:09:02 2010
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Mon, 14 Jun 2010 20:09:02 -0400
Subject: [Python-ideas] Rename time module to "posixtime"
Message-ID: <AANLkTikfP28a0ZIk-IeDiFMfnyMxlX9PDKabD5HG8XrT@mail.gmail.com>

One of the common complaints about working with time values in Python
is that some functionality is available in the time module, some in the
datetime module, and some in both.

I propose a series of steps towards improving this situation.

1. Create posixtime.py initially containing just  "from time import *"
2. Add python implementation of time.* functions to posixtime.py.
3. Rename time module to _posixtime and add time.py with a deprecation
warning and "from _posixtime import *".

Note that #2 may require moving some code from timemodule.c to
datetimemodule.c, but at the binary level, code compiled from these
files is already linked together in datetimemodule.  Moving the
necessary code to datetimemodule.c will help eliminate the current
circular dependency between time and datetime.


From mal at egenix.com  Tue Jun 15 10:07:27 2010
From: mal at egenix.com (M.-A. Lemburg)
Date: Tue, 15 Jun 2010 10:07:27 +0200
Subject: [Python-ideas] Rename time module to "posixtime"
In-Reply-To: <AANLkTikfP28a0ZIk-IeDiFMfnyMxlX9PDKabD5HG8XrT@mail.gmail.com>
References: <AANLkTikfP28a0ZIk-IeDiFMfnyMxlX9PDKabD5HG8XrT@mail.gmail.com>
Message-ID: <4C1734BF.6050902@egenix.com>

Alexander Belopolsky wrote:
> One of the common complaints about working with time values in Python
> is that some functionality is available in the time module, some in the
> datetime module, and some in both.
> 
> I propose a series of steps towards improving this situation.
> 
> 1. Create posixtime.py initially containing just  "from time import *"
> 2. Add python implementation of time.* functions to posixtime.py.
> 3. Rename time module to _posixtime and add time.py with a deprecation
> warning and "from _posixtime import *".
> 
> Note that #2 may require to move some code from timemodule.c to
> datetimemodule.c, but at the binary level code compiled from these
> files is already linked together in datetimemodule.  Moving the
> necessary code to datetime.c will help to eliminate current circular
> dependency between time and datetime.

I'm not sure I understand the point in renaming the module.

Note that the time module works based on Unix ticks (seconds
since the Unix Epoch), while the datetime module works based
on its own set of types.

As such, the two are different implementations for managing
date/time. Mixing them won't make things easier to understand.
The time module is very close to the C lib API, while the datetime
module focuses more on date/time storage in a more accessible form.

I agree on one point, though: the shared C APIs for getting the
current time would be better put into a separate C extension
which both can then load without creating circular references,
e.g. _getcurrenttime.c.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 15 2010)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2010-07-19: EuroPython 2010, Birmingham, UK                33 days to go

::: Try our new mxODBC.Connect Python Database Interface for free ! ::::


   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
    D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
           Registered at Amtsgericht Duesseldorf: HRB 46611
               http://www.egenix.com/company/contact/


From solipsis at pitrou.net  Tue Jun 15 12:16:26 2010
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Tue, 15 Jun 2010 12:16:26 +0200
Subject: [Python-ideas] Rename time module to "posixtime"
References: <AANLkTikfP28a0ZIk-IeDiFMfnyMxlX9PDKabD5HG8XrT@mail.gmail.com>
Message-ID: <20100615121626.503d6318@pitrou.net>

On Mon, 14 Jun 2010 20:09:02 -0400
Alexander Belopolsky
<alexander.belopolsky at gmail.com> wrote:
> One of the common complaints about working with time values in Python
> is that some functionality is available in the time module, some in the
> datetime module, and some in both.

Is it a common complaint, really?
The common complaint, IMO, is that *none* of those two modules provides
a complete feature set in itself.

> I propose a series of steps towards improving this situation.
> 
> 1. Create posixtime.py initially containing just  "from time import *"
> 2. Add python implementation of time.* functions to posixtime.py.
> 3. Rename time module to _posixtime and add time.py with a deprecation
> warning and "from _posixtime import *".

I don't understand the purpose. I certainly like time.time() and I
don't see the point of making it go away (will we have to use one of
these "obvious" datetime-based one-liners instead?).

Regards

Antoine.




From merwok at netwok.org  Tue Jun 15 12:29:41 2010
From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=)
Date: Tue, 15 Jun 2010 12:29:41 +0200
Subject: [Python-ideas] Rename time module to "posixtime"
In-Reply-To: <20100615121626.503d6318@pitrou.net>
References: <AANLkTikfP28a0ZIk-IeDiFMfnyMxlX9PDKabD5HG8XrT@mail.gmail.com>
	<20100615121626.503d6318@pitrou.net>
Message-ID: <4C175615.7070309@netwok.org>

Le 15/06/2010 12:16, Antoine Pitrou a écrit :
> On Mon, 14 Jun 2010 20:09:02 -0400 Alexander Belopolsky wrote:
>> One of the common complaints about working with time values in Python
>> is that some functionality is available in the time module, some in the
>> datetime module, and some in both.
> 
> Is it a common complaint, really?
> The common complaint, IMO, is that *none* of those two modules provides
> a complete feature set in itself.

The fact that we need dateutil or pytz to do some calculations is not
optimal, but it's another concern. I agree that the overlap between time,
datetime and calendar is annoying. More specifically, the multitude of
types is bad (integer timestamp, time tuple, datetime object). Some bad
one-liners that use datetime methods with unpacked (*args) time
tuples coming from another datetime method scream "shenanigans" to me.

> I don't understand the purpose. I certainly like time.time() and I
> don't see the point of making it go away (will we have to use one of
> these "obvious" datetime-based one-liners instead?).

time.time will still be available, just under another name that makes it
clear it's a binding to a low-level C concept.

A user vote: +1 on renaming, +1 on improving datetime.

Regards



From alexander.belopolsky at gmail.com  Tue Jun 15 16:47:12 2010
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Tue, 15 Jun 2010 10:47:12 -0400
Subject: [Python-ideas] Rename time module to "posixtime"
In-Reply-To: <4C1734BF.6050902@egenix.com>
References: <AANLkTikfP28a0ZIk-IeDiFMfnyMxlX9PDKabD5HG8XrT@mail.gmail.com>
	<4C1734BF.6050902@egenix.com>
Message-ID: <AANLkTikG0VQnq-zwKSByW2rqc4_w9l1oWIFdS2QNmOLw@mail.gmail.com>

On Tue, Jun 15, 2010 at 4:07 AM, M.-A. Lemburg <mal at egenix.com> wrote:
> Alexander Belopolsky wrote:
>> One of the common complaints about working with time values in Python
>> is that some functionality is available in the time module, some in the
>> datetime module, and some in both.
..
> I'm not sure I understand the point in renaming the module.

I've reread my post and have to admit that I did not explain this
point clearly.  There are currently three different ways to represent
a point in time: a datetime object, a Unix timestamp, and a 9-element time
tuple.  While the datetime module has its share of criticism, its
interfaces are more user-friendly and more "pythonic" than the C-inspired
time module interfaces.

For example,

>>> print(datetime.now())

is self-explanatory, but

>>> time.time()
1276609479.559051

requires a lot of explaining, even to people with a C/POSIX background.
For the latter, the immediate questions are why the output is a
float and what the precision of the result is.  For people without a C
background, the time module interfaces are cryptic and arbitrary.  Why
does time() produce a float while localtime() produces a tuple?  Why
does asctime take a tuple while ctime takes a float?
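The asymmetry is easy to demonstrate (a minimal sketch):

```python
import time

ts = time.time()           # a float: seconds since the epoch
tt = time.localtime(ts)    # a 9-element struct_time

s1 = time.asctime(tt)      # takes the tuple form
s2 = time.ctime(ts)        # takes the float form
# s1 and s2 are the same string, reached through two different types
```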

Conversions between timestamp/time tuple and datetime are quite awkward
as well. We have datetime.timetuple(), but no fromtimetuple() (you
have to write the cryptic datetime(*tt[:6])).  With timestamps, it is the
opposite: we have a full complement of
fromtimestamp/utcfromtimestamp, but no functions to go in the opposite
direction.
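A sketch of the round trips, using the timestamp from the example above (timedelta.total_seconds() assumes a sufficiently recent datetime module):

```python
from datetime import datetime

ts = 1276609479.559051

dt = datetime.utcfromtimestamp(ts)   # timestamp -> datetime: one call
tt = dt.timetuple()                  # datetime  -> time tuple: one call

dt2 = datetime(*tt[:6])              # time tuple -> datetime: the cryptic idiom
# datetime -> timestamp: no dedicated method; manual arithmetic instead
ts2 = (dt - datetime(1970, 1, 1)).total_seconds()
```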

Finally, we have a 3-way name conflict: time is a module, a function and a type.

I believe most applications are better off using the datetime module
exclusively.  The time module should be used for interoperability with
POSIX interfaces, but not for general date/time manipulation.
Renaming the module will make its name match its purpose better.

> Note that the time module works based on Unix ticks (seconds
> since the Unix Epoch), while the datetime module works based
> on its own set of types.
>
I certainly know that.  What some people don't understand, though, is
that translation between Unix ticks (or, more accurately, a POSIX time_t
value) and broken-down UTC time is just an arithmetic operation.  The
formula is convoluted, but it is just a formula, independent of any
system databases.  There is no good reason for a Python application to
keep time values as POSIX timestamps rather than datetime objects.
The correspondence between the two is one-to-one, the ordering is the
same, and arithmetic is cleaner with datetime because it is explicit
about (decimal) precision.
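The round trip can be checked in the standard library itself (a minimal sketch):

```python
import calendar
import time

ts = 1276609479                 # a POSIX time_t value

tt = time.gmtime(ts)            # time_t -> broken-down UTC: pure arithmetic,
                                # no time zone database consulted
back = calendar.timegm(tt)      # the formula inverts exactly: back == ts
```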

> As such, the two are different implementations for managing
> date/time. Mixing them won't make things easier to understand.
> The time module is very close to the C lib API, while the datetime
> module focuses more on date/time storage in a more accessible way.
>

I am not proposing mixing them.  To the contrary, I want to make it
clearer that users should not mix them: use the posixtime module if
you need to interoperate with POSIX interfaces, and datetime for
everything else.

> I agree on one point, though: the shared C APIs for getting the
> current time would be better put into a separate C extension
> which both can then load without creating circular references,
> e.g. _getcurrenttime.c.

What I would like to do is expose the POSIX gettimeofday interface as
both a C API and a Python function returning a (seconds, microseconds)
tuple.  In my view, the C implementation should go into
_posixtimemodule.c, and posixtime.py should have

from datetime import datetime, timedelta

def gettimeofday():
    # whole seconds and remaining microseconds since the epoch
    q, r = divmod(datetime.utcnow() - datetime(1970, 1, 1),
                  timedelta(seconds=1))
    return q, r.microseconds


From cs at zip.com.au  Wed Jun 16 01:01:02 2010
From: cs at zip.com.au (Cameron Simpson)
Date: Wed, 16 Jun 2010 09:01:02 +1000
Subject: [Python-ideas] Rename time module to "posixtime"
In-Reply-To: <AANLkTikG0VQnq-zwKSByW2rqc4_w9l1oWIFdS2QNmOLw@mail.gmail.com>
References: <AANLkTikG0VQnq-zwKSByW2rqc4_w9l1oWIFdS2QNmOLw@mail.gmail.com>
Message-ID: <20100615230102.GA8439@cskk.homeip.net>

On 15Jun2010 10:47, Alexander Belopolsky <alexander.belopolsky at gmail.com> wrote:
| On Tue, Jun 15, 2010 at 4:07 AM, M.-A. Lemburg <mal at egenix.com> wrote:
| > Alexander Belopolsky wrote:
| >> One of the common complains about working with time values in Python,
| >> is that it some functionality is available in time module, some in
| >> datetime module and some in both.
| ..
| > I'm not sure I understand the point in renaming the module.
| 
| I've reread my post and have to admit that I did not explain this
| point clearly.  There are currently three different ways to represent
| a point in time: datetime object, unix timestamp, and a 9-element time
| tuple.  While the datetime module has its share of criticism, its
| interfaces are more user friendly and more "pythonic" than C inspired
| time module interfaces.

Personally, I would be happy to see unix-timestamp and datetime object,
and see the time tuples go away.

The tuples are a direct mirror of the unix "struct tm" structures and
should really only be visible in a "posixtime" module of some kind - the
datetime objects are their direct equivalents anyway to my mind and should be
what are dealt with for human calendar stuff.

However, the unix timestamps should stay (or anything equivalent that
measures real world seconds, but since any epoch will do for that purpose
and we've got the unix one in use I'd stay with it). They represent an
absolute timeframe and let one do direct arithmetic. If I'm not doing
calendar things (or only doing them for presentation) then the unix
timestamp is usually my preferred time item.

| Conversions between timestamp/timetuple and datetime are quite awkward
| as well. We have datetime.timetuple(), but no fromtimetuple() (you
| have to write cryptic datetime(*tt[:6]).  With timestamps, it is the
| opposite.  We have a full compliment of
| fromtimestamp/utcfromtimestamp, but no functions to go in the opposite
| direction.

Yes, awful. Having spent a fair chunk of yesterday trying to obtain an
adapter (or chain of adapters) to join a 3G modem to an antenna with a
different end, I feel your pain. And I've felt it with the time
functions too.

Cheers,
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

A man with one watch knows what time it is; a man with two watches is
never sure. - Lee Segall


From brett at python.org  Wed Jun 16 08:03:06 2010
From: brett at python.org (Brett Cannon)
Date: Tue, 15 Jun 2010 23:03:06 -0700
Subject: [Python-ideas] Rename time module to "posixtime"
In-Reply-To: <20100615230102.GA8439@cskk.homeip.net>
References: <AANLkTikG0VQnq-zwKSByW2rqc4_w9l1oWIFdS2QNmOLw@mail.gmail.com> 
	<20100615230102.GA8439@cskk.homeip.net>
Message-ID: <AANLkTikSDQvIo_t4SlInlpOf9zUjTp65HscKl_WKZjhY@mail.gmail.com>

On Tue, Jun 15, 2010 at 16:01, Cameron Simpson <cs at zip.com.au> wrote:
> On 15Jun2010 10:47, Alexander Belopolsky <alexander.belopolsky at gmail.com> wrote:
> | On Tue, Jun 15, 2010 at 4:07 AM, M.-A. Lemburg <mal at egenix.com> wrote:
> | > Alexander Belopolsky wrote:
> | >> One of the common complains about working with time values in Python,
> | >> is that it some functionality is available in time module, some in
> | >> datetime module and some in both.
> | ..
> | > I'm not sure I understand the point in renaming the module.
> |
> | I've reread my post and have to admit that I did not explain this
> | point clearly. ?There are currently three different ways to represent
> | a point in time: datetime object, unix timestamp, and a 9-element time
> | tuple. ?While the datetime module has its share of criticism, its
> | interfaces are more user friendly and more "pythonic" than C inspired
> | time module interfaces.
>
> Personally, I would be happy to see unix-timestamp and datetime object,
> and see the time tuples go away.
>
> The tuples are a direct mirror of the unix "struct tm" structures and and
> should really only be visible in a "posixtime" module of some kind - the
> datetime objects are their direct equivalents anyway to my mind and should be
> what are dealt with for human calendar stuff.
>
> However, the unix timestamps should stay (or anything equivalent that
> measures real world seconds, but since any epoch will do for that purpose
> and we've got the unix one in use I'd stay with it). They represent an
> absolute timeframe and let one do direct arithmetic. If I'm not doing
> calendar things (or only doing them for presentation) then the unix
> timestamp is usually my preferred time item.

I agree with this sentiment. The UNIX timestamp stuff should stay in
time, the time tuple stuff should just go, and datetime should be
fleshed out to handle all the stuff that is not a direct wrapping of
libc. That way people deal with accurate datetimes and with
well-understood concepts: UNIX timestamps and datetime objects.
Time tuples are just not accurate enough.

Datetime objects can keep the ability to export and import time
tuples for extensions that need to interface with 'struct tm' code,
but otherwise the time tuple should be considered a lossy datetime
encoding that we do not really support; otherwise we will constantly
be trying to make the time tuple more accurate when it was simply
not well designed.
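The lossiness is easy to demonstrate (a sketch): a time tuple has no
sub-second field, so a round trip through it silently drops microseconds.

```python
from datetime import datetime

dt = datetime(2010, 6, 16, 12, 30, 45, 123456)
tt = dt.timetuple()       # 9 fields, none of them sub-second
dt2 = datetime(*tt[:6])   # the usual round trip back to a datetime

# dt2 is dt with its microseconds silently zeroed
```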


From mal at egenix.com  Wed Jun 16 09:56:08 2010
From: mal at egenix.com (M.-A. Lemburg)
Date: Wed, 16 Jun 2010 09:56:08 +0200
Subject: [Python-ideas] Rename time module to "posixtime"
In-Reply-To: <AANLkTikSDQvIo_t4SlInlpOf9zUjTp65HscKl_WKZjhY@mail.gmail.com>
References: <AANLkTikG0VQnq-zwKSByW2rqc4_w9l1oWIFdS2QNmOLw@mail.gmail.com>
	<20100615230102.GA8439@cskk.homeip.net>
	<AANLkTikSDQvIo_t4SlInlpOf9zUjTp65HscKl_WKZjhY@mail.gmail.com>
Message-ID: <4C188398.9010706@egenix.com>

Brett Cannon wrote:
> On Tue, Jun 15, 2010 at 16:01, Cameron Simpson <cs at zip.com.au> wrote:
>> On 15Jun2010 10:47, Alexander Belopolsky <alexander.belopolsky at gmail.com> wrote:
>> | On Tue, Jun 15, 2010 at 4:07 AM, M.-A. Lemburg <mal at egenix.com> wrote:
>> | > Alexander Belopolsky wrote:
>> | >> One of the common complains about working with time values in Python,
>> | >> is that it some functionality is available in time module, some in
>> | >> datetime module and some in both.
>> | ..
>> | > I'm not sure I understand the point in renaming the module.
>> |
>> | I've reread my post and have to admit that I did not explain this
>> | point clearly.  There are currently three different ways to represent
>> | a point in time: datetime object, unix timestamp, and a 9-element time
>> | tuple.  While the datetime module has its share of criticism, its
>> | interfaces are more user friendly and more "pythonic" than C inspired
>> | time module interfaces.
>>
>> Personally, I would be happy to see unix-timestamp and datetime object,
>> and see the time tuples go away.
>>
>> The tuples are a direct mirror of the unix "struct tm" structures and and
>> should really only be visible in a "posixtime" module of some kind - the
>> datetime objects are their direct equivalents anyway to my mind and should be
>> what are dealt with for human calendar stuff.
>>
>> However, the unix timestamps should stay (or anything equivalent that
>> measures real world seconds, but since any epoch will do for that purpose
>> and we've got the unix one in use I'd stay with it). They represent an
>> absolute timeframe and let one do direct arithmetic. If I'm not doing
>> calendar things (or only doing them for presentation) then the unix
>> timestamp is usually my preferred time item.
> 
> I agree with this sentiment. The UNIX timestamp stuff should stay in
> time, the time tuple stuff should just go, and datetime should be
> fleshed out to handle all the stuff that is not a direct wrapping
> around libc. That way people deal with accurate datetimes as well as
> well understood concepts with UNIX timestamps and datetime objects.
> Time tuples are just not accurate enough.
>
> Datetime objects can keep the ability to export and import from time
> tuples for extensions that need to interface with 'struct tm' code,
> but otherwise it should be considered a lossy datetime encoding that
> we do not really support else we are going to constantly be trying to
> fix the time tuple to be more accurate when it was simply just not
> well designed.

-1.

Please note that the time module provides access to low-level OS
provided services which the datetime module does not expose.

You cannot seriously expect an application which happily uses
the time module (only) for its limited date/time functionality
to be rewritten just to stay compatible with Python.

Note that not all applications are interested in sub-second
accuracy and a computer without properly configured NTP and good
internal clock doesn't even provide this accuracy to begin with
(even if they happily pretend they do by exposing sub-second
floats).

You might want to do that for Python4 and then add all those
time module functions using struct_time to the datetime
module (returning datetime instances), but for Python3, we've
had the stdlib reorg already.

Renaming time -> posixtime falls into the same category.

The only improvement I could see, would be to move
calendar.timegm() to the time module, since that's where
it belongs (keeping an alias in the calendar module, of
course).

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 16 2010)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2010-07-19: EuroPython 2010, Birmingham, UK                32 days to go

::: Try our new mxODBC.Connect Python Database Interface for free ! ::::


   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
    D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
           Registered at Amtsgericht Duesseldorf: HRB 46611
               http://www.egenix.com/company/contact/


From solipsis at pitrou.net  Wed Jun 16 11:45:44 2010
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 16 Jun 2010 11:45:44 +0200
Subject: [Python-ideas] Rename time module to "posixtime"
References: <AANLkTikG0VQnq-zwKSByW2rqc4_w9l1oWIFdS2QNmOLw@mail.gmail.com>
	<20100615230102.GA8439@cskk.homeip.net>
	<AANLkTikSDQvIo_t4SlInlpOf9zUjTp65HscKl_WKZjhY@mail.gmail.com>
Message-ID: <20100616114544.14696040@pitrou.net>

On Tue, 15 Jun 2010 23:03:06 -0700
Brett Cannon <brett at python.org> wrote:
> 
> I agree with this sentiment. The UNIX timestamp stuff should stay in
> time, the time tuple stuff should just go, and datetime should be
> fleshed out to handle all the stuff that is not a direct wrapping
> around libc. That way people deal with accurate datetimes as well as
> well understood concepts with UNIX timestamps and datetime objects.

Agreed.

What? We all agree?




From mal at egenix.com  Wed Jun 16 11:59:20 2010
From: mal at egenix.com (M.-A. Lemburg)
Date: Wed, 16 Jun 2010 11:59:20 +0200
Subject: [Python-ideas] Rename time module to "posixtime"
In-Reply-To: <20100616114544.14696040@pitrou.net>
References: <AANLkTikG0VQnq-zwKSByW2rqc4_w9l1oWIFdS2QNmOLw@mail.gmail.com>	<20100615230102.GA8439@cskk.homeip.net>	<AANLkTikSDQvIo_t4SlInlpOf9zUjTp65HscKl_WKZjhY@mail.gmail.com>
	<20100616114544.14696040@pitrou.net>
Message-ID: <4C18A078.8060001@egenix.com>

Antoine Pitrou wrote:
> On Tue, 15 Jun 2010 23:03:06 -0700
> Brett Cannon <brett at python.org> wrote:
>>
>> I agree with this sentiment. The UNIX timestamp stuff should stay in
>> time, the time tuple stuff should just go, and datetime should be
>> fleshed out to handle all the stuff that is not a direct wrapping
>> around libc. That way people deal with accurate datetimes as well as
>> well understood concepts with UNIX timestamps and datetime objects.
> 
> Agreed.
> 
> What? We all agree?

I don't :-)

We've done the stdlib reorg already, now it's time to focus on
improving what's there, not removing things.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 16 2010)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2010-07-19: EuroPython 2010, Birmingham, UK                32 days to go

::: Try our new mxODBC.Connect Python Database Interface for free ! ::::


   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
    D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
           Registered at Amtsgericht Duesseldorf: HRB 46611
               http://www.egenix.com/company/contact/


From fuzzyman at voidspace.org.uk  Wed Jun 16 13:09:51 2010
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Wed, 16 Jun 2010 12:09:51 +0100
Subject: [Python-ideas] Rename time module to "posixtime"
In-Reply-To: <4C18A078.8060001@egenix.com>
References: <AANLkTikG0VQnq-zwKSByW2rqc4_w9l1oWIFdS2QNmOLw@mail.gmail.com>
	<20100615230102.GA8439@cskk.homeip.net>
	<AANLkTikSDQvIo_t4SlInlpOf9zUjTp65HscKl_WKZjhY@mail.gmail.com>
	<20100616114544.14696040@pitrou.net> <4C18A078.8060001@egenix.com>
Message-ID: <AANLkTimxUP1mdLsI7sfFr_jweUhfMUopoooUIC8jEfJC@mail.gmail.com>

On 16 June 2010 10:59, M.-A. Lemburg <mal at egenix.com> wrote:

> Antoine Pitrou wrote:
> > On Tue, 15 Jun 2010 23:03:06 -0700
> > Brett Cannon <brett at python.org> wrote:
> >>
> >> I agree with this sentiment. The UNIX timestamp stuff should stay in
> >> time, the time tuple stuff should just go, and datetime should be
> >> fleshed out to handle all the stuff that is not a direct wrapping
> >> around libc. That way people deal with accurate datetimes as well as
> >> well understood concepts with UNIX timestamps and datetime objects.
> >
> > Agreed.
> >
> > What? We all agree?
>
> I don't :-)
>
> We've done the stdlib reorg already, now it's time to focus on
> improving what's there, not removing things.
>


The standard library will continue to evolve in Python 3 though, with both
additions and deprecations - following our standard deprecation policy of
course.

Michael


>
> --
> Marc-Andre Lemburg
> eGenix.com
>
> Professional Python Services directly from the Source  (#1, Jun 16 2010)
> >>> Python/Zope Consulting and Support ...        http://www.egenix.com/
> >>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
> >>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
> ________________________________________________________________________
> 2010-07-19: EuroPython 2010, Birmingham, UK                32 days to go
>
> ::: Try our new mxODBC.Connect Python Database Interface for free ! ::::
>
>
>   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
>    D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
>           Registered at Amtsgericht Duesseldorf: HRB 46611
>               http://www.egenix.com/company/contact/
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> http://mail.python.org/mailman/listinfo/python-ideas
>



-- 
http://www.voidspace.org.uk

From solipsis at pitrou.net  Wed Jun 16 13:25:32 2010
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 16 Jun 2010 13:25:32 +0200
Subject: [Python-ideas] Rename time module to "posixtime"
References: <AANLkTikG0VQnq-zwKSByW2rqc4_w9l1oWIFdS2QNmOLw@mail.gmail.com>
	<20100615230102.GA8439@cskk.homeip.net>
	<AANLkTikSDQvIo_t4SlInlpOf9zUjTp65HscKl_WKZjhY@mail.gmail.com>
	<20100616114544.14696040@pitrou.net> <4C18A078.8060001@egenix.com>
Message-ID: <20100616132532.3f66d618@pitrou.net>

On Wed, 16 Jun 2010 11:59:20 +0200
"M.-A. Lemburg" <mal at egenix.com> wrote:
> Antoine Pitrou wrote:
> > On Tue, 15 Jun 2010 23:03:06 -0700
> > Brett Cannon <brett at python.org> wrote:
> >>
> >> I agree with this sentiment. The UNIX timestamp stuff should stay in
> >> time, the time tuple stuff should just go, and datetime should be
> >> fleshed out to handle all the stuff that is not a direct wrapping
> >> around libc. That way people deal with accurate datetimes as well as
> >> well understood concepts with UNIX timestamps and datetime objects.
> > 
> > Agreed.
> > 
> > What? We all agree?
> 
> I don't :-)
> 
> We've done the stdlib reorg already, now it's time to focus on
> improving what's there, not removing things.

Well, I agree that adding functionality is what's mostly needed in the
date/time area right now. Removing old stuff should be quite low
priority compared to that.

(perhaps I should stop agreeing with everyone, sorry :-))

Regards

Antoine.




From alexander.belopolsky at gmail.com  Wed Jun 16 15:44:59 2010
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Wed, 16 Jun 2010 09:44:59 -0400
Subject: [Python-ideas] Rename time module to "posixtime"
In-Reply-To: <20100616132532.3f66d618@pitrou.net>
References: <AANLkTikG0VQnq-zwKSByW2rqc4_w9l1oWIFdS2QNmOLw@mail.gmail.com>
	<20100615230102.GA8439@cskk.homeip.net>
	<AANLkTikSDQvIo_t4SlInlpOf9zUjTp65HscKl_WKZjhY@mail.gmail.com>
	<20100616114544.14696040@pitrou.net> <4C18A078.8060001@egenix.com>
	<20100616132532.3f66d618@pitrou.net>
Message-ID: <AANLkTilHMkXwvCN1ji4a4t0yG0oT2-pwOeoFqRp89al4@mail.gmail.com>

On Wed, Jun 16, 2010 at 7:25 AM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> On Wed, 16 Jun 2010 11:59:20 +0200
> "M.-A. Lemburg" <mal at egenix.com> wrote:
>> Antoine Pitrou wrote:
>> > On Tue, 15 Jun 2010 23:03:06 -0700
>> > Brett Cannon <brett at python.org> wrote:
>> >>
>> >> I agree with this sentiment. The UNIX timestamp stuff should stay in
>> >> time, the time tuple stuff should just go, and datetime should be
>> >> fleshed out to handle all the stuff that is not a direct wrapping
>> >> around libc. That way people deal with accurate datetimes as well as
>> >> well understood concepts with UNIX timestamps and datetime objects.
>> >
>> > Agreed.
>> >
>> > What? We all agree?
>>
>> I don't :-)
>>
>> We've done the stdlib reorg already, now it's time to focus on
>> improving what's there, not removing things.

I don't either. :-)

I am not proposing to eliminate any functionality.  My proposal is
primarily driven by the desire to untangle the low-level circular
dependency between the time and datetime modules, and to clarify the
purpose of keeping functionality in the time module that duplicates
what is in datetime.

Another part of my proposal is to provide pure-Python implementations
of the time module functions in terms of the datetime API.  This will
serve as both executable documentation and a best-practices guide.
(Assuming the best practice is to use the datetime module exclusively.)

Let me repeat the three step proposal:

1. Create posixtime.py, initially containing just "from time import *".
2. Add pure-Python implementations of the time.* functions to posixtime.py.
3. Rename the time module to _posixtime and add a time.py with a
deprecation warning and "from _posixtime import *".
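For step 2, a function like time.gmtime could be re-expressed in terms
of datetime alone; a hypothetical sketch (the names follow the proposal,
not any shipped code):

```python
from datetime import datetime, timedelta

_EPOCH = datetime(1970, 1, 1)

def gmtime(seconds=None):
    """Pure-Python analogue of time.gmtime, via datetime arithmetic."""
    if seconds is None:
        return datetime.utcnow().timetuple()
    # note: timetuple() reports tm_isdst as -1 where time.gmtime reports 0
    return (_EPOCH + timedelta(seconds=seconds)).timetuple()
```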

I would not mind keeping time.py indefinitely with or without
deprecation warnings.


From merwok at netwok.org  Wed Jun 16 15:54:45 2010
From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=)
Date: Wed, 16 Jun 2010 15:54:45 +0200
Subject: [Python-ideas] Rename time module to "posixtime"
In-Reply-To: <AANLkTilHMkXwvCN1ji4a4t0yG0oT2-pwOeoFqRp89al4@mail.gmail.com>
References: <AANLkTikG0VQnq-zwKSByW2rqc4_w9l1oWIFdS2QNmOLw@mail.gmail.com>	<20100615230102.GA8439@cskk.homeip.net>	<AANLkTikSDQvIo_t4SlInlpOf9zUjTp65HscKl_WKZjhY@mail.gmail.com>	<20100616114544.14696040@pitrou.net>
	<4C18A078.8060001@egenix.com>	<20100616132532.3f66d618@pitrou.net>
	<AANLkTilHMkXwvCN1ji4a4t0yG0oT2-pwOeoFqRp89al4@mail.gmail.com>
Message-ID: <4C18D7A5.1090609@netwok.org>

> I would not mind keeping time.py indefinitely with or without
> deprecation warnings.

As long as the documentation points beginners to datetime, and people
looking for a unixy API to posixtime, I agree that deprecation warnings
are not mandatory.



From bruce at leapyear.org  Wed Jun 16 16:48:23 2010
From: bruce at leapyear.org (Bruce Leban)
Date: Wed, 16 Jun 2010 07:48:23 -0700
Subject: [Python-ideas] Rename time module to "posixtime"
In-Reply-To: <AANLkTinQIrn-Z6Q2eLz4Gv-yL4nzD_hCrhMDls7PZ0vF@mail.gmail.com>
References: <AANLkTikG0VQnq-zwKSByW2rqc4_w9l1oWIFdS2QNmOLw@mail.gmail.com>
	<20100615230102.GA8439@cskk.homeip.net>
	<AANLkTikSDQvIo_t4SlInlpOf9zUjTp65HscKl_WKZjhY@mail.gmail.com>
	<4C188398.9010706@egenix.com>
	<AANLkTinQIrn-Z6Q2eLz4Gv-yL4nzD_hCrhMDls7PZ0vF@mail.gmail.com>
Message-ID: <AANLkTin7_yb_QfL8OuUa1t7x6C0AWbP3RPicaxu9hDu8@mail.gmail.com>

-1 to moving anything

The situation is confusing and moving things will add to that confusion for
a significant length of time.

What I would instead suggest is improving the docs. If I could look in one
place to find any time function it would mitigate the fact that they're
implemented in multiple places.

--- Bruce
(via android)

On Jun 16, 2010 12:56 AM, "M.-A. Lemburg" <mal at egenix.com> wrote:

Brett Cannon wrote:
> On Tue, Jun 15, 2010 at 16:01, Cameron Simpson <cs at zip.com.au> wrote:
>> On 15...
-1.

Please note that the time module provides access to low-level OS
provided services which the datetime module does not expose.

You cannot seriously expect an application which happily uses
the time module (only) for its limited date/time functionality
to be rewritten just to stay compatible with Python.

Note that not all applications are interested in sub-second
accuracy and a computer without properly configured NTP and good
internal clock doesn't even provide this accuracy to begin with
(even if they happily pretend they do by exposing sub-second
floats).

You might want to do that for Python4 and then add all those
time module functions using struct_time to the datetime
module (returning datetime instances), but for Python3, we've
had the stdlib reorg already.

Renaming time -> posixtime falls into the same category.

The only improvement I could see, would be to move
calendar.timegm() to the time module, since that's where
it belongs (keeping an alias in the calendar module, of
course).


-- 
Marc-Andre Lemburg
eGenix.com
Professional Python Services directly from the Source  (#1, Jun 16 2010)

>>> Python/Zope Consulting and Support ... http://www.egenix.com/
>>> mxODBC.Zope.Database.Ad...
2010-07-19: EuroPython 2010, Birmingham, UK                32 days to go


::: Try our new mxODBC.Connect Python Database Interface for free ! ::::


eGenix.com Software, ...

Python-ideas mailing list
Python-ideas at python.org
http://mail.python.org/mailman/listinfo/python-ide...

From alexander.belopolsky at gmail.com  Wed Jun 16 17:25:59 2010
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Wed, 16 Jun 2010 11:25:59 -0400
Subject: [Python-ideas] Rename time module to "posixtime"
In-Reply-To: <AANLkTin7_yb_QfL8OuUa1t7x6C0AWbP3RPicaxu9hDu8@mail.gmail.com>
References: <AANLkTikG0VQnq-zwKSByW2rqc4_w9l1oWIFdS2QNmOLw@mail.gmail.com>
	<20100615230102.GA8439@cskk.homeip.net>
	<AANLkTikSDQvIo_t4SlInlpOf9zUjTp65HscKl_WKZjhY@mail.gmail.com>
	<4C188398.9010706@egenix.com>
	<AANLkTinQIrn-Z6Q2eLz4Gv-yL4nzD_hCrhMDls7PZ0vF@mail.gmail.com>
	<AANLkTin7_yb_QfL8OuUa1t7x6C0AWbP3RPicaxu9hDu8@mail.gmail.com>
Message-ID: <AANLkTikVimsQFKGFoY7VC5uW2sHciFYpf4b1CiMrpfJm@mail.gmail.com>

On Wed, Jun 16, 2010 at 10:48 AM, Bruce Leban <bruce at leapyear.org> wrote:
> -1 to moving anything
>

I am getting the feeling that you are attacking a strawman.  Deprecating
the time module in favor of posixtime is only the third part of my
proposal, and I don't insist on any particular deprecation schedule.
All I want is to give users one obvious way to avoid the conflict
between time and datetime.time.  Note that since datetime only defines
a handful of module-level names, it is quite common to see
"from datetime import date, datetime, time", and it is quite confusing
when this conflicts with "import time".
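The clash is mechanical: whichever import runs last wins the name (a
minimal sketch):

```python
import time
from datetime import time   # rebinds the name: 'time' is now the class

# time.time() no longer exists; the datetime.time class has no such attribute
shadowed = False
try:
    time.time()
except AttributeError:
    shadowed = True
```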

> The situation is confusing and moving things will add to that confusion for
> a significant length of time.
>
Let's be constructive.  What specifically do you find confusing?  Do
you agree with the list of confusing things that I listed in my
previous posts?

> What I would instead suggest is improving the docs. If I could look in one
> place to find any time function it would mitigate the fact that they're
> implemented in multiple places.

I think having datetime.datetime.strftime and time.strftime documented
in one place will not help anyone.  And I am not even mentioning
datetime.time.strftime, which is of course not the same as
time.strftime, or that the {date,datetime,time}.strftime methods take
the date/time object first and the format last, while the time
module's strftime takes them the other way around.  Etc., etc.
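The argument-order mismatch in concrete form (a sketch; both calls
format the same instant):

```python
import time
from datetime import datetime

dt = datetime(2010, 6, 16, 15, 22, 15)
a = dt.strftime("%Y-%m-%d %H:%M:%S")                    # object first, format last
b = time.strftime("%Y-%m-%d %H:%M:%S", dt.timetuple()) # format first, tuple last
```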

I think most users will be happier not knowing about the time module's
strftime function.  Even better, not knowing about strftime at all and
using "..".format(dt) instead.

And where in the docs would you explain the following: :-)

>>> from datetime import datetime
>>> import time
>>> time.strftime("%c %z %Z", datetime.utcnow().utctimetuple())
'Wed Jun 16 15:22:15 2010 -0500 EST'

(Note utc in datetime calls and EST in time.strftime output.)


From orsenthil at gmail.com  Wed Jun 16 19:10:00 2010
From: orsenthil at gmail.com (Senthil Kumaran)
Date: Wed, 16 Jun 2010 22:40:00 +0530
Subject: [Python-ideas] Rename time module to "posixtime"
In-Reply-To: <AANLkTilHMkXwvCN1ji4a4t0yG0oT2-pwOeoFqRp89al4@mail.gmail.com>
References: <AANLkTikG0VQnq-zwKSByW2rqc4_w9l1oWIFdS2QNmOLw@mail.gmail.com>
	<20100615230102.GA8439@cskk.homeip.net>
	<AANLkTikSDQvIo_t4SlInlpOf9zUjTp65HscKl_WKZjhY@mail.gmail.com>
	<20100616114544.14696040@pitrou.net> <4C18A078.8060001@egenix.com>
	<20100616132532.3f66d618@pitrou.net>
	<AANLkTilHMkXwvCN1ji4a4t0yG0oT2-pwOeoFqRp89al4@mail.gmail.com>
Message-ID: <20100616171000.GA5731@remy>

On Wed, Jun 16, 2010 at 09:44:59AM -0400, Alexander Belopolsky wrote:
> 
> I am not proposing to eliminate any functionality.  My proposal is
> primarily driven by the desire to untangle low level circular
> dependency between time and datetime modules and to clarify the
> purpose of keeping functionality in time module that duplicates that
> in datetime.
> 
> Another part of my proposal is to provide pure python implementation
> for time module functions in terms of datetime API.  This will serve
> as both executable documentation and best practices guide.  (Assuming
> the best practice is to use datetime module exclusively.)
> 

This is a clear plan you have for datetime plus (possibly) a pure-Python
posixtime module.  The reference implementation as well as the
documentation will definitely be beneficial in the long run.

So, +1 for your proposal.

> I would not mind keeping time.py indefinitely with or without
> deprecation warnings.

Yeah, this would ensure backwards compatibility.

-- 
Senthil

Have a place for everything and keep the thing somewhere else; this is not
advice, it is merely custom.
		-- Mark Twain


From brett at python.org  Wed Jun 16 19:37:42 2010
From: brett at python.org (Brett Cannon)
Date: Wed, 16 Jun 2010 10:37:42 -0700
Subject: [Python-ideas] Rename time module to "posixtime"
In-Reply-To: <4C188398.9010706@egenix.com>
References: <AANLkTikG0VQnq-zwKSByW2rqc4_w9l1oWIFdS2QNmOLw@mail.gmail.com> 
	<20100615230102.GA8439@cskk.homeip.net>
	<AANLkTikSDQvIo_t4SlInlpOf9zUjTp65HscKl_WKZjhY@mail.gmail.com> 
	<4C188398.9010706@egenix.com>
Message-ID: <AANLkTik99USbgzSSRcdgqNNQdqoZcvVu5BRridokB7cC@mail.gmail.com>

On Wed, Jun 16, 2010 at 00:56, M.-A. Lemburg <mal at egenix.com> wrote:
> Brett Cannon wrote:
>> On Tue, Jun 15, 2010 at 16:01, Cameron Simpson <cs at zip.com.au> wrote:
>>> On 15Jun2010 10:47, Alexander Belopolsky <alexander.belopolsky at gmail.com> wrote:
>>> | On Tue, Jun 15, 2010 at 4:07 AM, M.-A. Lemburg <mal at egenix.com> wrote:
>>> | > Alexander Belopolsky wrote:
>>> | >> One of the common complains about working with time values in Python,
>>> | >> is that it some functionality is available in time module, some in
>>> | >> datetime module and some in both.
>>> | ..
>>> | > I'm not sure I understand the point in renaming the module.
>>> |
>>> | I've reread my post and have to admit that I did not explain this
>>> | point clearly. ?There are currently three different ways to represent
>>> | a point in time: datetime object, unix timestamp, and a 9-element time
>>> | tuple. ?While the datetime module has its share of criticism, its
>>> | interfaces are more user friendly and more "pythonic" than C inspired
>>> | time module interfaces.
>>>
>>> Personally, I would be happy to see unix-timestamp and datetime object,
>>> and see the time tuples go away.
>>>
>>> The tuples are a direct mirror of the unix "struct tm" structures and and
>>> should really only be visible in a "posixtime" module of some kind - the
>>> datetime objects are their direct equivalents anyway to my mind and should be
>>> what are dealt with for human calendar stuff.
>>>
>>> However, the unix timestamps should stay (or anything equivalent that
>>> measures real world seconds, but since any epoch will do for that purpose
>>> and we've got the unix one in use I'd stay with it). They represent an
>>> absolute timeframe and let one do direct arithmetic. If I'm not doing
>>> calendar things (or only doing them for presentation) then the unix
>>> timestamp is usually my preferred time item.
>>
>> I agree with this sentiment. The UNIX timestamp stuff should stay in
>> time, the time tuple stuff should just go, and datetime should be
>> fleshed out to handle all the stuff that is not a direct wrapping
>> around libc. That way people deal with accurate datetimes as well as
>> well understood concepts with UNIX timestamps and datetime objects.
>> Time tuples are just not accurate enough.
>>
>> Datetime objects can keep the ability to export and import from time
>> tuples for extensions that need to interface with 'struct tm' code,
>> but otherwise it should be considered a lossy datetime encoding that
>> we do not really support else we are going to constantly be trying to
>> fix the time tuple to be more accurate when it was simply just not
>> well designed.
>
> -1.
>
> Please note that the time module provides access to low-level OS
> provided services which the datetime module does not expose.
>
> You cannot seriously expect an application which happily uses
> the time module (only) for its limited date/time functionality
> to have to be rewritten just to stay compatible with Python.

No, but the work to move people off of time tuples and over to
datetime objects or timestamps can start so that the next stdlib reorg
can drop time tuples without causing major pains.
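The lossiness of the tuple encoding is easy to demonstrate: round-tripping a timestamp through a struct_time silently drops the fractional seconds (a quick sketch; the timestamp value is arbitrary):

```python
import calendar
from datetime import datetime, timezone

ts = 1276700086.25  # an arbitrary UNIX timestamp with fractional seconds
dt = datetime.fromtimestamp(ts, tz=timezone.utc)  # full precision preserved
tt = dt.utctimetuple()  # 9-element time tuple: microseconds are gone
assert calendar.timegm(tt) == 1276700086  # the .25 did not survive
```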

>
> Note that not all applications are interested in sub-second
> accuracy, and a computer without properly configured NTP and a good
> internal clock doesn't even provide this accuracy to begin with
> (even if they happily pretend they do by exposing sub-second
> floats).
>
> You might want to do that for Python4 and then add all those
> time module functions using struct_time to the datetime
> module (returning datetime instances), but for Python3, we've
> had the stdlib reorg already.
>
> Renaming time -> posixtime falls into the same category.
>

I don't care as much about the rename as I do about losing time tuples
in the long run.

> The only improvement I could see, would be to move
> calendar.timegm() to the time module, since that's where
> it belongs (keeping an alias in the calendar module, of
> course).

That should definitely happen at some point.


From alexander.belopolsky at gmail.com  Wed Jun 16 20:14:46 2010
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Wed, 16 Jun 2010 14:14:46 -0400
Subject: [Python-ideas] Moving calendar.timegm() to time module Was: Rename
	time module to "posixtime"
Message-ID: <AANLkTineFpZmlF7IkhNJpzoMlS7Vhsku8SE79t36K9qL@mail.gmail.com>

On Wed, Jun 16, 2010 at 1:37 PM, Brett Cannon <brett at python.org> wrote:
..
>> The only improvement I could see, would be to move
>> calendar.timegm() to the time module, since that's where
>> it belongs (keeping an alias in the calendar module, of
>> course).
>
> That should definitely happen at some point.
>

This is discussed in Issue 6280 <http://bugs.python.org/issue6280>.
There are several issues with this proposal:

1. According to help(time),

"""
    The Epoch is system-defined; on Unix, it is generally January 1st, 1970.
    The actual value can be retrieved by calling gmtime(0).
"""

The current calendar.gmtime implementation ignores this.  The solution to
this may be to change help(time), though.

2. Current calendar.gmtime supports float values for hours, minutes
and seconds in timedelta tuple.  This is probably an unintended
implementation artifact, but it is relied upon even in stdlib.  See
http://bugs.python.org/issue6280#msg107808 .
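Both points can be checked interactively (a sketch; the float behavior is an observed CPython artifact, not a documented guarantee):

```python
import calendar
import time

# 1. timegm() assumes the 1970 epoch regardless of what time.gmtime(0) reports
now = int(time.time())
assert calendar.timegm(time.gmtime(now)) == now  # UTC inverse of gmtime()

# 2. the accidental float support: a float in the seconds field passes through
assert calendar.timegm((1970, 1, 1, 0, 0, 30.5, 0, 0, 0)) == 30.5
```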


From tjreedy at udel.edu  Wed Jun 16 20:56:14 2010
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed, 16 Jun 2010 14:56:14 -0400
Subject: [Python-ideas] Rename time module to "posixtime"
In-Reply-To: <4C18A078.8060001@egenix.com>
References: <AANLkTikG0VQnq-zwKSByW2rqc4_w9l1oWIFdS2QNmOLw@mail.gmail.com>	<20100615230102.GA8439@cskk.homeip.net>	<AANLkTikSDQvIo_t4SlInlpOf9zUjTp65HscKl_WKZjhY@mail.gmail.com>	<20100616114544.14696040@pitrou.net>
	<4C18A078.8060001@egenix.com>
Message-ID: <hvb6of$tlk$1@dough.gmane.org>

On 6/16/2010 5:59 AM, M.-A. Lemburg wrote:

> We've done the stdlib reorg already, now it's time to focus on
> improving what's there, not removing things.

I believe deprecating (in the docs, at least) confusing near duplicates 
would be an improvement for newcomers.




From solipsis at pitrou.net  Wed Jun 16 21:01:26 2010
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Wed, 16 Jun 2010 21:01:26 +0200
Subject: [Python-ideas] Moving calendar.timegm() to time module Was:
 Rename time module to "posixtime"
References: <AANLkTineFpZmlF7IkhNJpzoMlS7Vhsku8SE79t36K9qL@mail.gmail.com>
Message-ID: <20100616210126.4a40a1db@pitrou.net>

On Wed, 16 Jun 2010 14:14:46 -0400
Alexander Belopolsky
<alexander.belopolsky at gmail.com> wrote:
> 
> This is discussed in Issue 6280 <http://bugs.python.org/issue6280>.
> There are several issues with this proposal:
> 
> 1. According to help(time),
> 
> """
>     The Epoch is system-defined; on Unix, it is generally January 1st, 1970.

What does *generally* mean? Are there, practically, other systems where
the epoch is another reference?





From mal at egenix.com  Wed Jun 16 21:49:18 2010
From: mal at egenix.com (M.-A. Lemburg)
Date: Wed, 16 Jun 2010 21:49:18 +0200
Subject: [Python-ideas] Moving calendar.timegm() to time module Was:
 Rename time module to "posixtime"
In-Reply-To: <AANLkTineFpZmlF7IkhNJpzoMlS7Vhsku8SE79t36K9qL@mail.gmail.com>
References: <AANLkTineFpZmlF7IkhNJpzoMlS7Vhsku8SE79t36K9qL@mail.gmail.com>
Message-ID: <4C192ABE.3000000@egenix.com>

Alexander Belopolsky wrote:
> On Wed, Jun 16, 2010 at 1:37 PM, Brett Cannon <brett at python.org> wrote:
> ..
>>> The only improvement I could see, would be to move
>>> calendar.timegm() to the time module, since that's where
>>> it belongs (keeping an alias in the calendar module, of
>>> course).
>>
>> That should definitely happen at some point.
>>
> 
> This is discussed in Issue 6280 <http://bugs.python.org/issue6280>.
> There are several issues with this proposal:
> 
> 1. According to help(time),
> 
> """
>     The Epoch is system-defined; on Unix, it is generally January 1st, 1970.
>     The actual value can be retrieved by calling gmtime(0).
> """
> 
> The current calendar.gmtime implementation ignores this.  The solution to
> this may be to change help(time), though.
> 
> 2. Current calendar.gmtime supports float values for hours, minutes
> and seconds in timedelta tuple.  This is probably an unintended
> implementation artifact, but it is relied upon even in stdlib.  See
> http://bugs.python.org/issue6280#msg107808 .

I think you have this mixed up: I was suggesting to
move calendar.timegm() to the time module, not
the non-existing calendar.gmtime() :-)

If you're looking for a portable implementation in C that doesn't
mess with TZ hacks, have a look at the mxDateTime sources.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 16 2010)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2010-07-19: EuroPython 2010, Birmingham, UK                32 days to go

::: Try our new mxODBC.Connect Python Database Interface for free ! ::::


   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
    D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
           Registered at Amtsgericht Duesseldorf: HRB 46611
               http://www.egenix.com/company/contact/


From mal at egenix.com  Wed Jun 16 22:31:54 2010
From: mal at egenix.com (M.-A. Lemburg)
Date: Wed, 16 Jun 2010 22:31:54 +0200
Subject: [Python-ideas] Rename time module to "posixtime"
In-Reply-To: <hvb6of$tlk$1@dough.gmane.org>
References: <AANLkTikG0VQnq-zwKSByW2rqc4_w9l1oWIFdS2QNmOLw@mail.gmail.com>	<20100615230102.GA8439@cskk.homeip.net>	<AANLkTikSDQvIo_t4SlInlpOf9zUjTp65HscKl_WKZjhY@mail.gmail.com>	<20100616114544.14696040@pitrou.net>	<4C18A078.8060001@egenix.com>
	<hvb6of$tlk$1@dough.gmane.org>
Message-ID: <4C1934BA.4010005@egenix.com>

Terry Reedy wrote:
> On 6/16/2010 5:59 AM, M.-A. Lemburg wrote:
> 
>> We've done the stdlib reorg already, now it's time to focus on
>> improving what's there, not removing things.
> 
> I believe deprecating (in the docs, at least) confusing near duplicates
> would be an improvement for newcomers.

Agreed. It would be useful to add a note to the time module
docs pointing newbies directly to the datetime module. For experts,
the time module is still very useful to have around.

-- 
Marc-Andre Lemburg
eGenix.com



From ncoghlan at gmail.com  Wed Jun 16 23:30:44 2010
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 17 Jun 2010 07:30:44 +1000
Subject: [Python-ideas] Rename time module to "posixtime"
In-Reply-To: <4C1934BA.4010005@egenix.com>
References: <AANLkTikG0VQnq-zwKSByW2rqc4_w9l1oWIFdS2QNmOLw@mail.gmail.com>
	<20100615230102.GA8439@cskk.homeip.net>
	<AANLkTikSDQvIo_t4SlInlpOf9zUjTp65HscKl_WKZjhY@mail.gmail.com>
	<20100616114544.14696040@pitrou.net> <4C18A078.8060001@egenix.com>
	<hvb6of$tlk$1@dough.gmane.org> <4C1934BA.4010005@egenix.com>
Message-ID: <AANLkTik8tN3cM12pq_O6c6QjsIw7dhwSuBEzdZKaABut@mail.gmail.com>

On Thu, Jun 17, 2010 at 6:31 AM, M.-A. Lemburg <mal at egenix.com> wrote:
> Terry Reedy wrote:
>> I believe deprecating (in the docs, at least) confusing near duplicates
>> would be an improvement for newcomers.
>
> Agreed. It would be useful to add a note to the time module
> docs pointing newbies directly to the datetime module. For experts,
> the time module is still very useful to have around.

For myself, I think a long term plan (i.e. Py4k'ish) to move to a
_posixtime/posixtime C/Python hybrid implementation for the POSIX
timestamp manipulation would be beneficial. Largely, as Alexander
points out, to make the distinction between the time module, the
time.time function and datetime.time objects significantly clearer
than it is now:

>>> import time as time1
>>> from time import time as time2
>>> from datetime import time as time3
>>> time1
<module 'time' (built-in)>
>>> time2
<built-in function time>
>>> time3
<type 'datetime.time'>

We can at least cut down the naming conflict to only exist between the
latter two items.

Such a transition could be made in the documentation (with a note in
the "posixtime" documentation to say that it remains available as the
time module for backwards compatibility reasons)  as early as 3.2.

Of course, any functionality gaps identified in datetime would still
need to be closed.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia


From bruce at leapyear.org  Thu Jun 17 07:01:16 2010
From: bruce at leapyear.org (Bruce Leban)
Date: Wed, 16 Jun 2010 22:01:16 -0700
Subject: [Python-ideas] Rename time module to "posixtime"
In-Reply-To: <AANLkTikVimsQFKGFoY7VC5uW2sHciFYpf4b1CiMrpfJm@mail.gmail.com>
References: <AANLkTikG0VQnq-zwKSByW2rqc4_w9l1oWIFdS2QNmOLw@mail.gmail.com> 
	<20100615230102.GA8439@cskk.homeip.net>
	<AANLkTikSDQvIo_t4SlInlpOf9zUjTp65HscKl_WKZjhY@mail.gmail.com> 
	<4C188398.9010706@egenix.com>
	<AANLkTinQIrn-Z6Q2eLz4Gv-yL4nzD_hCrhMDls7PZ0vF@mail.gmail.com> 
	<AANLkTin7_yb_QfL8OuUa1t7x6C0AWbP3RPicaxu9hDu8@mail.gmail.com> 
	<AANLkTikVimsQFKGFoY7VC5uW2sHciFYpf4b1CiMrpfJm@mail.gmail.com>
Message-ID: <AANLkTik458NZIkxaJZAGnWEjXpKL_9f7PjywGl8WuArQ@mail.gmail.com>

OK, let me revise that. There were lots of different things discussed on
this thread. I agree that

    import time
confounds with
    from datetime import time

and it would be nice to fix that. An alias would be reasonable. Change the
documentation to refer to the new name and leave the old name for legacy
apps. I know TOOWTDI but if the time module has two names until python 4 is
that a major problem?
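The confusion being fixed here is easy to reproduce (a quick sketch; whichever name is bound last wins):

```python
from datetime import time  # datetime's time type
import time                # the module now shadows datetime.time

try:
    time(12, 30)  # the user meant datetime.time(12, 30)
except TypeError:
    print("'time' is now the module, not the type")
```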

I'm not in favor of moving things around among the modules etc.

When you say "And where in the docs would you explain the following: :-)"
that sounds like you're saying "this is too confusing we shouldn't document
it." To which I can only say :-(

--- Bruce


On Wed, Jun 16, 2010 at 8:25 AM, Alexander Belopolsky <
alexander.belopolsky at gmail.com> wrote:

> On Wed, Jun 16, 2010 at 10:48 AM, Bruce Leban <bruce at leapyear.org> wrote:
> > -1 to moving anything
> >
>
> I am getting the feeling that you are attacking a strawman.  Deprecating
> the time module in favor of posixtime is only the third part of my
> proposal, and I don't insist on any particular deprecation schedule.
> All I want is to give users one obvious way to avoid the conflict between
> time and datetime.time.  Note that since datetime only defines a
> handful of module-level symbols, it is quite common to see "from
> datetime import date, datetime, time", and it is quite confusing when
> this conflicts with "import time".
>
> > The situation is confusing and moving things will add to that confusion
> > for
> > a significant length of time.
> >
> Let's be constructive.  What specifically do you find confusing?  Do
> you agree with the list of confusing things that I listed in my
> previous posts?
>
> > What I would instead suggest is improving the docs. If I could look in
> > one
> > place to find any time function it would mitigate the fact that they're
> > implemented in multiple places.
>
> I think having datetime.datetime.strftime and time.strftime documented
> in one place will not help anyone.  And I am not even mentioning
> datetime.time.strftime, which is of course not the same as
> time.strftime, or that for the {date,datetime,time}.strftime methods you
> specify the date/time object first and the format last, but for the time
> module's strftime it is the other way around. Etc. etc.
>
> I think most users will be happier not knowing about the time module's
> strftime function.  Even better, not knowing about strftime at all and
> using "..".format(dt) instead.
>
> And where in the docs would you explain the following: :-)
>
> >>> from datetime import datetime
> >>> import time
> >>> time.strftime("%c %z %Z", datetime.utcnow().utctimetuple())
> 'Wed Jun 16 15:22:15 2010 -0500 EST'
>
> (Note utc in datetime calls and EST in time.strftime output.)
>
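The argument-order difference can be sketched side by side (using an arbitrary fixed datetime):

```python
import time
from datetime import datetime

dt = datetime(2010, 6, 16, 15, 22, 15)
s1 = dt.strftime('%Y-%m-%d %H:%M:%S')                    # object first, format as argument
s2 = time.strftime('%Y-%m-%d %H:%M:%S', dt.timetuple())  # format first, tuple second
assert s1 == s2 == '2010-06-16 15:22:15'
```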

From alexander.belopolsky at gmail.com  Thu Jun 17 15:38:02 2010
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Thu, 17 Jun 2010 09:38:02 -0400
Subject: [Python-ideas] Issue9004 Was:Rename time module to "posixtime"
Message-ID: <AANLkTilX0IKYXlPyEJM6z-USxwd18aqHzHTtecbuGqpO@mail.gmail.com>

On Thu, Jun 17, 2010 at 1:01 AM, Bruce Leban <bruce at leapyear.org> wrote:
..
> When you say "And where in the docs would you explain the following: :-)"
> that sounds like you're saying "this is too confusing we shouldn't document
> it." To which I can only say :-(

I presented what I consider to be a bug.  I opened issue 9004 [1],
"datetime.utctimetuple() should not set tm_isdst flag to 0", for that.

There is no point in documenting the following as expected behavior:

>>> time.strftime('%c %z %Z', datetime.utcnow().utctimetuple())
'Wed Jun 16 03:26:26 2010 -0500 EST'

I believe it is better to fix it so that it produces

>>> time.strftime('%c %z %Z', datetime.utcnow().utctimetuple())
'Wed Jun 16 03:26:26 2010  '

instead.

This, however, shows a limitation of the datetime to timetuple conversion:
there is currently no mechanism to store daylight saving time info in a
datetime object.  See issue 9013. [2]  Rather than fixing that, it
would be much better to eliminate the need for datetime to timetuple
conversion in the first place.
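The root cause is visible directly (a quick sketch): utctimetuple() reports tm_isdst as 0 ("not DST") rather than -1 ("unknown"), which is why time.strftime fills in local zone information for what is really a UTC tuple:

```python
from datetime import datetime, timezone

tt = datetime.now(timezone.utc).utctimetuple()
assert tt.tm_isdst == 0  # "not DST", not the "unknown" value -1
```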


[1] http://bugs.python.org/issue9004
[2] http://bugs.python.org/issue9013


From cs at zip.com.au  Fri Jun 18 00:50:28 2010
From: cs at zip.com.au (Cameron Simpson)
Date: Fri, 18 Jun 2010 08:50:28 +1000
Subject: [Python-ideas] Rename time module to "posixtime"
In-Reply-To: <AANLkTik99USbgzSSRcdgqNNQdqoZcvVu5BRridokB7cC@mail.gmail.com>
References: <AANLkTik99USbgzSSRcdgqNNQdqoZcvVu5BRridokB7cC@mail.gmail.com>
Message-ID: <20100617225028.GA19332@cskk.homeip.net>

On 16Jun2010 10:37, Brett Cannon <brett at python.org> wrote:
| On Wed, Jun 16, 2010 at 00:56, M.-A. Lemburg <mal at egenix.com> wrote:
| > Brett Cannon wrote:
| >> On Tue, Jun 15, 2010 at 16:01, Cameron Simpson <cs at zip.com.au> wrote:
| >>> On 15Jun2010 10:47, Alexander Belopolsky <alexander.belopolsky at gmail.com> wrote:
| >>> | I've reread my post and have to admit that I did not explain this
| >>> | point clearly.  There are currently three different ways to represent
| >>> | a point in time: datetime object, unix timestamp, and a 9-element time
| >>> | tuple.  While the datetime module has its share of criticism, its
| >>> | interfaces are more user friendly and more "pythonic" than C inspired
| >>> | time module interfaces.
| >>>
| >>> Personally, I would be happy to see unix-timestamp and datetime object,
| >>> and see the time tuples go away.
[...]
| >> I agree with this sentiment. The UNIX timestamp stuff should stay in
| >> time, the time tuple stuff should just go, and datetime should be
| >> fleshed out to handle all the stuff that is not a direct wrapping
| >> around libc.
[...]
| > -1.
| > Please note that the time module provides access to low-level OS
| > provided services which the datetime module does not expose.
| > You cannot seriously expect an application which happily uses
| > the time module (only) for its limited date/time functionality
| > to have to be rewritten just to stay compatible with Python.
| 
| No, but the work to move people off of time tuples and over to
| datetime objects or timestamps can start so that the next stdlib reorg
| can drop time tuples without causing major pains.

"I agree with this sentiment." :-)

I, also, was insufficiently clear. I don't want any code to break, and
Alexander's proposal describes a non-breaking approach. I would like my
earlier statement to be read as wanting it to be possible to work with
unixtimes and datetimes and never need to use a time tuple, and for the
documentation to direct users to datetimes and unixtimes as the obvious
and sufficient way to do things.

[...]
| I don't care as much about the rename as I do about losing time tuples
| in the long run.
| 
| > The only improvement I could see, would be to move
| > calendar.timegm() to the time module, since that's where
| > it belongs (keeping an alias in the calendar module, of
| > course).
| 
| That should definitely happen at some point.

+1 to the above too. That the "Use the following functions to convert
between time representations" table near the top of the "time" module
documentation reaches for the calendar module grates.

Cheers,
-- 
Cameron Simpson <cs at zip.com.au> DoD#743
http://www.cskk.ezoshosting.com/cs/

In any event, this is a straw herring for debate.
        - solovay at netcom.com (Andrew Solovay)


From danieldelay at gmail.com  Fri Jun 25 21:08:31 2010
From: danieldelay at gmail.com (Daniel DELAY)
Date: Fri, 25 Jun 2010 21:08:31 +0200
Subject: [Python-ideas] explicitation lines in python ?
Message-ID: <4C24FEAF.4030304@gmail.com>

If we could explicitate a too complex expression in an indented next
line, I would use this feature very often :

htmltable = ''.join('<tr>{}</tr>'.format(htmlline) for line in table) :    # main line
    htmlline : ''.join('<td>{}</td>'.format(cell) for cell in line)    # explicitation(s) line(s)


(Sorry if this has already been discussed earlier on this list, I have 
not read all the archives)

*******

in details :

List comprehension "<expression> for x in mylist" often greatly
improves readability of python programs, when <expression> is not too complex.
When <expression> is too complex (e.g. nested lists), this becomes
unreadable, so we have to find another solution :
a) defining a function expression(x), or an iterator function, which will
only be used once in the code
b) or dropping this beautiful syntax and replacing it with the very basic
list construction :
newlist = []
for x in myiterable :
     newlist.append(<expression>)

I often choose b), but I dislike both solutions :
- in solution a) the function def can be far from the list comprehension;
in fact the instructions to build the new list are split between two
different places in the code.
- solution b) seems a bit better to me, but the fact that we build a new
list from myiterable is not visible at a glance, unlike list comprehensions.
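Option (a) might look like this for the htmltable example (a sketch with a hypothetical 2x2 table of cells):

```python
table = [[1, 2], [3, 4]]  # a hypothetical table of cells

def htmlline(line):  # the once-used helper of option (a)
    return ''.join('<td>{}</td>'.format(cell) for cell in line)

htmltable = ''.join('<tr>{}</tr>'.format(htmlline(line)) for line in table)
assert htmltable == '<tr><td>1</td><td>2</td></tr><tr><td>3</td><td>4</td></tr>'
```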

Renouncing list comprehensions occurs rather often when I write python
code.

I think we could greatly improve readability if we could keep list
comprehension anywhere in all cases, but when necessary explicitate a too
complex expression in an indented line :

htmltable = ''.join('<tr>{}</tr>'.format(htmlline) for line in table) :    # main line
    htmlline : ''.join('<td>{}</td>'.format(cell) for cell in line)    # explicitation(s) line(s)

In the case where the main line is the header of a "classical" indented block
(starting with "for", "if", "with"...), this indented block would simply
follow the explicitation(s) line(s).
The explicitation lines can be reliably identified as the lines that
begin with "identifier :"  (when we are not in an unclosed dict).

with open('data.txt') as f :
     if line in enumerate(mylist) :  # main line
         mylist : f.read().strip().lower()    # explicitation(s) line(s)
         print line    # "classical" indented block

Another possible use of "explicitation lines" is a coding style which
would start with "the whole picture" first, completing the details
after, which is the usual way we mentally solve problems.

Let's take an example : we want to write a function which returns a
multiplication table in a simple html document.
When I solve this problem, I think a bit like this :
- I need to return an html page. For that I need a "header" and a
"body". My body will contain an "htmltable", which will be built from a
"table" of numbers, etc.

My code could look like that :

def tag(content, *tags): # little convenient function
     retval = content
     for t in tags:
         retval =  '<{0}>{1}</{0}>'.format(t, retval)
     return retval

def xhtml_mult_table(a, b):
     return tag(header + body, 'html') :
         header :  tag('multiplication table', 'title')
         body : tag(htmltable, 'tbody', 'table', 'body') :
             htmltable : ''.join(tag(xhtmlline, 'tr') for line in table) :
                 table : headerline + otherlines :
                     headerline : [[''] + range(a)]
                     otherlines : [[y] + [x*y for x in range(a)] for y in range(b)]
                 xhtmlline : ''.join(tag(str(cell), 'td') for cell in line)

This example is a "heavy use" of the "explicitation line" feature, to 
illustrate how it could work.

I don't mean this should replace the "classical" syntax everywhere
possible, but this would be for me a nice way to explicitate
complex expressions from time to time, and the ability to use list
comprehensions everywhere I want.

Daniel



From danieldelay at gmail.com  Sat Jun 26 04:25:09 2010
From: danieldelay at gmail.com (Daniel DELAY)
Date: Sat, 26 Jun 2010 04:25:09 +0200
Subject: [Python-ideas] explicitation lines in python ?
In-Reply-To: <4C24FEAF.4030304@gmail.com>
References: <4C24FEAF.4030304@gmail.com>
Message-ID: <4C256505.9000703@gmail.com>

On 25/06/2010 21:08, Daniel DELAY wrote:
> with open('data.txt') as f :
>     if line in enumerate(mylist) :  # main line
>         mylist : f.read().strip().lower()    # explicitation(s) line(s)
>         print line    # "classical" indented block
>
oops, sorry, I meant something like :

with open('data.txt') as f :
     for i, line in enumerate(mylist) :  # main line
         mylist : f.read().split('\n')    # explicitation(s) line(s)
         print(i, line)    # "classical" indented block


From guido at python.org  Sat Jun 26 04:36:15 2010
From: guido at python.org (Guido van Rossum)
Date: Fri, 25 Jun 2010 19:36:15 -0700
Subject: [Python-ideas] explicitation lines in python ?
In-Reply-To: <4C256505.9000703@gmail.com>
References: <4C24FEAF.4030304@gmail.com> <4C256505.9000703@gmail.com>
Message-ID: <AANLkTimV6aM2KN6x3l0iee-n8xR3SOEMI6j9KFh8Tgo8@mail.gmail.com>

On Fri, Jun 25, 2010 at 7:25 PM, Daniel DELAY <danieldelay at gmail.com> wrote:
> On 25/06/2010 21:08, Daniel DELAY wrote:
>>
>> with open('data.txt') as f :
>>     if line in enumerate(mylist) :  # main line
>>         mylist : f.read().strip().lower()    # explicitation(s) line(s)
>>         print line    # "classical" indented block
>>
> oups sorry I meant something like :
>
> with open('data.txt') as f :
>     for i, line in enumerate(mylist) :  # main line
>         mylist : f.read().split('\n')    # explicitation(s) line(s)
>         print(i, line)    # "classical" indented block

I don't know where you got the word "explicitation" -- I've never
heard of it. (Maybe it's French? You sound French. :-) However, this
feature existed in ABC under the name "refinement". See
http://homepages.cwi.nl/~steven/abc/qr.html#Refinements

-- 
--Guido van Rossum (python.org/~guido)


From ghazel at gmail.com  Sat Jun 26 04:48:08 2010
From: ghazel at gmail.com (ghazel at gmail.com)
Date: Fri, 25 Jun 2010 19:48:08 -0700
Subject: [Python-ideas] feature to make traceback objects usable without
	references to frame locals and globals
Message-ID: <AANLkTimIdJDCuOw9n_zu3C7NpwL2rrZr7za16ZsVrVJh@mail.gmail.com>

Hi,

I'm interested in a feature which allows users to discard the locals
and globals references from frames held by a traceback object.

Currently, traceback objects are used when capturing and re-raising
exceptions. However, they hold a reference to all frames, which hold a
reference to their locals and globals. These are not needed by the
default traceback output, and can cause serious memory bloat if a
reference to a traceback object is kept for any significant length of
time, and there are even big red warnings in the Python docs about
using them in one frame. (
http://docs.python.org/release/3.1/library/sys.html#sys.exc_info ).

Example usage would be something like:

import sys
try:
    1/0
except:
    t, v, tb = sys.exc_info()
    tb.clean()
# ... much later ...
raise t, v, tb


Which would be basically a function to do this:

import sys
try:
    1/0
except:
    t, v, tb = sys.exc_info()
    c = tb
    while c:
        c.tb_frame.f_locals = None
        c.tb_frame.f_globals = None
        c = c.tb_next
# ... much later ...
raise t, v, tb


Twisted has done a very similar thing with their
twisted.python.failure.Failure object, which stringifies the traceback
data and discards the reference to the Python traceback entirely (
http://twistedmatrix.com/trac/browser/tags/releases/twisted-10.0.0/twisted/python/failure.py#L437
) - they also replicate a lot of traceback printing functions to make
use of this stringified data.
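The same idea in miniature, using only the stdlib (a sketch: format the traceback to strings right away, then drop the object so the frame chain becomes collectable):

```python
import sys
import traceback

try:
    1 / 0
except ZeroDivisionError:
    t, v, tb = sys.exc_info()
    formatted = traceback.format_exception(t, v, tb)  # strings only
    del tb  # release the frames (and their locals/globals) now

print(''.join(formatted))  # a full traceback can still be shown later
```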

It's worth noting that cgitb and other applications make use of locals
and globals in their traceback output. However, I believe the vast
majority of traceback usage does not make use of these references, and
a significant penalty is paid as a result.

Is there any interest in such a feature?


-Greg


From guido at python.org  Sat Jun 26 04:58:38 2010
From: guido at python.org (Guido van Rossum)
Date: Fri, 25 Jun 2010 19:58:38 -0700
Subject: [Python-ideas] feature to make traceback objects usable without
	references to frame locals and globals
In-Reply-To: <AANLkTimIdJDCuOw9n_zu3C7NpwL2rrZr7za16ZsVrVJh@mail.gmail.com>
References: <AANLkTimIdJDCuOw9n_zu3C7NpwL2rrZr7za16ZsVrVJh@mail.gmail.com>
Message-ID: <AANLkTinPcwW63hlJpOemfBiRbsFPDse-Dyzd71NCy6mr@mail.gmail.com>

Do you have profiling data to support your claim?

On Fri, Jun 25, 2010 at 7:48 PM,  <ghazel at gmail.com> wrote:
> Hi,
>
> I'm interested in a feature which allows users to discard the locals
> and globals references from frames held by a traceback object.
>
> Currently, traceback objects are used when capturing and re-raising
> exceptions. However, they hold a reference to all frames, which hold a
> reference to their locals and globals. These are not needed by the
> default traceback output, and can cause serious memory bloat if a
> reference to a traceback object is kept for any significant length of
> time, and there are even big red warnings in the Python docs about
> using them in one frame. (
> http://docs.python.org/release/3.1/library/sys.html#sys.exc_info ).
>
> Example usage would be something like:
>
> import sys
> try:
>     1/0
> except:
>     t, v, tb = sys.exc_info()
>     tb.clean()
> # ... much later ...
> raise t, v, tb
>
>
> Which would be basically a function to do this:
>
> import sys
> try:
>     1/0
> except:
>     t, v, tb = sys.exc_info()
>     c = tb
>     while c:
>         c.tb_frame.f_locals = None
>         c.tb_frame.f_globals = None
>         c = c.tb_next
> # ... much later ...
> raise t, v, tb
>
>
> Twisted has done a very similar thing with their
> twisted.python.failure.Failure object, which stringifies the traceback
> data and discards the reference to the Python traceback entirely (
> http://twistedmatrix.com/trac/browser/tags/releases/twisted-10.0.0/twisted/python/failure.py#L437
> ) - they also replicate a lot of traceback printing functions to make
> use of this stringified data.
>
> It's worth noting that cgitb and other applications make use of locals
> and globals in its traceback output. However, I believe the vast
> majority of traceback usage does not make use of these references, and
> a significant penalty is paid as a result.
>
> Is there any interest in such a feature?
>
>
> -Greg
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> http://mail.python.org/mailman/listinfo/python-ideas
>



-- 
--Guido van Rossum (python.org/~guido)


From stephen at xemacs.org  Sat Jun 26 04:58:35 2010
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Sat, 26 Jun 2010 11:58:35 +0900
Subject: [Python-ideas]  explicitation lines in python ?
In-Reply-To: <4C24FEAF.4030304@gmail.com>
References: <4C24FEAF.4030304@gmail.com>
Message-ID: <87pqzemrdw.fsf@uwakimon.sk.tsukuba.ac.jp>

Daniel DELAY writes:

 > (Sorry if this has already been discussed earlier on this list, I have 
 > not read all the archives)

I think if you search for "first-class blocks" and "lambdas", or
similar, you'll find related discussion (although not exactly the same
thing).  It also looks very similar to the Haskell "where", maybe
searching for "Haskell where" would bring it up.

 > Renouncing list comprehensions occurs rather often when I write python
 > code
 > 
 > I think we could greatly improve readability if we could keep list 
 > comprehension anywhere in all cases, but when necessary explicitate a too
 > complex expression in an indented line :
 > 
 > htmltable = ''.join( '<tr>{}</tr>'.format(htmlline) for line in table):
 >     htmlline : ''.join( '<td>{}</td>'.format(cell) for cell in line)

(Edited for readability; it was munged by your mail client. ;-)

I'm not sure I like this better than the alternative of rewriting the
outer loops explicitly.  But if you're going to add syntax, I think
the more verbose

    htmltable = ''.join('<tr>{}</tr>'.format(htmlline) for line in table) \
        with htmlline = ''.join('<td>{}</td>'.format(cell) for cell in line)

looks better.  Note that the "with" clause becomes an optional part of
an assignment statement rather than a suite controlled by the
assignment, and the indentation is decorative rather than syntactic.
I considered "as" instead of "=" in the with clause, but preferred the
"=" because that allows nested "with" in a natural way.  (Maybe, I
haven't thought carefully about that at all.)  Obviously "with" was
chosen because it's already a keyword.

I suspect this has been shot down before, though.


From ncoghlan at gmail.com  Sat Jun 26 05:14:31 2010
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 26 Jun 2010 13:14:31 +1000
Subject: [Python-ideas] explicitation lines in python ?
In-Reply-To: <87pqzemrdw.fsf@uwakimon.sk.tsukuba.ac.jp>
References: <4C24FEAF.4030304@gmail.com>
	<87pqzemrdw.fsf@uwakimon.sk.tsukuba.ac.jp>
Message-ID: <AANLkTim4mFyZkX7KnMXjvRfAf9kveCG5n7XZV-ddRlSx@mail.gmail.com>

> I suspect this has been shot down before, though.

Not so much shot down, as "never found a syntax and semantics that
were sufficiently clear".

Looking up 'statement local namespaces' for Python brings up some old
discussions of the idea:
http://www.mail-archive.com/python-list at python.org/msg07034.html

(the 'new' compiler in that message is the AST compiler adopted in
Python 2.5. The 'nonlocal' keyword now gives us more options in
deciding how to handle assignment statements)

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia


From pyideas at rebertia.com  Sat Jun 26 05:26:58 2010
From: pyideas at rebertia.com (Chris Rebert)
Date: Fri, 25 Jun 2010 20:26:58 -0700
Subject: [Python-ideas] explicitation lines in python ?
In-Reply-To: <4C24FEAF.4030304@gmail.com>
References: <4C24FEAF.4030304@gmail.com>
Message-ID: <AANLkTikV18QVu8bKJC2XeVtYNJ02MJF0cMGibLIg1XmX@mail.gmail.com>

On Fri, Jun 25, 2010 at 12:08 PM, Daniel DELAY <danieldelay at gmail.com> wrote:
> If we could "explicitate"

That's not a word.

> a too complex expression in an indented next line,
> I would use this feature very often :
>
> htmltable = ''.join( '<tr>{}</tr>'.format(htmlline) for line in table) :
>   # main line
>     htmlline : ''.join( '<td>{}</td>'.format(cell) for cell in line)     #
> explicitation(s) line(s)

Again, not a word, and not a great name for this either, IMO.

> in details :
>
> List comprehension "<expression> for x in mylist" often greatly improves
> readability of python programs, when <expression> is not too complex.
> When <expression> is too complex (ex: nested lists), this becomes
> unreadable, so we have to find another solution :
> a) defining a function expression(x), or an iterator function, which will
> only be used once in the code
> b) or dropping this beautiful syntax to replace it with the very basic list
> construction :
> newlist = []
> for x in myiterable:
>     newlist.append(<expression>)
>
> I often choose b), but I dislike both solutions :
> - in solution a) the function def can be far from the list comprehension; in
> fact the instructions to build the new list are split across two different
> places in the code.

What do you mean, you can put them right next to each other, and even
better, give the expression a meaningful name:

def line2html(line):
    return ''.join( '<td>{}</td>'.format(cell) for cell in line)
htmltable = ''.join( '<tr>{}</tr>'.format(line2html(line)) for line in table)
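For concreteness, a runnable version of that rewrite; the two-row `table` here is an assumed sample, purely for illustration:

```python
# Assumed sample data for illustration.
table = [["a", "b"], ["c", "d"]]

def line2html(line):
    # The would-be "htmlline" binding, expressed as a named helper.
    return ''.join('<td>{}</td>'.format(cell) for cell in line)

htmltable = ''.join('<tr>{}</tr>'.format(line2html(line)) for line in table)
print(htmltable)  # <tr><td>a</td><td>b</td></tr><tr><td>c</td><td>d</td></tr>
```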

> - solution b) seems a bit better to me,

I'm gonna disagree with you there, but it is a somewhat subjective
stylistic issue.

> but the fact we build a new list
> from myiterable is not visible in a glance, unlike list comprehensions.
>
> Renouncing to list comprehension occurs rather often when I write python
> code
>
> I think we could greatly improve readability if we could keep list
> comprehension anywhere in all cases, but when necessary explicit a too
> complex expression in an indented line :
>
> htmltable = ''.join( '<tr>{}</tr>'.format(htmlline) for line in table) :
>   # main line
>     htmlline : ''.join( '<td>{}</td>'.format(cell) for cell in line)       #
> explicitation(s) line(s)
>
> In the case the main line is the header of a "classical" indented block
> (starting with "for", "if", "with"...) , this idented block would simply
> follow the explicitation(s) line(s).
> The explicitations lines can be surely identified as the lines that begin
> with "identifier :"  (when we are not in an unclosed dict)
>
> with open('data.txt') as f :
>     if line in enumerate(mylist) :  # main line
>         mylist : f.read().strip().lower()    # explicitation(s) line(s)
>         print line    # "classical" indented block
>
> Another possible use of "explicitations lines" is a coding style which would
> start by "the whole picture" first, and complete with details after, which
> is the usual way we mentally solve problems.

In other words, "where" clauses, à la Haskell (see
http://www.haskell.org/tutorial/patterns.html section 4.5); just tweak
the syntax from

expr_involving_bar :
    bar : expr

to

expr_involving_bar where:
    bar = expr

which avoids overloading colons further (i.e. constipation ;-P) and
the equals sign makes more sense anyway.

Having used Haskell a little bit, I can say "where" clauses can indeed
make some code easier to read. However, adding them to an imperative
language like Python is more problematic, since order of evaluation
matters and it complicates the flow of control by causing it to go
backward in "where" clauses.

Basically, I don't see the problem with solution "(a)"; your general
idea isn't without merit though.

Cheers,
Chris
--
http://blog.rebertia.com


From pyideas at rebertia.com  Sat Jun 26 05:55:54 2010
From: pyideas at rebertia.com (Chris Rebert)
Date: Fri, 25 Jun 2010 20:55:54 -0700
Subject: [Python-ideas] explicitation lines in python ?
In-Reply-To: <87pqzemrdw.fsf@uwakimon.sk.tsukuba.ac.jp>
References: <4C24FEAF.4030304@gmail.com>
	<87pqzemrdw.fsf@uwakimon.sk.tsukuba.ac.jp>
Message-ID: <AANLkTik00ygRrEGuvXnICieI8PLj_4eAfzCbJrW9I0wQ@mail.gmail.com>

On Fri, Jun 25, 2010 at 7:58 PM, Stephen J. Turnbull <stephen at xemacs.org> wrote:
> Daniel DELAY writes:
>
>  > (Sorry if this has already been discussed earlier on this list, I have
>  > not read all the archives)
>
> I think if you search for "first-class blocks" and "lambdas", or
> similar, you'll find related discussion (although not exactly the same
> thing).  It also looks very similar to the Haskell "where", maybe
> searching for "Haskell where" would bring it up.
>
>  > Renouncing to list comprehension occurs rather often when I write python
>  > code
>  >
>  > I think we could greatly improve readability if we could keep list
>  > comprehension anywhere in all cases, but when necessary explicit a too
>  > complex expression in an indented line :
>  >
>  > htmltable = ''.join( '<tr>{}</tr>'.format(htmlline) for line in table):
>  >     htmlline : ''.join( '<td>{}</td>'.format(cell) for cell in line)
>
> (Edited for readability; it was munged by your mail client. ;-)
>
> I'm not sure I like this better than the alternative of rewriting the
> outer loops explicitly.  But if you're going to add syntax, I think
> the more verbose
>
>     htmltable = ''.join('<tr>{}</tr>'.format(htmlline) for line in table) \
>         with htmlline = ''.join('<td>{}</td>'.format(cell) for cell in line)
>
> looks better.  Note that the "with" clause becomes an optional part of
> an assignment statement rather than a suite controlled by the
> assignment, and the indentation is decorative rather than syntactic.
> I considered "as" instead of "=" in the with clause, but preferred the
> "=" because that allows nested "with" in a natural way.  (Maybe, I
> haven't thought carefully about that at all.)  Obviously "with" was
> chosen because it's already a keyword.
>
> I suspect this has been shot down before, though.

Prior thread:
[Python-ideas] Where-statement (Proposal for function expressions)
http://mail.python.org/pipermail/python-ideas/2009-July/005114.html

There certainly was criticism:
http://mail.python.org/pipermail/python-ideas/2009-July/005213.html

However, the BDFL seemed receptive:
http://mail.python.org/pipermail/python-ideas/2009-July/005299.html

Cheers,
Chris
--
http://blog.rebertia.com


From ghazel at gmail.com  Sat Jun 26 07:10:24 2010
From: ghazel at gmail.com (ghazel at gmail.com)
Date: Fri, 25 Jun 2010 22:10:24 -0700
Subject: [Python-ideas] feature to make traceback objects usable without
	references to frame locals and globals
In-Reply-To: <AANLkTinPcwW63hlJpOemfBiRbsFPDse-Dyzd71NCy6mr@mail.gmail.com>
References: <AANLkTimIdJDCuOw9n_zu3C7NpwL2rrZr7za16ZsVrVJh@mail.gmail.com> 
	<AANLkTinPcwW63hlJpOemfBiRbsFPDse-Dyzd71NCy6mr@mail.gmail.com>
Message-ID: <AANLkTimNrSwqTmkpSU9vpUTYG7SvLPM8OEeJqdmKbu7L@mail.gmail.com>

Well, I discovered this property of traceback objects when a
real-world server of mine began eating all the memory on the server.
To me, this is the most convincing reason to address the issue.

I'm not sure what sort of profiling you're looking for, but I have
since then produced a contrived example which demonstrates a serious
memory consumption difference with a very short traceback object
lifetime: http://codepad.org/F23cwezb

If you run the test with "s.e = sys.exc_info()" commented out, the
observed memory footprint of the process quickly approaches and sits
at 5,677,056 bytes. Totally reasonable.

If you uncomment that line, the memory footprint climbs to 283,316,224
bytes quite rapidly. That's a two order of magnitude difference!

If you uncomment the "gc.collect()" line, the process still hits
148,910,080 bytes.


-Greg


On Fri, Jun 25, 2010 at 7:58 PM, Guido van Rossum <guido at python.org> wrote:
> Do you have profiling data to support your claim?
>
> On Fri, Jun 25, 2010 at 7:48 PM,  <ghazel at gmail.com> wrote:
>> Hi,
>>
>> I'm interested in a feature which allows users to discard the locals
>> and globals references from frames held by a traceback object.
>>
>> Currently, traceback objects are used when capturing and re-raising
>> exceptions. However, they hold a reference to all frames, which hold a
>> reference to their locals and globals. These are not needed by the
>> default traceback output, and can cause serious memory bloat if a
>> reference to a traceback object is kept for any significant length of
>> time, and there are even big red warnings in the Python docs about
>> using them in one frame. (
>> http://docs.python.org/release/3.1/library/sys.html#sys.exc_info ).
>>
>> Example usage would be something like:
>>
>> import sys
>> try:
>>     1/0
>> except:
>>     t, v, tb = sys.exc_info()
>>     tb.clean()
>> # ... much later ...
>> raise t, v, tb
>>
>>
>> Which would be basically a function to do this:
>>
>> import sys
>> try:
>>     1/0
>> except:
>>     t, v, tb = sys.exc_info()
>>     c = tb
>>     while c:
>>         c.tb_frame.f_locals = None
>>         c.tb_frame.f_globals = None
>>         c = c.tb_next
>> # ... much later ...
>> raise t, v, tb
>>
>>
>> Twisted has done a very similar thing with their
>> twisted.python.failure.Failure object, which stringifies the traceback
>> data and discards the reference to the Python traceback entirely (
>> http://twistedmatrix.com/trac/browser/tags/releases/twisted-10.0.0/twisted/python/failure.py#L437
>> ) - they also replicate a lot of traceback printing functions to make
>> use of this stringified data.
>>
>> It's worth noting that cgitb and other applications make use of locals
>> and globals in their traceback output. However, I believe the vast
>> majority of traceback usage does not make use of these references, and
>> a significant penalty is paid as a result.
>>
>> Is there any interest in such a feature?
>>
>>
>> -Greg
>> _______________________________________________
>> Python-ideas mailing list
>> Python-ideas at python.org
>> http://mail.python.org/mailman/listinfo/python-ideas
>>
>
>
>
> --
> --Guido van Rossum (python.org/~guido)
>
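A self-contained sketch of the proposed cleanup in modern Python 3, using `traceback.clear_frames()` from the later standard library (added in CPython 3.4; it walks `tb_next` and clears each finished frame's locals, much like the loop sketched above, while the traceback itself stays usable for formatting). The `payload` local is an assumption, a stand-in for a large object:

```python
import sys
import traceback

def fail():
    payload = "x" * 1000  # stand-in for a large local we don't want kept alive
    1 / 0

try:
    fail()
except ZeroDivisionError:
    tb = sys.exc_info()[2]

# Drop the locals held by finished frames, but keep the traceback usable.
traceback.clear_frames(tb)

# The default formatting still works after clearing, since it only needs
# code-object data (file name, line number, function name):
formatted = traceback.format_tb(tb)
```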


From debatem1 at gmail.com  Sat Jun 26 08:48:30 2010
From: debatem1 at gmail.com (geremy condra)
Date: Sat, 26 Jun 2010 02:48:30 -0400
Subject: [Python-ideas] explicitation lines in python ?
In-Reply-To: <AANLkTik00ygRrEGuvXnICieI8PLj_4eAfzCbJrW9I0wQ@mail.gmail.com>
References: <4C24FEAF.4030304@gmail.com>
	<87pqzemrdw.fsf@uwakimon.sk.tsukuba.ac.jp>
	<AANLkTik00ygRrEGuvXnICieI8PLj_4eAfzCbJrW9I0wQ@mail.gmail.com>
Message-ID: <AANLkTileVAv96jp6BaJu52eZ_-iLxk3jS8Hn0QHvMh20@mail.gmail.com>

On Fri, Jun 25, 2010 at 11:55 PM, Chris Rebert <pyideas at rebertia.com> wrote:
> On Fri, Jun 25, 2010 at 7:58 PM, Stephen J. Turnbull <stephen at xemacs.org> wrote:
>> Daniel DELAY writes:
>>
>>  > (Sorry if this has already been discussed earlier on this list, I have
>>  > not read all the archives)
>>
>> I think if you search for "first-class blocks" and "lambdas", or
>> similar, you'll find related discussion (although not exactly the same
>> thing).  It also looks very similar to the Haskell "where", maybe
>> searching for "Haskell where" would bring it up.
>>
>>  > Renouncing to list comprehension occurs rather often when I write python
>>  > code
>>  >
>>  > I think we could greatly improve readability if we could keep list
>>  > comprehension anywhere in all cases, but when necessary explicit a too
>>  > complex expression in an indented line :
>>  >
>>  > htmltable = ''.join( '<tr>{}</tr>'.format(htmlline) for line in table):
>>  >     htmlline : ''.join( '<td>{}</td>'.format(cell) for cell in line)
>>
>> (Edited for readability; it was munged by your mail client. ;-)
>>
>> I'm not sure I like this better than the alternative of rewriting the
>> outer loops explicitly.  But if you're going to add syntax, I think
>> the more verbose
>>
>>     htmltable = ''.join('<tr>{}</tr>'.format(htmlline) for line in table) \
>>         with htmlline = ''.join('<td>{}</td>'.format(cell) for cell in line)
>>
>> looks better.  Note that the "with" clause becomes an optional part of
>> an assignment statement rather than a suite controlled by the
>> assignment, and the indentation is decorative rather than syntactic.
>> I considered "as" instead of "=" in the with clause, but preferred the
>> "=" because that allows nested "with" in a natural way.  (Maybe, I
>> haven't thought carefully about that at all.)  Obviously "with" was
>> chosen because it's already a keyword.
>>
>> I suspect this has been shot down before, though.
>
> Prior thread:
> [Python-ideas] Where-statement (Proposal for function expressions)
> http://mail.python.org/pipermail/python-ideas/2009-July/005114.html

I was all set to dislike this syntax, but after reading over it a bit I
find myself liking it a lot. Was the code for this (or similar) ever
written, or was it just proposed?

Geremy Condra


From bruce at leapyear.org  Sat Jun 26 09:43:19 2010
From: bruce at leapyear.org (Bruce Leban)
Date: Sat, 26 Jun 2010 00:43:19 -0700
Subject: [Python-ideas] explicitation lines in python ?
In-Reply-To: <AANLkTileVAv96jp6BaJu52eZ_-iLxk3jS8Hn0QHvMh20@mail.gmail.com>
References: <4C24FEAF.4030304@gmail.com>
	<87pqzemrdw.fsf@uwakimon.sk.tsukuba.ac.jp> 
	<AANLkTik00ygRrEGuvXnICieI8PLj_4eAfzCbJrW9I0wQ@mail.gmail.com> 
	<AANLkTileVAv96jp6BaJu52eZ_-iLxk3jS8Hn0QHvMh20@mail.gmail.com>
Message-ID: <AANLkTilzjvFllHyZwByjL_1h1k5RjqHOpr2StISKAkq5@mail.gmail.com>

I really dislike the idea that when I read an expression I'd have to scan to
the end of the statement to figure out if it's a forward or backward
reference. Is this

    def foo(a, b):
      return x * y:
        x : a + b
        y : a - b

really significantly better than:

    def foo(a, b):
      x = lambda: a + b
      y = lambda: a - b
      return x() * y()

Note that when I see the () there's an explicit marker that x and y are not
simple variables so personally I wouldn't want to "save" those few
characters. So really what you're doing is allowing me to put these in a
different order and saving 7 characters. But I can reorder them easily
enough if I want to:

    def foo(a, b):
      result = lambda: x() * y()
      x = lambda: a + b
      y = lambda: a - b
      return result()
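A quick sanity check that the reordered lambda version behaves the same; the sample arguments are assumptions for illustration:

```python
def foo(a, b):
    # The forward reference is fine here: x and y are looked up when
    # result() is finally called, not when the lambda is defined.
    result = lambda: x() * y()
    x = lambda: a + b
    y = lambda: a - b
    return result()

print(foo(3, 2))  # (3 + 2) * (3 - 2) = 5
```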

--- Bruce
http://www.vroospeak.com
http://jarlsberg.appspot.com



On Fri, Jun 25, 2010 at 11:48 PM, geremy condra <debatem1 at gmail.com> wrote:

> On Fri, Jun 25, 2010 at 11:55 PM, Chris Rebert <pyideas at rebertia.com> wrote:
> > On Fri, Jun 25, 2010 at 7:58 PM, Stephen J. Turnbull <stephen at xemacs.org> wrote:
> >> Daniel DELAY writes:
> >>
> >>  > (Sorry if this has already been discussed earlier on this list, I have
> >>  > not read all the archives)
> >>
> >> I think if you search for "first-class blocks" and "lambdas", or
> >> similar, you'll find related discussion (although not exactly the same
> >> thing).  It also looks very similar to the Haskell "where", maybe
> >> searching for "Haskell where" would bring it up.
> >>
> >>  > Renouncing to list comprehension occurs rather often when I write python
> >>  > code
> >>  >
> >>  > I think we could greatly improve readability if we could keep list
> >>  > comprehension anywhere in all cases, but when necessary explicit a too
> >>  > complex expression in an indented line :
> >>  >
> >>  > htmltable = ''.join( '<tr>{}</tr>'.format(htmlline) for line in table):
> >>  >     htmlline : ''.join( '<td>{}</td>'.format(cell) for cell in line)
> >>
> >> (Edited for readability; it was munged by your mail client. ;-)
> >>
> >> I'm not sure I like this better than the alternative of rewriting the
> >> outer loops explicitly.  But if you're going to add syntax, I think
> >> the more verbose
> >>
> >>    htmltable = ''.join('<tr>{}</tr>'.format(htmlline) for line in table) \
> >>        with htmlline = ''.join('<td>{}</td>'.format(cell) for cell in line)
> >>
> >> looks better.  Note that the "with" clause becomes an optional part of
> >> an assignment statement rather than a suite controlled by the
> >> assignment, and the indentation is decorative rather than syntactic.
> >> I considered "as" instead of "=" in the with clause, but preferred the
> >> "=" because that allows nested "with" in a natural way.  (Maybe, I
> >> haven't thought carefully about that at all.)  Obviously "with" was
> >> chosen because it's already a keyword.
> >>
> >> I suspect this has been shot down before, though.
> >
> > Prior thread:
> > [Python-ideas] Where-statement (Proposal for function expressions)
> > http://mail.python.org/pipermail/python-ideas/2009-July/005114.html
>
> I was all set to dislike this syntax, but after reading over it a bit I
> find myself liking it a lot. Was the code for this (or similar) ever
> written, or was it just proposed?
>
> Geremy Condra
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> http://mail.python.org/mailman/listinfo/python-ideas
>

From greg.ewing at canterbury.ac.nz  Sat Jun 26 10:23:52 2010
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Sat, 26 Jun 2010 20:23:52 +1200
Subject: [Python-ideas] feature to make traceback objects usable without
 references to frame locals and globals
In-Reply-To: <AANLkTimIdJDCuOw9n_zu3C7NpwL2rrZr7za16ZsVrVJh@mail.gmail.com>
References: <AANLkTimIdJDCuOw9n_zu3C7NpwL2rrZr7za16ZsVrVJh@mail.gmail.com>
Message-ID: <4C25B918.8010307@canterbury.ac.nz>

ghazel at gmail.com wrote:

> I'm interested in a feature which allows users to discard the locals
> and globals references from frames held by a traceback object.

I'd like to take this further and remove the need for
traceback objects to refer to a frame object at all.
The standard traceback printout only needs two pieces of
information from the traceback, the file name and line
number.

The line number is already present in the traceback
object. All it would take is the addition of a file name
attribute to the traceback object, and the frame reference
could be made optional.

This would be a big help for Pyrex and Cython, which
currently have to create entire dummy frame objects in order
to add entries to the traceback. Not only is this tedious
and inefficient, it ties them to internal details of the
frame object that are vulnerable to change. It would be
much nicer to have a simple API function such as

   PyTraceback_AddEntry(filename, lineno)

to add a frameless traceback object.
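For reference, a small sketch of what the standard printout actually consumes: the line number comes from the traceback entry itself, while the file name (and function name) must today be fetched through `tb_frame.f_code`, which is exactly the frame dependency being discussed. Modern Python 3 syntax; the `boom` helper is an assumption for illustration:

```python
import sys

def boom():
    raise ValueError("example")

try:
    boom()
except ValueError:
    tb = sys.exc_info()[2]

# Collect (filename, lineno, function name) per traceback entry -- all that
# the default printout needs, without ever touching f_locals or f_globals.
entries = []
while tb is not None:
    code = tb.tb_frame.f_code
    entries.append((code.co_filename, tb.tb_lineno, code.co_name))
    tb = tb.tb_next
```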

-- 
Greg


From solipsis at pitrou.net  Sat Jun 26 12:04:35 2010
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 26 Jun 2010 12:04:35 +0200
Subject: [Python-ideas] feature to make traceback objects usable without
 references to frame locals and globals
References: <AANLkTimIdJDCuOw9n_zu3C7NpwL2rrZr7za16ZsVrVJh@mail.gmail.com>
	<4C25B918.8010307@canterbury.ac.nz>
Message-ID: <20100626120435.7012847e@pitrou.net>

On Sat, 26 Jun 2010 20:23:52 +1200
Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
> ghazel at gmail.com wrote:
> 
> > I'm interested in a feature which allows users to discard the locals
> > and globals references from frames held by a traceback object.
> 
> I'd like to take this further and remove the need for
> traceback objects to refer to a frame object at all.
> The standard traceback printout only needs two pieces of
> information from the traceback, the file name and line
> number.

Both ideas seem reasonable, but they need a concrete proposal
and/or a patch.

Regards

Antoine.




From mal at egenix.com  Sat Jun 26 13:03:38 2010
From: mal at egenix.com (M.-A. Lemburg)
Date: Sat, 26 Jun 2010 13:03:38 +0200
Subject: [Python-ideas] feature to make traceback objects usable without
 references to frame locals and globals
In-Reply-To: <4C25B918.8010307@canterbury.ac.nz>
References: <AANLkTimIdJDCuOw9n_zu3C7NpwL2rrZr7za16ZsVrVJh@mail.gmail.com>
	<4C25B918.8010307@canterbury.ac.nz>
Message-ID: <4C25DE8A.1030209@egenix.com>

Greg Ewing wrote:
> ghazel at gmail.com wrote:
> 
>> I'm interested in a feature which allows users to discard the locals
>> and globals references from frames held by a traceback object.

Wouldn't it be better to write safer code and not store
a reference to the traceback object in the first place ?

Working with traceback objects can easily introduce hidden
circular references, so it is usually better not to access them
at all, if you don't have a need for them:

Either like this:

try:
    raise Exception
except Exception, reason:
    pass

or by using slicing:

try:
    raise Exception
except Exception, reason:
    errorclass, errorobject = sys.exc_info()[:2]
    pass

If you do need to access them, make sure you clean up
the reference as soon as you can:

try:
    raise Exception
except Exception, reason:
    errorclass, errorobject, tb = sys.exc_info()
    ...
    tb = None

> I'd like to take this further and remove the need for
> traceback objects to refer to a frame object at all.
> The standard traceback printout only needs two pieces of
> information from the traceback, the file name and line
> number.
> 
> The line number is already present in the traceback
> object. All it would take is the addition of a file name
> attribute to the traceback object, and the frame reference
> could be made optional.

How would you make that reference optional ?

The frames are needed to inspect the locals and globals
of the call stack and debugging code relies on them being
available.

Also: What's the use case for creating traceback objects
outside the Python interpreter core ?

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 26 2010)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2010-07-19: EuroPython 2010, Birmingham, UK                22 days to go

::: Try our new mxODBC.Connect Python Database Interface for free ! ::::


   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
    D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
           Registered at Amtsgericht Duesseldorf: HRB 46611
               http://www.egenix.com/company/contact/


From ghazel at gmail.com  Sat Jun 26 13:35:31 2010
From: ghazel at gmail.com (ghazel at gmail.com)
Date: Sat, 26 Jun 2010 04:35:31 -0700
Subject: [Python-ideas] feature to make traceback objects usable without
	references to frame locals and globals
In-Reply-To: <4C25DE8A.1030209@egenix.com>
References: <AANLkTimIdJDCuOw9n_zu3C7NpwL2rrZr7za16ZsVrVJh@mail.gmail.com> 
	<4C25B918.8010307@canterbury.ac.nz> <4C25DE8A.1030209@egenix.com>
Message-ID: <AANLkTimZ6T0alJvKFz8HI0l3FmBEi1dErvCH8hrXIvvG@mail.gmail.com>

On Sat, Jun 26, 2010 at 4:03 AM, M.-A. Lemburg <mal at egenix.com> wrote:
> Greg Ewing wrote:
>> I'd like to take this further and remove the need for
>> traceback objects to refer to a frame object at all.
>> The standard traceback printout only needs two pieces of
>> information from the traceback, the file name and line
>> number.

First off, Greg Ewing's idea fully covers my use case and may even
simplify implementation, so I'm in favor of it. I have never used (and
very much question the use of) references to locals and globals.
Having some backwards-compatible way to avoid ever having to deal with
them would be preferable.

On Sat, Jun 26, 2010 at 4:03 AM, M.-A. Lemburg <mal at egenix.com> wrote:
> Greg Ewing wrote:
>> ghazel at gmail.com wrote:
>>
>>> I'm interested in a feature which allows users to discard the locals
>>> and globals references from frames held by a traceback object.
>
> Wouldn't it be better to write safer code and not store
> a reference to the traceback object in the first place ?
>
> Working with traceback objects can easily introduce hidden
> circular references, so it usually better not access them
> at all, if you don't have a need for them:

Those are strong words against using traceback objects. This feature
idea is about creating a way to make traceback objects usable without
the gotcha you're referencing.


-Greg


From solipsis at pitrou.net  Sat Jun 26 14:00:31 2010
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 26 Jun 2010 14:00:31 +0200
Subject: [Python-ideas] feature to make traceback objects usable without
 references to frame locals and globals
References: <AANLkTimIdJDCuOw9n_zu3C7NpwL2rrZr7za16ZsVrVJh@mail.gmail.com>
	<4C25B918.8010307@canterbury.ac.nz> <4C25DE8A.1030209@egenix.com>
Message-ID: <20100626140031.6adff16e@pitrou.net>

On Sat, 26 Jun 2010 13:03:38 +0200
"M.-A. Lemburg" <mal at egenix.com> wrote:
> Greg Ewing wrote:
> > ghazel at gmail.com wrote:
> > 
> >> I'm interested in a feature which allows users to discard the locals
> >> and globals references from frames held by a traceback object.
> 
> Wouldn't it be better to write safer code and not store
> a reference to the traceback object in the first place ?

In Python 3, tracebacks are stored as an attribute of the corresponding
exception:

>>> try: 1/0
... except Exception as _: e = _
... 
>>> e.__traceback__
<traceback object at 0x7ff69fdbf908>

> Also: What's the use case for creating traceback objects
> outside the Python interpreter core ?

He's not talking about creating traceback objects outside the core, but about
being able to reuse tracebacks created by the core without keeping alive
a whole chain of objects.

It's a real need when you want to do careful error handling/reporting
without wasting too many resources. As already mentioned, Twisted has a
bunch of code to work around that problem, since errors can be quite
long-lived in a pipelined asynchronous execution model.


Antoine.




From alexander.belopolsky at gmail.com  Sat Jun 26 17:34:43 2010
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Sat, 26 Jun 2010 11:34:43 -0400
Subject: [Python-ideas] feature to make traceback objects usable without
	references to frame locals and globals
In-Reply-To: <4C25B918.8010307@canterbury.ac.nz>
References: <AANLkTimIdJDCuOw9n_zu3C7NpwL2rrZr7za16ZsVrVJh@mail.gmail.com>
	<4C25B918.8010307@canterbury.ac.nz>
Message-ID: <AANLkTikgNqfvtctmGCL_0O0ML0HJZZ0lzNvMJaPse95Y@mail.gmail.com>

On Sat, Jun 26, 2010 at 4:23 AM, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
> ghazel at gmail.com wrote:
..
> I'd like to take this further and remove the need for
> traceback objects to refer to a frame object at all.
> The standard traceback printout only needs two pieces of
> information from the traceback, the file name and line
> number.

Wouldn't that make it impossible to do postmortem analysis in pdb?


From mal at egenix.com  Sat Jun 26 23:53:04 2010
From: mal at egenix.com (M.-A. Lemburg)
Date: Sat, 26 Jun 2010 23:53:04 +0200
Subject: [Python-ideas] feature to make traceback objects usable without
 references to frame locals and globals
In-Reply-To: <20100626140031.6adff16e@pitrou.net>
References: <AANLkTimIdJDCuOw9n_zu3C7NpwL2rrZr7za16ZsVrVJh@mail.gmail.com>	<4C25B918.8010307@canterbury.ac.nz>
	<4C25DE8A.1030209@egenix.com> <20100626140031.6adff16e@pitrou.net>
Message-ID: <4C2676C0.3000407@egenix.com>

Antoine Pitrou wrote:
> On Sat, 26 Jun 2010 13:03:38 +0200
> "M.-A. Lemburg" <mal at egenix.com> wrote:
>> Greg Ewing wrote:
>>> ghazel at gmail.com wrote:
>>>
>>>> I'm interested in a feature which allows users to discard the locals
>>>> and globals references from frames held by a traceback object.
>>
>> Wouldn't it be better to write safer code and not store
>> a reference to the traceback object in the first place ?
> 
> In Python 3, tracebacks are stored as an attribute of the corresponding
> exception:
> 
>>>> try: 1/0
> ... except Exception as _: e = _
> ... 
>>>> e.__traceback__
> <traceback object at 0x7ff69fdbf908>

Ouch.

So you explicitly need to get rid of the traceback in Python 3 if
you want to avoid keeping the associated objects alive during
exception processing ?

I think that design decision needs to be revisited. Tracebacks
are needed for error reporting, but (normally) not for
managing error handling or recovery.

E.g. it is not uncommon to store exception objects in a list for
later batched error reporting. With the traceback being referenced
on those objects and the traceback chain keeping references to all
frames alive, this kind of processing won't be feasible
anymore.
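A sketch of that batched-reporting pattern with the traceback rendered to text and explicitly dropped before storing, so the frame chain is not kept alive (Python 3; the `record` helper and sample loop are assumptions for illustration):

```python
import traceback

errors = []

def record(exc):
    # Render the traceback to text now, then release the frame chain so
    # storing the exception doesn't pin every frame's locals and globals.
    text = ''.join(
        traceback.format_exception(type(exc), exc, exc.__traceback__))
    exc.__traceback__ = None
    errors.append((exc, text))

for denom in (1, 0, 2):
    try:
        1 / denom
    except ZeroDivisionError as e:
        record(e)
```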

What's even more important is that programmers are unlikely
going to be aware of this detail and its implications.

>> Also: What's the use case for creating traceback objects
>> outside the Python interpreter core ?
> 
> He's not talking about creating traceback objects outside the core, but
> being able to reuse tracebacks created by the core without keeping alive
> a whole chain of objects.

With the question I was referring to the suggestion by
Greg Ewing in which he seemed to imply that Pyrex and Cython
create traceback objects.

> It's a real need when you want to do careful error handling/reporting
> without wasting too many resources. As already mentioned, Twisted has a
> bunch of code to work around that problem, since errors can be quite
> long-lived in a pipelined asynchronous execution model.

With the above detail, I completely agree. In fact, more than that:
I think we should make storing the traceback in exception.__traceback__
optional and not the default, much like .__context__ and .__cause__.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 26 2010)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2010-07-19: EuroPython 2010, Birmingham, UK                22 days to go

::: Try our new mxODBC.Connect Python Database Interface for free ! ::::


   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
    D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
           Registered at Amtsgericht Duesseldorf: HRB 46611
               http://www.egenix.com/company/contact/


From guido at python.org  Sun Jun 27 01:05:17 2010
From: guido at python.org (Guido van Rossum)
Date: Sat, 26 Jun 2010 16:05:17 -0700
Subject: [Python-ideas] feature to make traceback objects usable without
	references to frame locals and globals
In-Reply-To: <4C2676C0.3000407@egenix.com>
References: <AANLkTimIdJDCuOw9n_zu3C7NpwL2rrZr7za16ZsVrVJh@mail.gmail.com> 
	<4C25B918.8010307@canterbury.ac.nz> <4C25DE8A.1030209@egenix.com> 
	<20100626140031.6adff16e@pitrou.net> <4C2676C0.3000407@egenix.com>
Message-ID: <AANLkTinjp-S_dVAK-vWt2N7lkaQ75nDTCez2F9hYJNX8@mail.gmail.com>

Please don't act so surprised. There are about 4 relevant PEPs: 344,
3109, 3110, 3134 (the last replacing 344). Also note that the
traceback is only kept alive if the exception object is explicitly
copied out of the except block that caught it -- normally the
exception object is deleted when that block is left.
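
For illustration, a minimal sketch of that lifetime rule in current
Python 3 (not from the original mails; the helper name is invented):

```python
def traceback_lifetime():
    try:
        1 / 0
    except ZeroDivisionError as exc:
        saved = exc  # explicitly copy the exception out of the except block
        assert exc.__traceback__ is not None
    # PEP 3110: 'exc' is unbound when the except block is left, but
    # 'saved' still holds the exception -- and therefore the traceback.
    try:
        exc
        still_bound = True
    except NameError:  # UnboundLocalError is a NameError subclass
        still_bound = False
    return saved, still_bound
```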

--Guido

On Sat, Jun 26, 2010 at 2:53 PM, M.-A. Lemburg <mal at egenix.com> wrote:
> Antoine Pitrou wrote:
>> On Sat, 26 Jun 2010 13:03:38 +0200
>> "M.-A. Lemburg" <mal at egenix.com> wrote:
>>> Greg Ewing wrote:
>>>> ghazel at gmail.com wrote:
>>>>
>>>>> I'm interested in a feature which allows users to discard the locals
>>>>> and globals references from frames held by a traceback object.
>>>
>>> Wouldn't it be better to write safer code and not store
>>> a reference to the traceback object in the first place ?
>>
>> In Python 3, tracebacks are stored as an attribute of the corresponding
>> exception:
>>
>>>>> try: 1/0
>> ... except Exception as _: e = _
>> ...
>>>>> e.__traceback__
>> <traceback object at 0x7ff69fdbf908>
>
> Ouch.
>
> So you explicitly need to get rid of the traceback in Python 3 if
> you want to avoid keeping the associated objects alive during
> exception processing ?
>
> I think that design decision needs to be revisited. Tracebacks
> are needed for error reporting, but (normally) not for
> managing error handling or recovery.
>
> E.g. it is not uncommon to store exception objects in a list for
> later batched error reporting. With the traceback being referenced
> on those objects and the traceback chain keeping references to all
> frames alive, this kind of processing won't be feasible
> anymore.
>
> What's even more important is that programmers are unlikely
> going to be aware of this detail and its implications.
>
>>> Also: What's the use case for creating traceback objects
>>> outside the Python interpreter core ?
>>
>> He's not talking about creating traceback objects outside the core, but
>> being able to reuse tracebacks created by the core without keeping alive
>> a whole chain of objects.
>
> With the question I was referring to the suggestion by
> Greg Ewing in which he seemed to imply that Pyrex and Cython
> create traceback objects.
>
>> It's a real need when you want to do careful error handling/reporting
>> without wasting too many resources. As already mentioned, Twisted has a
>> bunch of code to work around that problem, since errors can be quite
>> long-lived in a pipelined asynchronous execution model.
>
> With the above detail, I completely agree. In fact, more than that:
> I think we should make storing the traceback in exception.__traceback__
> optional and not the default, much like .__context__ and .__cause__.
>
> --
> Marc-Andre Lemburg
> eGenix.com
>



-- 
--Guido van Rossum (python.org/~guido)


From mal at egenix.com  Sun Jun 27 01:45:48 2010
From: mal at egenix.com (M.-A. Lemburg)
Date: Sun, 27 Jun 2010 01:45:48 +0200
Subject: [Python-ideas] feature to make traceback objects usable without
 references to frame locals and globals
In-Reply-To: <AANLkTinjp-S_dVAK-vWt2N7lkaQ75nDTCez2F9hYJNX8@mail.gmail.com>
References: <AANLkTimIdJDCuOw9n_zu3C7NpwL2rrZr7za16ZsVrVJh@mail.gmail.com>
	<4C25B918.8010307@canterbury.ac.nz> <4C25DE8A.1030209@egenix.com>
	<20100626140031.6adff16e@pitrou.net> <4C2676C0.3000407@egenix.com>
	<AANLkTinjp-S_dVAK-vWt2N7lkaQ75nDTCez2F9hYJNX8@mail.gmail.com>
Message-ID: <4C26912C.8010709@egenix.com>

Guido van Rossum wrote:
> Please don't act so surprised. There are about 4 relevant PEPs: 344,
> 3109, 3110, 3134 (the latter replacing 344).

I knew about the discussions around chained exceptions. I wasn't
aware of the idea to keep a traceback object on the exception object
itself.

PEP 3134 also mentioned the case we're currently discussing:

"""
Open Issue: Garbage Collection

    The strongest objection to this proposal has been that it creates
    cycles between exceptions and stack frames [12].  Collection of
    cyclic garbage (and therefore resource release) can be greatly
    delayed.

        >>> try:
        >>>   1/0
        >>> except Exception, err:
        >>>   pass

    will introduce a cycle from err -> traceback -> stack frame -> err,
    keeping all locals in the same scope alive until the next GC happens.

    Today, these locals would go out of scope.  There is lots of code
    which assumes that "local" resources -- particularly open files -- will
    be closed quickly.  If closure has to wait for the next GC, a program
    (which runs fine today) may run out of file handles.

    Making the __traceback__ attribute a weak reference would avoid the
    problems with cyclic garbage.  Unfortunately, it would make saving
    the Exception for later (as unittest does) more awkward, and it would
    not allow as much cleanup of the sys module.

    A possible alternate solution, suggested by Adam Olsen, would be to
    instead turn the reference from the stack frame to the 'err' variable
    into a weak reference when the variable goes out of scope [13].
"""

So obviously this case had already been discussed before.
Was a solution found and implemented that addresses the problem ?

> Also note that the
> traceback is only kept alive if the exception object is explicitly
> copied out of the except block that caught it -- normally the
> exception object is deleted when that block is left.

Right, but only if you do not use the exception object for other
purposes elsewhere.

If you do that a lot in your application, it appears that the
only way around keeping lots of traceback objects alive is
by explicitly setting .__traceback__ to None before storing
away the exception object.

Think of e.g. an application that does a long running calculation.
Such applications typically want to continue processing even
in case of errors and report all errors at the end of the
run. If a programmer is unaware of the traceback issue,
he'd likely run into a memory problem without really knowing
where to look for the cause.
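
A sketch of that scenario with the workaround applied (a hypothetical
helper, not code from the thread: the report text is rendered up front,
then the traceback is dropped before the exception is stored):

```python
import traceback

def run_batch(tasks):
    """Run every task; report all failures at the end (hypothetical helper)."""
    errors = []
    for task in tasks:
        try:
            task()
        except Exception as exc:
            # Render the report text now, then drop the traceback so the
            # stored exception no longer keeps frames (and locals) alive.
            report = "".join(traceback.format_exception(
                type(exc), exc, exc.__traceback__))
            exc.__traceback__ = None
            errors.append((exc, report))
    return errors
```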

Also note that garbage collection will not necessarily do what
the user expects: it is quite possible that large amounts of
memory will stay allocated as unused space in pymalloc.
This is not specific to the discussed case, but still a valid
user concern. Greg Hazel observed this situation in his
example.

> --Guido
> 
> On Sat, Jun 26, 2010 at 2:53 PM, M.-A. Lemburg <mal at egenix.com> wrote:
>> Antoine Pitrou wrote:
>>> On Sat, 26 Jun 2010 13:03:38 +0200
>>> "M.-A. Lemburg" <mal at egenix.com> wrote:
>>>> Greg Ewing wrote:
>>>>> ghazel at gmail.com wrote:
>>>>>
>>>>>> I'm interested in a feature which allows users to discard the locals
>>>>>> and globals references from frames held by a traceback object.
>>>>
>>>> Wouldn't it be better to write safer code and not store
>>>> a reference to the traceback object in the first place ?
>>>
>>> In Python 3, tracebacks are stored as an attribute of the corresponding
>>> exception:
>>>
>>>>>> try: 1/0
>>> ... except Exception as _: e = _
>>> ...
>>>>>> e.__traceback__
>>> <traceback object at 0x7ff69fdbf908>
>>
>> Ouch.
>>
>> So you explicitly need to get rid of the traceback in Python 3 if
>> you want to avoid keeping the associated objects alive during
>> exception processing ?
>>
>> I think that design decision needs to be revisited. Tracebacks
>> are needed for error reporting, but (normally) not for
>> managing error handling or recovery.
>>
>> E.g. it is not uncommon to store exception objects in a list for
>> later batched error reporting. With the traceback being referenced
>> on those objects and the traceback chain keeping references to all
>> frames alive, this kind of processing won't be feasible
>> anymore.
>>
>> What's even more important is that programmers are unlikely
>> going to be aware of this detail and its implications.
>>
>>>> Also: What's the use case for creating traceback objects
>>>> outside the Python interpreter core ?
>>>
>>> He's not talking about creating traceback objects outside the core, but
>>> being able to reuse tracebacks created by the core without keeping alive
>>> a whole chain of objects.
>>
>> With the question I was referring to the suggestion by
>> Greg Ewing in which he seemed to imply that Pyrex and Cython
>> create traceback objects.
>>
>>> It's a real need when you want to do careful error handling/reporting
>>> without wasting too many resources. As already mentioned, Twisted has a
>>> bunch of code to work around that problem, since errors can be quite
>>> long-lived in a pipelined asynchronous execution model.
>>
>> With the above detail, I completely agree. In fact, more than that:
>> I think we should make storing the traceback in exception.__traceback__
>> optional and not the default, much like .__context__ and .__cause__.
>>
>> --
>> Marc-Andre Lemburg
>> eGenix.com
>>
> 
> 
> 

-- 
Marc-Andre Lemburg
eGenix.com



From ncoghlan at gmail.com  Sun Jun 27 02:21:41 2010
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 27 Jun 2010 10:21:41 +1000
Subject: [Python-ideas] feature to make traceback objects usable without
	references to frame locals and globals
In-Reply-To: <4C2676C0.3000407@egenix.com>
References: <AANLkTimIdJDCuOw9n_zu3C7NpwL2rrZr7za16ZsVrVJh@mail.gmail.com>
	<4C25B918.8010307@canterbury.ac.nz> <4C25DE8A.1030209@egenix.com>
	<20100626140031.6adff16e@pitrou.net> <4C2676C0.3000407@egenix.com>
Message-ID: <AANLkTimrrQMiYmtbaSyRRuBISHIKzukyXyJ9sMxgk24j@mail.gmail.com>

On Sun, Jun 27, 2010 at 7:53 AM, M.-A. Lemburg <mal at egenix.com> wrote:
>> He's not talking about creating traceback objects outside the core, but
>> being able to reuse tracebacks created by the core without keeping alive
>> a whole chain of objects.
>
> With the question I was referring to the suggestion by
> Greg Ewing in which he seemed to imply that Pyrex and Cython
> create traceback objects.

When Python code calls into Pyrex/C code which then calls back into
Python, I understand they insert dummy frames into the tracebacks to
make the call stack more complete.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia


From danieldelay at gmail.com  Sun Jun 27 08:45:35 2010
From: danieldelay at gmail.com (Daniel DELAY)
Date: Sun, 27 Jun 2010 08:45:35 +0200
Subject: [Python-ideas] explicitation lines in python ?
In-Reply-To: <AANLkTimV6aM2KN6x3l0iee-n8xR3SOEMI6j9KFh8Tgo8@mail.gmail.com>
References: <4C24FEAF.4030304@gmail.com> <4C256505.9000703@gmail.com>
	<AANLkTimV6aM2KN6x3l0iee-n8xR3SOEMI6j9KFh8Tgo8@mail.gmail.com>
Message-ID: <4C26F38F.4080605@gmail.com>

Le 26/06/2010 04:36, Guido van Rossum a écrit :
> I don't know where you got the word "explicitation" -- I've never
> heard of it. (Maybe it's French? You sound French. :-) However, this
> feature existed in ABC under the name "refinement". See
> http://homepages.cwi.nl/~steven/abc/qr.html#Refinements
>    
You guessed well: I'm French and a bad English speaker :-( ;
"explicitation" was a mistaken translation.

From now on I'll use the term "refinement" from ABC, as this language
inspired Python.

Thanks for the link. It's a bit difficult for me to figure out precisely
how this feature works in ABC: the quick reference lacks examples to
illustrate each concept, and it seems difficult to find more documentation.

But I'm happy to see this feature has been implemented in a few languages.

Cheers

--
Daniel


From danieldelay at gmail.com  Sun Jun 27 08:52:48 2010
From: danieldelay at gmail.com (Daniel DELAY)
Date: Sun, 27 Jun 2010 08:52:48 +0200
Subject: [Python-ideas] explicitation lines in python ?
In-Reply-To: <87pqzemrdw.fsf@uwakimon.sk.tsukuba.ac.jp>
References: <4C24FEAF.4030304@gmail.com>
	<87pqzemrdw.fsf@uwakimon.sk.tsukuba.ac.jp>
Message-ID: <4C26F540.3050101@gmail.com>

Le 26/06/2010 04:58, Stephen J. Turnbull a écrit :
> the more verbose
>
>      htmltable = ''.join('<tr>{}</tr>'.format(htmlline) for line in table) \
>          with htmlline = ''.join('<td>{}</td>'.format(cell) for cell in line)
>
> looks better.  Note that the "with" clause becomes an optional part of
> an assignment statement rather than a suite controlled by the
> assignment, and the indentation is decorative rather than syntactic.
>    
This syntax on one line is interesting if we see "refinement" as a way
to make an overly long line more readable.

But I'm not sure whether this syntax is compatible with nesting
different levels of refinement in a recursive way, as I did in an example.

Using "with" as an optional part of assignment seems to me rather
restrictive, as overly complex expressions may appear anywhere an
expression is allowed, not only where expressions are assigned to a
variable with "=".

Cheers,

Daniel


From danieldelay at gmail.com  Sun Jun 27 09:00:32 2010
From: danieldelay at gmail.com (Daniel DELAY)
Date: Sun, 27 Jun 2010 09:00:32 +0200
Subject: [Python-ideas] explicitation lines in python ?
In-Reply-To: <AANLkTikV18QVu8bKJC2XeVtYNJ02MJF0cMGibLIg1XmX@mail.gmail.com>
References: <4C24FEAF.4030304@gmail.com>
	<AANLkTikV18QVu8bKJC2XeVtYNJ02MJF0cMGibLIg1XmX@mail.gmail.com>
Message-ID: <4C26F710.4030902@gmail.com>

Le 26/06/2010 05:26, Chris Rebert a écrit :
>> If we could  explicitate
>>      
> That's not a word.
>    
Sorry, that was a wrong translation of the French word "expliciter"
(http://www.wordreference.com/fren/expliciter) which means something
like to express or to describe in more detail.
>> explicitation(s) line(s)
>>      
> Again, not a word, and not a great name for this either, IMO.
>    
Yes, perhaps "refinement line" would be better, as GvR noted this
feature is named "refinement" in ABC.
> What do you mean, you can put them right next to each other, and even
> better, give the expression a meaningful name:
>
> def line2html(line):
>      return ''.join( '<td>{}</td>'.format(cell) for cell in line)
> htmltable = ''.join( '<tr>{}</tr>'.format(line2html(line)) for line in table)
>    
Yes, that's a solution if this piece of code is run only once.
But if this piece of code is in a function (or a loop), your function
line2html will be redefined for each function call (or iteration), which
is something I usually try to avoid.
> In other words, "where" clauses, ? la Haskell (see
> http://www.haskell.org/tutorial/patterns.html  section 4.5); just tweak
> the syntax from
>
> expr_involving_bar :
>      bar : expr
>
> to
>
> expr_involving_bar where:
>      bar = expr
>    
Oh yes that seems to be the same feature.
> and the equals sign makes more sense anyway.
>    
The equals sign makes sense if you see that feature as variable
assignment, but this would make it more difficult to distinguish
refinement lines from classical indented lines.

I used ":" because I see that more like something equivalent to a {key:
value} substitution which would be done before execution.
In fact there are several options for how refinement could work:

a) refinement as a code substitution:
In this option, refinement is just syntactic sugar.
The refinement name ("htmlline" in my example) is replaced with the
refinement expression it stands for before execution of the Python code.
As "htmlline" disappears from the code at execution time, it will not
populate locals().

b) refinement as variable assignment:
In this option, the values of htmlline are really stored in a local
variable "htmlline", which will remain in locals() after execution of
this line.

As "htmlline" is not intended to be used elsewhere in the code, I would
probably prefer option a), but the pure substitution option has a
disadvantage: when the refinement name "htmlline" appears twice or more
in the main line, the same expression is evaluated twice or more, which
is probably not what we want.

That's why I would in fact prefer a third option:
c) a refinement expression is only evaluated once even if the refinement
name (e.g. "htmlline") appears twice or more, but that name is not
published in locals().
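
The difference between options a) and c) can be made concrete in
today's Python with a counting function (an illustration of the
evaluation semantics, not a proposed implementation):

```python
calls = {"n": 0}

def expensive():
    calls["n"] += 1
    return 21

# Option a), pure textual substitution: the expression runs once
# per mention of the refinement name.
total_subst = expensive() + expensive()   # two evaluations

# Option c): evaluate once and reuse the value; the only difference
# from the proposal is that 'value' shows up in locals() here.
value = expensive()
total_once = value + value                # one evaluation
```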


Cheers

--
Daniel



From pyideas at rebertia.com  Sun Jun 27 09:45:47 2010
From: pyideas at rebertia.com (Chris Rebert)
Date: Sun, 27 Jun 2010 00:45:47 -0700
Subject: [Python-ideas] explicitation lines in python ?
In-Reply-To: <4C26F710.4030902@gmail.com>
References: <4C24FEAF.4030304@gmail.com>
	<AANLkTikV18QVu8bKJC2XeVtYNJ02MJF0cMGibLIg1XmX@mail.gmail.com>
	<4C26F710.4030902@gmail.com>
Message-ID: <AANLkTikKcP9FP1LPsnCXiCrzhPcraz7TxVJn8u_GjezT@mail.gmail.com>

On Sun, Jun 27, 2010 at 12:00 AM, Daniel DELAY <danieldelay at gmail.com> wrote:
> Le 26/06/2010 05:26, Chris Rebert a écrit :
<snip>
>> What do you mean, you can put them right next to each other, and even
>> better, give the expression a meaningful name:
>>
>> def line2html(line):
>>     return ''.join( '<td>{}</td>'.format(cell) for cell in line)
>> htmltable = ''.join( '<tr>{}</tr>'.format(line2html(line)) for line in
>> table)
>>
>
> Yes that's a solution if this piece of code is run only once.
> But if this piece of code is in a function (or a loop), your function
> line2html will be redefined for each function call (or iteration), which is
> something I usually try to avoid.

Yes, but Premature Optimization is The Root of All Evil, and you can
always define line2html() at the module level; this trades distance
for speed, but if you choose a good descriptive name, it should still
be plenty clear.
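
A sketch of that module-level arrangement (table2html is an invented
wrapper name; the thread only names line2html):

```python
def line2html(line):
    # Defined once at module level: no per-call redefinition, and a
    # descriptive name documents what the inner expression builds.
    return "".join("<td>{}</td>".format(cell) for cell in line)

def table2html(table):
    return "".join("<tr>{}</tr>".format(line2html(line)) for line in table)
```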

Again, "where" clauses are certainly an intriguing idea.

Cheers,
Chris
--
http://blog.rebertia.com


From stephen at xemacs.org  Sun Jun 27 15:31:34 2010
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Sun, 27 Jun 2010 22:31:34 +0900
Subject: [Python-ideas] explicitation lines in python ?
In-Reply-To: <4C26F540.3050101@gmail.com>
References: <4C24FEAF.4030304@gmail.com>
	<87pqzemrdw.fsf@uwakimon.sk.tsukuba.ac.jp>
	<4C26F540.3050101@gmail.com>
Message-ID: <878w60mwjt.fsf@uwakimon.sk.tsukuba.ac.jp>

Daniel DELAY writes:
 > Le 26/06/2010 04:58, Stephen J. Turnbull a écrit :
 > > the more verbose
 > >
 > >      htmltable = ''.join('<tr>{}</tr>'.format(htmlline) for line in table) \
 > >          with htmlline = ''.join('<td>{}</td>'.format(cell) for cell in line)
 > >
 > > looks better.  Note that the "with" clause becomes an optional part of
 > > an assignment statement rather than a suite controlled by the
 > > assignment, and the indentation is decorative rather than syntactic.
 > >    
 > This syntax on one line is interesting if we see "refinement" as a way 
 > to make more readable a too long line.
 > 
 > But I'm not sure wether this syntax is compatible with nesting different 
 > levels of refinement in a recursive way, as I did in an example.

I'm not sure of all the corner cases myself, but it seems to me that
the above example could be extended to

    htmltable = ''.join(tr.format(htmlline) for line in table) \
        with tr = '<tr>{}</tr>', \
        htmlline = ''.join(td.format(cell) for cell in line) \
            with td = '<td>{}</td>'

although it's not as prettily formatted as your examples.
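
Since the "with" clause is not valid Python, the closest spelling today
hoists the constant refinements into preceding assignments; htmlline
cannot be hoisted because it depends on the loop variable line, so it
has to stay inline (sample data invented for illustration):

```python
table = [["a", "b"], ["c", "d"]]

td = "<td>{}</td>"   # innermost refinement
tr = "<tr>{}</tr>"
htmltable = "".join(
    tr.format("".join(td.format(cell) for cell in line))
    for line in table
)
```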

 > Using "with" as an optional part of assignment seems to me rather 
 > restrictive, as too complex expressions may appear anywhere involving an 
 > expression, not only where expressions are assigned to a variable with "=".

That was deliberate.  If it's not an assignment, it's easy enough (and
preserves locality) to insert an assignment to a new variable on the
preceding line.


From guido at python.org  Sun Jun 27 17:33:23 2010
From: guido at python.org (Guido van Rossum)
Date: Sun, 27 Jun 2010 08:33:23 -0700
Subject: [Python-ideas] feature to make traceback objects usable without
	references to frame locals and globals
In-Reply-To: <4C26912C.8010709@egenix.com>
References: <AANLkTimIdJDCuOw9n_zu3C7NpwL2rrZr7za16ZsVrVJh@mail.gmail.com> 
	<4C25B918.8010307@canterbury.ac.nz> <4C25DE8A.1030209@egenix.com> 
	<20100626140031.6adff16e@pitrou.net> <4C2676C0.3000407@egenix.com> 
	<AANLkTinjp-S_dVAK-vWt2N7lkaQ75nDTCez2F9hYJNX8@mail.gmail.com> 
	<4C26912C.8010709@egenix.com>
Message-ID: <AANLkTiky-c8Om6axca1Dw_OL2PigSexYJZ2F1QhzrCcO@mail.gmail.com>

On Sat, Jun 26, 2010 at 4:45 PM, M.-A. Lemburg <mal at egenix.com> wrote:
> Also note that garbage collection will not necessarily do what
> the user expects: it is well possible that big amounts of
> memory will stay allocated as unused space in pymalloc.
> This is not specific to the discussed case, but still a valid
> user concern. Greg Hazel observed this situation in his
> example.

Aha. So whereas the process size ballooned, there is no actual memory
leak (his example threw away the exception each time through the
loop), it's just that looking at process size is a bad way to assess
memory leaks. I would like to reject this then as "that's just how
Python's memory allocation works". As you say, it's not specific to
this case; it comes up occasionally and it's just a matter of user
education.

I don't think anything should be done about __traceback__ either --
frameworks that have this problem can work around it in various ways.
Or, at least I don't see a reason to panic and roll back the feature.
Maybe eventually it can be improved by adding some kind of
functionality to control some details of the behavior.

-- 
--Guido van Rossum (python.org/~guido)


From benjamin at python.org  Sun Jun 27 18:48:37 2010
From: benjamin at python.org (Benjamin Peterson)
Date: Sun, 27 Jun 2010 16:48:37 +0000 (UTC)
Subject: [Python-ideas] feature to make traceback objects usable without
	references to frame locals and globals
References: <AANLkTimIdJDCuOw9n_zu3C7NpwL2rrZr7za16ZsVrVJh@mail.gmail.com>	<4C25B918.8010307@canterbury.ac.nz>
	<4C25DE8A.1030209@egenix.com> <20100626140031.6adff16e@pitrou.net>
	<4C2676C0.3000407@egenix.com>
Message-ID: <loom.20100627T184756-354@post.gmane.org>

M.-A. Lemburg <mal at ...> writes:
> With the above detail, I completely agree. In fact, more than that:
> I think we should make storing the traceback in exception.__traceback__
> optional and not the default, much like .__context__ and .__cause__.

I'm not sure why you consider __context__ non-default, since it is always
automatically set when it applies.






From ghazel at gmail.com  Sun Jun 27 19:04:54 2010
From: ghazel at gmail.com (ghazel at gmail.com)
Date: Sun, 27 Jun 2010 10:04:54 -0700
Subject: [Python-ideas] feature to make traceback objects usable without
	references to frame locals and globals
In-Reply-To: <AANLkTiky-c8Om6axca1Dw_OL2PigSexYJZ2F1QhzrCcO@mail.gmail.com>
References: <AANLkTimIdJDCuOw9n_zu3C7NpwL2rrZr7za16ZsVrVJh@mail.gmail.com> 
	<4C25B918.8010307@canterbury.ac.nz> <4C25DE8A.1030209@egenix.com> 
	<20100626140031.6adff16e@pitrou.net> <4C2676C0.3000407@egenix.com> 
	<AANLkTinjp-S_dVAK-vWt2N7lkaQ75nDTCez2F9hYJNX8@mail.gmail.com> 
	<4C26912C.8010709@egenix.com>
	<AANLkTiky-c8Om6axca1Dw_OL2PigSexYJZ2F1QhzrCcO@mail.gmail.com>
Message-ID: <AANLkTimdmswsI-ALmEuXOlb3FpXKOzT5_gyWl77VZUbL@mail.gmail.com>

On Sun, Jun 27, 2010 at 8:33 AM, Guido van Rossum <guido at python.org> wrote:
> On Sat, Jun 26, 2010 at 4:45 PM, M.-A. Lemburg <mal at egenix.com> wrote:
>> Also note that garbage collection will not necessarily do what
>> the user expects: it is well possible that big amounts of
>> memory will stay allocated as unused space in pymalloc.
>> This is not specific to the discussed case, but still a valid
>> user concern. Greg Hazel observed this situation in his
>> example.
>
> Aha. So whereas the process size ballooned, there is no actual memory
> leak (his example threw away the exception each time through the
> loop), it's just that looking at process size is a bad way to assess
> memory leaks. I would like to reject this then as "that's just how
> Python's memory allocation works". As you say, it's not specific to
> this case; it comes up occasionally and it's just a matter of user
> education.

Leak? My example does not try to demonstrate a leak. It demonstrates
excessive allocation. If you collect a few times after the test the
memory usage of the process does drop to a reasonable level again. In
a real-world application with long-lived traceback objects and more
state, this excessive allocation becomes crippling. Go ahead, add a
zero to the size of that list being created in the example. Without
the traceback reference the process stays stable at 17MB, with the
reference it balloons to consume all of the 2GB of RAM in my laptop,
causing swapping. This is similar to the observed behavior of a real
application, which is completely stable and requires relatively little
memory when not using traceback objects, but quickly grows to an
unmanageable size with traceback objects.
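
Greg's script is not reproduced in this thread; a reduced
reconstruction of the effect he describes (all names invented) might
look like this:

```python
def grab_exception(keep_traceback):
    payload = list(range(100_000))  # large local pinned by the frame
    try:
        raise RuntimeError("boom")
    except RuntimeError as exc:
        if not keep_traceback:
            exc.__traceback__ = None
        return exc

kept = grab_exception(keep_traceback=True)
# The traceback chain reaches the frame, and through it the large local.
frame = kept.__traceback__.tb_frame
assert "payload" in frame.f_locals

dropped = grab_exception(keep_traceback=False)
assert dropped.__traceback__ is None  # nothing pins the frame or its locals
```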

> I don't think anything should be done about __traceback__ either --
> frameworks that have this problem can work around it in various ways.
> Or, at least I don't see a reason to panic and roll back the feature.
> Maybe eventually it can be improved by adding some kind of
> functionality to control some details of the behavior.

This idea is about an improvement to control some details of the
behavior. Keeping __traceback__ in more cases would be nothing to
"panic" about, if tracebacks were not such "unsafe" objects. I have
not yet seen any way for a framework to work around the references
issue without discarding the traceback object entirely and losing the
ability to re-raise.

-Greg


From mal at egenix.com  Sun Jun 27 19:20:07 2010
From: mal at egenix.com (M.-A. Lemburg)
Date: Sun, 27 Jun 2010 19:20:07 +0200
Subject: [Python-ideas] feature to make traceback objects usable without
 references to frame locals and globals
In-Reply-To: <AANLkTiky-c8Om6axca1Dw_OL2PigSexYJZ2F1QhzrCcO@mail.gmail.com>
References: <AANLkTimIdJDCuOw9n_zu3C7NpwL2rrZr7za16ZsVrVJh@mail.gmail.com>
	<4C25B918.8010307@canterbury.ac.nz> <4C25DE8A.1030209@egenix.com>
	<20100626140031.6adff16e@pitrou.net> <4C2676C0.3000407@egenix.com>
	<AANLkTinjp-S_dVAK-vWt2N7lkaQ75nDTCez2F9hYJNX8@mail.gmail.com>
	<4C26912C.8010709@egenix.com>
	<AANLkTiky-c8Om6axca1Dw_OL2PigSexYJZ2F1QhzrCcO@mail.gmail.com>
Message-ID: <4C278847.5040600@egenix.com>

Guido van Rossum wrote:
> On Sat, Jun 26, 2010 at 4:45 PM, M.-A. Lemburg <mal at egenix.com> wrote:
>> Also note that garbage collection will not necessarily do what
>> the user expects: it is well possible that big amounts of
>> memory will stay allocated as unused space in pymalloc.
>> This is not specific to the discussed case, but still a valid
>> user concern. Greg Hazel observed this situation in his
>> example.
> 
> Aha. So whereas the process size ballooned, there is no actual memory
> leak (his example threw away the exception each time through the
> loop), it's just that looking at process size is a bad way to assess
> memory leaks. I would like to reject this then as "that's just how
> Python's memory allocation works". As you say, it's not specific to
> this case; it comes up occasionally and it's just a matter of user
> education.

pymalloc has gotten a lot better since it was fixed in Python 2.5
to return unused chunks of memory to the OS, but we still have the
issue of fragmented arenas with cases of just a few bytes
keeping 256kB (the size of an arena) allocated.

> I don't think anything should be done about __traceback__ either --
> frameworks that have this problem can work around it in various ways.
> Or, at least I don't see a reason to panic and roll back the feature.
> Maybe eventually it can be improved by adding some kind of
> functionality to control some details of the behavior.

Not necessarily roll back the feature, but an implementation
that deliberately introduces circular references is not really
ideal.

Since tracebacks on exceptions are rarely used by applications,
I think it would be better to turn them into weak references.

The arguments against doing this in the PEP appear rather
weak compared to the potential issue for non-expert Python
programmers:

"""
    Making the __traceback__ attribute a weak reference would avoid the
    problems with cyclic garbage.  Unfortunately, it would make saving
    the Exception for later (as unittest does) more awkward, and it would
    not allow as much cleanup of the sys module.
"""

Special use cases that want to save the traceback for later use
can always explicitly convert the traceback into a real (non-weak)
reference. I don't understand the reference to the sys module
cleanup, so can't comment on that.

-- 
Marc-Andre Lemburg
eGenix.com



From solipsis at pitrou.net  Sun Jun 27 20:00:01 2010
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sun, 27 Jun 2010 20:00:01 +0200
Subject: [Python-ideas] feature to make traceback objects usable without
 references to frame locals and globals
References: <AANLkTimIdJDCuOw9n_zu3C7NpwL2rrZr7za16ZsVrVJh@mail.gmail.com>
	<4C25B918.8010307@canterbury.ac.nz> <4C25DE8A.1030209@egenix.com>
	<20100626140031.6adff16e@pitrou.net> <4C2676C0.3000407@egenix.com>
	<AANLkTinjp-S_dVAK-vWt2N7lkaQ75nDTCez2F9hYJNX8@mail.gmail.com>
	<4C26912C.8010709@egenix.com>
	<AANLkTiky-c8Om6axca1Dw_OL2PigSexYJZ2F1QhzrCcO@mail.gmail.com>
	<4C278847.5040600@egenix.com>
Message-ID: <20100627200001.4821ede9@pitrou.net>

On Sun, 27 Jun 2010 19:20:07 +0200
"M.-A. Lemburg" <mal at egenix.com> wrote:
> 
> Not necessarily roll back the feature, but an implementation
> that deliberately introduces circular references is not really
> ideal.
> 
> Since tracebacks on exceptions are rarely used by applications,
> I think it would be better to turn them into weak references.

How do you manage to get a strong reference before the traceback object
gets deleted?
Besides, an API which gives some information in an unreliable manner
does not seem very user-friendly to me.

I think I like the OP's idea better: allow releasing the references to
local and global variables from the frames in the traceback. These
frames keep a lot of potentially large objects alive - some of which
may also keep some OS resources busy.




From brett at python.org  Sun Jun 27 23:11:14 2010
From: brett at python.org (Brett Cannon)
Date: Sun, 27 Jun 2010 14:11:14 -0700
Subject: [Python-ideas] feature to make traceback objects usable without
	references to frame locals and globals
In-Reply-To: <4C278847.5040600@egenix.com>
References: <AANLkTimIdJDCuOw9n_zu3C7NpwL2rrZr7za16ZsVrVJh@mail.gmail.com> 
	<4C25B918.8010307@canterbury.ac.nz> <4C25DE8A.1030209@egenix.com> 
	<20100626140031.6adff16e@pitrou.net> <4C2676C0.3000407@egenix.com> 
	<AANLkTinjp-S_dVAK-vWt2N7lkaQ75nDTCez2F9hYJNX8@mail.gmail.com> 
	<4C26912C.8010709@egenix.com>
	<AANLkTiky-c8Om6axca1Dw_OL2PigSexYJZ2F1QhzrCcO@mail.gmail.com> 
	<4C278847.5040600@egenix.com>
Message-ID: <AANLkTinTqM3Tu2cNnti2ukXBgr94Q4FG3cVeI_rlMV7g@mail.gmail.com>

On Sun, Jun 27, 2010 at 10:20, M.-A. Lemburg <mal at egenix.com> wrote:
> Guido van Rossum wrote:
>> On Sat, Jun 26, 2010 at 4:45 PM, M.-A. Lemburg <mal at egenix.com> wrote:
>>> Also note that garbage collection will not necessarily do what
>>> the user expects: it is well possible that big amounts of
>>> memory will stay allocated as unused space in pymalloc.
>>> This is not specific to the discussed case, but still a valid
>>> user concern. Greg Hazel observed this situation in his
>>> example.
>>
>> Aha. So whereas the process size ballooned, there is no actual memory
>> leak (his example threw away the exception each time through the
>> loop), it's just that looking at process size is a bad way to assess
>> memory leaks. I would like to reject this then as "that's just how
>> Python's memory allocation works". As you say, it's not specific to
>> this case; it comes up occasionally and it's just a matter of user
>> education.
>
> pymalloc has gotten a lot better since it was fixed in Python 2.5
> to return unused chunks of memory to the OS, but we still have the
> issue of fragmented arenas with cases of just a few bytes
> keeping 256kB (the size of an arena) allocated.
>
>> I don't think anything should be done about __traceback__ either --
>> frameworks that have this problem can work around it in various ways.
>> Or, at least I don't see a reason to panic and roll back the feature.
>> Maybe eventually it can be improved by adding some kind of
>> functionality to control some details of the behavior.
>
> Not necessarily roll back the feature, but an implementation
> that deliberately introduces circular references is not really
> ideal.

But the circular reference only occurs if you store a reference
outside the 'except' clause; Python 3 explicitly deletes any caught
exception variable to prevent the loop.
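
The implicit deletion is observable today; this small demonstration (the names are mine, not from the thread) shows that only an explicit reference survives the except block:

```python
def capture():
    try:
        raise ValueError("boom")
    except ValueError as exc:
        saved = exc              # an explicit reference survives
    # Python 3 compiles the except block with an implicit 'del exc',
    # so the name is unbound after the block:
    try:
        exc
    except UnboundLocalError:
        return saved

e = capture()
assert isinstance(e, ValueError)
# Note: 'saved' in capture()'s frame plus e.__traceback__ referencing
# that frame is exactly the cycle being discussed.
```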

>
> Since tracebacks on exceptions are rarely used by applications,
> I think it would be better to turn them into weak references.
>

While I would be fine with that if many people saved raised
exceptions outside of an 'except' clause, I doubt that happens very
often, and there would be backward-compatibility issues at this point.


From timothy.c.delaney at gmail.com  Sun Jun 27 23:17:32 2010
From: timothy.c.delaney at gmail.com (Tim Delaney)
Date: Mon, 28 Jun 2010 07:17:32 +1000
Subject: [Python-ideas] feature to make traceback objects usable without
	references to frame locals and globals
In-Reply-To: <20100627200001.4821ede9@pitrou.net>
References: <AANLkTimIdJDCuOw9n_zu3C7NpwL2rrZr7za16ZsVrVJh@mail.gmail.com>
	<4C25B918.8010307@canterbury.ac.nz> <4C25DE8A.1030209@egenix.com>
	<20100626140031.6adff16e@pitrou.net> <4C2676C0.3000407@egenix.com>
	<AANLkTinjp-S_dVAK-vWt2N7lkaQ75nDTCez2F9hYJNX8@mail.gmail.com>
	<4C26912C.8010709@egenix.com>
	<AANLkTiky-c8Om6axca1Dw_OL2PigSexYJZ2F1QhzrCcO@mail.gmail.com>
	<4C278847.5040600@egenix.com> <20100627200001.4821ede9@pitrou.net>
Message-ID: <AANLkTikyv6Vjf-yjbKS16zT1_kDXQiicKsI4jAbdwQx7@mail.gmail.com>

On 28 June 2010 04:00, Antoine Pitrou <solipsis at pitrou.net> wrote:

> On Sun, 27 Jun 2010 19:20:07 +0200
> "M.-A. Lemburg" <mal at egenix.com> wrote:
> >
> > Not necessarily roll back the feature, but an implementation
> > that deliberately introduces circular references is not really
> > ideal.
> >
> > Since tracebacks on exceptions are rarely used by applications,
> > I think it would be better to turn them into weak references.
>
> How do you manage to get a strong reference before the traceback object
> gets deleted?
>

At the beginning of the 'except' block, a strong local (but hidden)
reference is obtained to the traceback (if it exists). This is deleted at
the end of the 'except' block.

> Besides, an API which gives some information in an unreliable manner
> does not seem very user-friendly to me.
>
> I think I like the OP's idea better: allow releasing the references to
> local and global variables from the frames in the traceback. These
> frames keep a lot of potentially large objects alive - some of which
> may also keep some OS resources busy.


I agree, with a variation - keep a weak reference to the frame in the
traceback, and have a way for the application to specify that it wants to
retain strong references to frames (so unittest for example can guarantee
access to locals and globals). Possibly a context manager could be used for
this, and decorators could be used to wrap an entire method in the context
manager.

A dummy frame would also be stored that contained enough info to replicate
the existing stack trace (file, line number, etc). A strong reference could
be obtained via the existing attribute, converted to a property, which does:

a. return the internal reference if it is not a dummy frame;
b. return the result of the weak reference if it still exists;
c. return the dummy frame reference.
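
A sketch of such a property follows; the class layout and attribute names are invented for illustration, not taken from any implementation:

```python
import weakref

class DummyFrame:
    """Hypothetical stand-in holding just enough to render a trace."""
    def __init__(self, filename, lineno, name):
        self.filename, self.lineno, self.name = filename, lineno, name

class Traceback:
    """Sketch of the proposed behaviour; names here are invented."""
    def __init__(self, frame_or_ref, dummy):
        self._frame = frame_or_ref   # weakref.ref or a strong reference
        self._dummy = dummy          # always-retained DummyFrame

    @property
    def tb_frame(self):
        if not isinstance(self._frame, weakref.ref):
            return self._frame       # a. forced strong reference
        frame = self._frame()
        if frame is not None:
            return frame             # b. weak reference still alive
        return self._dummy           # c. fall back to the dummy frame

class Frame:                         # placeholder for a real frame
    pass

dummy = DummyFrame("example.py", 42, "f")
frame = Frame()
tb = Traceback(weakref.ref(frame), dummy)
assert tb.tb_frame is frame          # while the frame is alive
del frame
assert tb.tb_frame is dummy          # after the frame is collected
```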

I think this gives us the best of all worlds:

1. No strong reference to locals/globals in tracebacks by default;

2. Able to force strong references to frames;

3. We don't lose the ability to compose a full and complete stack trace.

Tim Delaney
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/python-ideas/attachments/20100628/b5a0403c/attachment.html>

From ncoghlan at gmail.com  Mon Jun 28 08:25:51 2010
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 28 Jun 2010 16:25:51 +1000
Subject: [Python-ideas] explicitation lines in python ?
In-Reply-To: <4C26F710.4030902@gmail.com>
References: <4C24FEAF.4030304@gmail.com>
	<AANLkTikV18QVu8bKJC2XeVtYNJ02MJF0cMGibLIg1XmX@mail.gmail.com>
	<4C26F710.4030902@gmail.com>
Message-ID: <AANLkTilXEDYjuUUd4mpKGYUTCCLeZhVhsWGPqYx7N-ZJ@mail.gmail.com>

On Sun, Jun 27, 2010 at 5:00 PM, Daniel DELAY <danieldelay at gmail.com> wrote:
> On 26/06/2010 05:26, Chris Rebert wrote:
> That's why I would in fact prefer a third option
> c) a refinement expression is only evaluated once even if the refinement
> name (ex: "htmlline") appears twice or more, but that name is not published
> in locals().

That's definitely the same goal as the old "statement local namespace"
idea. Trawling through a couple of old threads, the various keywords
suggested were:

- with (pro: already a keyword, con: now has completely different
meaning from normal with statement)
- where (pro: same name as used in Haskell, con: new keyword, also
completely different meaning from the SQL meaning many programmers
will find more familiar)
- using (pro: completely made up name at the time, con: new keyword,
now conflicts with the C# equivalents to Python's with and import
statements)

These days, I'm personally inclined to favour Haskell's "where"
terminology, but that preference isn't particularly strong.

The availability of "nonlocal" binding semantics also makes the
semantics much easier to define than they were in those previous
discussions (the lack of clear semantics for name binding statements
with an attached local namespace was the major factor blocking
creation of a reference implementation for this proposal back then).

For example:

  c = sqrt(a*a + b*b) where:
      a = retrieve_a()
      b = retrieve_b()

could translate to something like:

  def _anon(): # *(see below)
      nonlocal c
      a = retrieve_a()
      b = retrieve_b()
      c = sqrt(a*a + b*b)
  _anon()

*(unlike Python code, the compiler can make truly anonymous functions
by storing them solely on the VM stack. It already does this when
executing class definitions).
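
Today's Python can approximate the intent, without new syntax, by abusing a decorator; this is only an illustration of the scoping effect, not the proposal itself:

```python
import math

def where(func):
    """Run the decorated 'suite' immediately and bind its result to the
    function's name - a rough emulation of the proposed 'where' block."""
    return func()

@where
def c():
    a = 3.0   # stands in for retrieve_a()
    b = 4.0   # stands in for retrieve_b()
    return math.sqrt(a * a + b * b)

assert c == 5.0   # the helper names a and b never leak into this scope
```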

The major question mark over the idea is whether or not it would
actually help or hinder readability in practice. This is a question
that could be addressed by someone trawling the standard library or
other large public Python code bases (e.g. things in SciPy, the
assorted web frameworks, bzr, hg) for existing code that could be made
clearer if less important details could easily be moved out of the way
into an indented suite.

It's also something that has no chance of being accepted until after
the language moratorium ends.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia


From mal at egenix.com  Mon Jun 28 13:14:21 2010
From: mal at egenix.com (M.-A. Lemburg)
Date: Mon, 28 Jun 2010 13:14:21 +0200
Subject: [Python-ideas] feature to make traceback objects usable without
 references to frame locals and globals
In-Reply-To: <20100627200001.4821ede9@pitrou.net>
References: <AANLkTimIdJDCuOw9n_zu3C7NpwL2rrZr7za16ZsVrVJh@mail.gmail.com>	<4C25B918.8010307@canterbury.ac.nz>
	<4C25DE8A.1030209@egenix.com>	<20100626140031.6adff16e@pitrou.net>
	<4C2676C0.3000407@egenix.com>	<AANLkTinjp-S_dVAK-vWt2N7lkaQ75nDTCez2F9hYJNX8@mail.gmail.com>	<4C26912C.8010709@egenix.com>	<AANLkTiky-c8Om6axca1Dw_OL2PigSexYJZ2F1QhzrCcO@mail.gmail.com>	<4C278847.5040600@egenix.com>
	<20100627200001.4821ede9@pitrou.net>
Message-ID: <4C28840D.5040703@egenix.com>

Antoine Pitrou wrote:
> On Sun, 27 Jun 2010 19:20:07 +0200
> "M.-A. Lemburg" <mal at egenix.com> wrote:
>>
>> Not necessarily roll back the feature, but an implementation
>> that deliberately introduces circular references is not really
>> ideal.
>>
>> Since tracebacks on exceptions are rarely used by applications,
>> I think it would be better to turn them into weak references.
> 
> How do you manage to get a strong reference before the traceback object
> gets deleted?

IIUC, the traceback object will still be alive during processing
of the except clause, so all you'd have to do is turn the weak reference
into a real one. Let's assume that the weakref object is called
.__traceback_weakref__ and the proxy is called .__traceback__ (to ensure
compatibility).

try:
    ...
except TypeError as exc:

    # Replace the weakref object with the referenced object
    exc.__traceback__ = exc.__traceback_weakref__()

    # Set the weakref object to None to have it collected and to
    # signal this operation to other code knowing about this
    # strategy.
    exc.__traceback_weakref__ = None

BTW: I wonder why proxy objects don't provide a direct access to
the weakref object they are using. That would make keeping that
extra variable around unnecessary.
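
The difference is easy to see with the stdlib weakref module: a `weakref.ref` must be called to reach the referent, while a proxy forwards attribute access but hides its underlying weakref object.

```python
import weakref

class Thing:
    pass

obj = Thing()
ref = weakref.ref(obj)       # explicit weakref object: call it to deref
proxy = weakref.proxy(obj)   # forwards attribute access transparently

# Turning the weakref into a strong reference is a plain call:
strong = ref()
assert strong is obj

# The proxy exposes no attribute for reaching its weakref object, so
# code holding only 'proxy' has no sanctioned way to pin the referent;
# you must keep the weakref.ref (or the referent) around yourself.
```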

> Besides, an API which gives some information in an unreliable manner
> does not seem very user-friendly to me.

The argument so far has been that most error processing happens
in the except clause itself, making it unnecessary to deal with
possible circular references. That is certainly true in many cases.

Now under that argument, using the traceback stored on an
exception outside the except clause is even less likely to be
needed, so I don't follow your concern that using a weak
reference is less user-friendly.

Perhaps someone could highlight a use case where the traceback
is needed outside the except clause ?!

> I think I like the OP's idea better: allow releasing the references to
> local and global variables from the frames in the traceback. These
> frames keep a lot of potentially large objects alive - some of which
> may also keep some OS resources busy.

It's certainly a good idea to pay extra attention to this in
Python3.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 28 2010)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2010-07-19: EuroPython 2010, Birmingham, UK                20 days to go

::: Try our new mxODBC.Connect Python Database Interface for free ! ::::


   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
    D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
           Registered at Amtsgericht Duesseldorf: HRB 46611
               http://www.egenix.com/company/contact/


From mal at egenix.com  Mon Jun 28 13:31:44 2010
From: mal at egenix.com (M.-A. Lemburg)
Date: Mon, 28 Jun 2010 13:31:44 +0200
Subject: [Python-ideas] feature to make traceback objects usable without
 references to frame locals and globals
In-Reply-To: <AANLkTimrrQMiYmtbaSyRRuBISHIKzukyXyJ9sMxgk24j@mail.gmail.com>
References: <AANLkTimIdJDCuOw9n_zu3C7NpwL2rrZr7za16ZsVrVJh@mail.gmail.com>	<4C25B918.8010307@canterbury.ac.nz>
	<4C25DE8A.1030209@egenix.com>	<20100626140031.6adff16e@pitrou.net>
	<4C2676C0.3000407@egenix.com>
	<AANLkTimrrQMiYmtbaSyRRuBISHIKzukyXyJ9sMxgk24j@mail.gmail.com>
Message-ID: <4C288820.2000707@egenix.com>

Nick Coghlan wrote:
> On Sun, Jun 27, 2010 at 7:53 AM, M.-A. Lemburg <mal at egenix.com> wrote:
>>> He's not talking about creating traceback objects outside the core, but
>>> being able to reuse tracebacks created by the core without keeping alive
>>> a whole chain of objects.
>>
>> With the question I was referring to the suggestion by
>> Greg Ewing in which he seemed to imply that Pyrex and Cython
>> create traceback objects.
> 
> When Python code calls into Pyrex/C code which then call back into
> Python, I understand they insert dummy frames into the tracebacks to
> make the call stack more complete.

Thanks for that bit of information. I suppose they do this for
better error reporting, right ?

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 28 2010)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2010-07-19: EuroPython 2010, Birmingham, UK                20 days to go

::: Try our new mxODBC.Connect Python Database Interface for free ! ::::


   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
    D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
           Registered at Amtsgericht Duesseldorf: HRB 46611
               http://www.egenix.com/company/contact/


From ncoghlan at gmail.com  Mon Jun 28 14:10:09 2010
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 28 Jun 2010 22:10:09 +1000
Subject: [Python-ideas] feature to make traceback objects usable without
	references to frame locals and globals
In-Reply-To: <4C288820.2000707@egenix.com>
References: <AANLkTimIdJDCuOw9n_zu3C7NpwL2rrZr7za16ZsVrVJh@mail.gmail.com>
	<4C25B918.8010307@canterbury.ac.nz> <4C25DE8A.1030209@egenix.com>
	<20100626140031.6adff16e@pitrou.net> <4C2676C0.3000407@egenix.com>
	<AANLkTimrrQMiYmtbaSyRRuBISHIKzukyXyJ9sMxgk24j@mail.gmail.com>
	<4C288820.2000707@egenix.com>
Message-ID: <AANLkTikLSCIysTagkVnNkapFxIyMtQIqeMG5bq5ERT2W@mail.gmail.com>

On Mon, Jun 28, 2010 at 9:31 PM, M.-A. Lemburg <mal at egenix.com> wrote:
> Nick Coghlan wrote:
>> When Python code calls into Pyrex/C code which then call back into
>> Python, I understand they insert dummy frames into the tracebacks to
>> make the call stack more complete.
>
> Thanks for that bit of information. I suppose they do this for
> better error reporting, right ?

I believe so, but keep in mind that I've never actually used them
myself, I've just seen this behaviour described elsewhere.

It makes sense for them to do it though, since following a call stack
through a plain C or C++ extension module can get rather confusing at
times.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia


From solipsis at pitrou.net  Mon Jun 28 14:39:24 2010
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 28 Jun 2010 14:39:24 +0200
Subject: [Python-ideas] feature to make traceback objects usable without
 references to frame locals and globals
References: <AANLkTimIdJDCuOw9n_zu3C7NpwL2rrZr7za16ZsVrVJh@mail.gmail.com>
	<4C25B918.8010307@canterbury.ac.nz> <4C25DE8A.1030209@egenix.com>
	<20100626140031.6adff16e@pitrou.net> <4C2676C0.3000407@egenix.com>
	<AANLkTinjp-S_dVAK-vWt2N7lkaQ75nDTCez2F9hYJNX8@mail.gmail.com>
	<4C26912C.8010709@egenix.com>
	<AANLkTiky-c8Om6axca1Dw_OL2PigSexYJZ2F1QhzrCcO@mail.gmail.com>
	<4C278847.5040600@egenix.com> <20100627200001.4821ede9@pitrou.net>
	<4C28840D.5040703@egenix.com>
Message-ID: <20100628143924.766c056d@pitrou.net>

On Mon, 28 Jun 2010 13:14:21 +0200
"M.-A. Lemburg" <mal at egenix.com> wrote:
> 
> BTW: I wonder why proxy objects don't provide a direct access to
> the weakref object they are using. That would make keeping that
> extra variable around unnecessary.

Probably because the proxy would then have an additional attribute
which isn't on the proxied object. Or, worse, it could also shadow
one of the proxied object's existing attributes.

> Perhaps someone could highlight a use case where the traceback
> is needed outside the except clause ?!

Well, it's needed if you want delayed error reporting and still display
a comprehensive stack trace (rather than just the exception message).
Frameworks often need this kind of behaviour; Twisted was already
mentioned in this thread. But, even outside of frameworks, there are
situations where you want to process a bunch of data and present all
processing errors at the end.

However, as the OP argued, most often you need the traceback in order
to display file names and line numbers, but you don't need the attached
variables (locals and globals).
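
One common way to get the file names and line numbers without keeping the frames alive is to render the traceback to text at catch time; the data here is made up for illustration:

```python
import traceback

errors = []
for text in ["1", "oops", "3"]:
    try:
        int(text)
    except ValueError:
        # Keep only the rendered stack (file names and line numbers);
        # the traceback object, its frames, and their locals can then
        # be collected immediately.
        errors.append(traceback.format_exc())

# All processing errors are presented at the end:
assert len(errors) == 1
assert "ValueError" in errors[0]
```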

Regards

Antoine.




From mal at egenix.com  Mon Jun 28 15:29:25 2010
From: mal at egenix.com (M.-A. Lemburg)
Date: Mon, 28 Jun 2010 15:29:25 +0200
Subject: [Python-ideas] feature to make traceback objects usable without
 references to frame locals and globals
In-Reply-To: <20100628143924.766c056d@pitrou.net>
References: <AANLkTimIdJDCuOw9n_zu3C7NpwL2rrZr7za16ZsVrVJh@mail.gmail.com>	<4C25B918.8010307@canterbury.ac.nz>
	<4C25DE8A.1030209@egenix.com>	<20100626140031.6adff16e@pitrou.net>
	<4C2676C0.3000407@egenix.com>	<AANLkTinjp-S_dVAK-vWt2N7lkaQ75nDTCez2F9hYJNX8@mail.gmail.com>	<4C26912C.8010709@egenix.com>	<AANLkTiky-c8Om6axca1Dw_OL2PigSexYJZ2F1QhzrCcO@mail.gmail.com>	<4C278847.5040600@egenix.com>
	<20100627200001.4821ede9@pitrou.net>	<4C28840D.5040703@egenix.com>
	<20100628143924.766c056d@pitrou.net>
Message-ID: <4C28A3B5.9000100@egenix.com>

Antoine Pitrou wrote:
> On Mon, 28 Jun 2010 13:14:21 +0200
> "M.-A. Lemburg" <mal at egenix.com> wrote:
>>
>> BTW: I wonder why proxy objects don't provide a direct access to
>> the weakref object they are using. That would make keeping that
>> extra variable around unnecessary.
> 
> Probably because the proxy would then have an additional attribute
> which isn't on the proxied object. Or, worse, it could also shadow
> one of the proxied object's existing attributes.

That's a very weak argument, IMHO. It all depends on the
naming of the attribute. Also note that the proxied object
won't know anything about that attribute, so it doesn't have
any side-effects.

We've used such an approach on our mxProxy object for years without
any problems or naming conflicts so far:

http://www.egenix.com/products/python/mxBase/mxProxy/
http://www.egenix.com/products/python/mxBase/mxProxy/doc/#_Toc162774452

>> Perhaps someone could highlight a use case where the traceback
>> is needed outside the except clause ?!
> 
> Well, it's needed if you want delayed error reporting and still display
> a comprehensive stack trace (rather than just the exception message).
> Frameworks often need this kind of behaviour; Twisted was already
> mentioned in this thread. But, even outside of frameworks, there are
> situations where you want to process a bunch of data and present all
> processing errors at the end.

I had already given that example myself, but in the cases I had
in mind the stack trace is not really needed: instead, you add the
relevant information to the list of errors directly from the
except clause, since the error information needed to report
the issues relates to data errors rather than to programming
errors.

> However, as the OP argued, most often you need the traceback in order
> to display file names and line numbers, but you don't need the attached
> variables (locals and globals).

I guess all this just needs to be highlighted in the documentation
to make programmers aware of the fact that they cannot just store
exception objects away without considering the consequences of this
first.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 28 2010)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2010-07-19: EuroPython 2010, Birmingham, UK                20 days to go

::: Try our new mxODBC.Connect Python Database Interface for free ! ::::


   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
    D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
           Registered at Amtsgericht Duesseldorf: HRB 46611
               http://www.egenix.com/company/contact/


From solipsis at pitrou.net  Mon Jun 28 16:10:18 2010
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 28 Jun 2010 16:10:18 +0200
Subject: [Python-ideas] feature to make traceback objects usable without
 references to frame locals and globals
References: <AANLkTimIdJDCuOw9n_zu3C7NpwL2rrZr7za16ZsVrVJh@mail.gmail.com>
	<4C25B918.8010307@canterbury.ac.nz> <4C25DE8A.1030209@egenix.com>
	<20100626140031.6adff16e@pitrou.net> <4C2676C0.3000407@egenix.com>
	<AANLkTinjp-S_dVAK-vWt2N7lkaQ75nDTCez2F9hYJNX8@mail.gmail.com>
	<4C26912C.8010709@egenix.com>
	<AANLkTiky-c8Om6axca1Dw_OL2PigSexYJZ2F1QhzrCcO@mail.gmail.com>
	<4C278847.5040600@egenix.com> <20100627200001.4821ede9@pitrou.net>
	<4C28840D.5040703@egenix.com> <20100628143924.766c056d@pitrou.net>
	<4C28A3B5.9000100@egenix.com>
Message-ID: <20100628161018.15705b73@pitrou.net>

On Mon, 28 Jun 2010 15:29:25 +0200
"M.-A. Lemburg" <mal at egenix.com> wrote:
> Antoine Pitrou wrote:
> > On Mon, 28 Jun 2010 13:14:21 +0200
> > "M.-A. Lemburg" <mal at egenix.com> wrote:
> >>
> >> BTW: I wonder why proxy objects don't provide a direct access to
> >> the weakref object they are using. That would make keeping that
> >> extra variable around unnecessary.
> > 
> > Probably because the proxy would then have an additional attribute
> > which isn't on the proxied object. Or, worse, it could also shadow
> > one of the proxied object's existing attributes.
> 
> That's a very weak argument, IMHO. It all depends on the
> naming of the attribute.

What name do you suggest that isn't cumbersome or awkward, and yet
doesn't present any risk of conflict with attributes of the proxied
object?

> We've used such an approach on our mxProxy object for years without
> any problems or naming conflicts so far:
> 
> http://www.egenix.com/products/python/mxBase/mxProxy/
> http://www.egenix.com/products/python/mxBase/mxProxy/doc/#_Toc162774452

Well, if some features of mxProxy are useful, perhaps it would be worth
integrating them in the stdlib.

Regards

Antoine.




From mal at egenix.com  Mon Jun 28 16:20:53 2010
From: mal at egenix.com (M.-A. Lemburg)
Date: Mon, 28 Jun 2010 16:20:53 +0200
Subject: [Python-ideas] feature to make traceback objects usable without
 references to frame locals and globals
In-Reply-To: <20100628161018.15705b73@pitrou.net>
References: <AANLkTimIdJDCuOw9n_zu3C7NpwL2rrZr7za16ZsVrVJh@mail.gmail.com>	<4C25B918.8010307@canterbury.ac.nz>
	<4C25DE8A.1030209@egenix.com>	<20100626140031.6adff16e@pitrou.net>
	<4C2676C0.3000407@egenix.com>	<AANLkTinjp-S_dVAK-vWt2N7lkaQ75nDTCez2F9hYJNX8@mail.gmail.com>	<4C26912C.8010709@egenix.com>	<AANLkTiky-c8Om6axca1Dw_OL2PigSexYJZ2F1QhzrCcO@mail.gmail.com>	<4C278847.5040600@egenix.com>
	<20100627200001.4821ede9@pitrou.net>	<4C28840D.5040703@egenix.com>
	<20100628143924.766c056d@pitrou.net>	<4C28A3B5.9000100@egenix.com>
	<20100628161018.15705b73@pitrou.net>
Message-ID: <4C28AFC5.4090309@egenix.com>

Antoine Pitrou wrote:
> On Mon, 28 Jun 2010 15:29:25 +0200
> "M.-A. Lemburg" <mal at egenix.com> wrote:
>> Antoine Pitrou wrote:
>>> On Mon, 28 Jun 2010 13:14:21 +0200
>>> "M.-A. Lemburg" <mal at egenix.com> wrote:
>>>>
>>>> BTW: I wonder why proxy objects don't provide a direct access to
>>>> the weakref object they are using. That would make keeping that
>>>> extra variable around unnecessary.
>>>
>>> Probably because the proxy would then have an additional attribute
>>> which isn't on the proxied object. Or, worse, it could also shadow
>>> one of the proxied object's existing attributes.
>>
>> That's a very weak argument, IMHO. It all depends on the
>> naming of the attribute.
> 
> What name do you suggest that isn't cumbersome or awkward, and yet
> doesn't present any risk of conflict with attributes of the proxied
> object?

If you want to play safe, use something like '__weakref_object__'.

In mxProxy, we simply reserved all methods and attributes that start
with 'proxy_' for use by the proxy object itself. That hasn't
caused a conflict so far.

>> We've used such an approach on our mxProxy object for years without
>> any problems or naming conflicts so far:
>>
>> http://www.egenix.com/products/python/mxBase/mxProxy/
>> http://www.egenix.com/products/python/mxBase/mxProxy/doc/#_Toc162774452
> 
> Well, if some features of mxProxy are useful, perhaps it would be worth
> integrating them in the stdlib.

We mainly use mxProxy for low-level access control to objects,
and as a way to implement a cleanup protocol for breaking
circular references early.

The weak reference feature was a later add-on and also serves
as an additional way to prevent creation of circular references.

All this was designed prior to Python implementing the
GC protocol which now implements something similar to the
cleanup protocol we have in mxProxy.

Unlike the standard Python weakref implementation, mxProxy doesn't
require changes to the proxied objects in order to create
a weak reference. It works for all objects.
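
The restriction in the standard implementation is visible with `__slots__`: a type must leave room for a `__weakref__` slot before its instances can be weakly referenced.

```python
import weakref

class Slotted:
    __slots__ = ("x",)           # no __weakref__ slot

class Weakrefable:
    __slots__ = ("x", "__weakref__")

try:
    weakref.ref(Slotted())
    supported = True
except TypeError:
    supported = False
assert not supported             # standard weakrefs need type support

r = weakref.ref(Weakrefable())   # works once the slot is declared
```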

I don't know why Fred used a different approach.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Jun 28 2010)
>>> Python/Zope Consulting and Support ...        http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ...             http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...        http://python.egenix.com/
________________________________________________________________________
2010-07-19: EuroPython 2010, Birmingham, UK                20 days to go

::: Try our new mxODBC.Connect Python Database Interface for free ! ::::


   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
    D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
           Registered at Amtsgericht Duesseldorf: HRB 46611
               http://www.egenix.com/company/contact/


From guido at python.org  Tue Jun 29 01:09:55 2010
From: guido at python.org (Guido van Rossum)
Date: Mon, 28 Jun 2010 16:09:55 -0700
Subject: [Python-ideas] [Python-Dev] [ANN]: "newthreading" - an approach
	to simplified thread usage, and a path to getting rid of the GIL
In-Reply-To: <4C262D37.7020807@animats.com>
References: <4C259A25.1060705@animats.com> <4C2600B4.5020503@voidspace.org.uk>
	<AANLkTikpDB4FsFCESF2Ub0ZhXJiZyERZF6zLAjUovcr8@mail.gmail.com> 
	<4C262D37.7020807@animats.com>
Message-ID: <AANLkTinj9h4W_1kj8_7YyOO2aRlPJi7eljKzf1Idyhx7@mail.gmail.com>

I'm moving this thread to python-ideas, where it belongs.

I've looked at the implementation code (even stepped through it with
pdb!), read the sample/test code, and read the two papers on
animats.com fairly closely (they have a lot of overlap, and the memory
model described below seems copied verbatim from
http://www.animats.com/papers/languages/pythonconcurrency.html version
0.8).

Some reactions (trying to hide my responses to the details of the code):

- First of all, I'm very happy to see radical ideas proposed, even if
they are at present unrealistic. We need a big brainstorm to come up
with ideas from which an eventual solution to the multicore problem
might be chosen. (Jesse Noller's multiprocessing is another; Adam
Olsen's work yet another, at a different end of the spectrum.)

- The proposed new semantics (frozen objects, memory model,
auto-freezing of globals, enforcement of naming conventions) are
radically different from Python's current semantics. They will break
every 3rd party library in many more ways than Python 3. This is not
surprising given the goals of the proposal (and its roots in Adam
Olsen's work) but places a huge roadblock for acceptance. I see no
choice but to keep trying to come up with a compromise that is more
palatable and compatible without throwing away all the advantages. As
it now stands, the proposal might as well be a new and different
language.

- SynchronizedObject looks like a mixture of a Java synchronized class
(a non-standard concept in Java but easily understood as a class all
whose public methods are synchronized) and a condition variable (which
has the same semantics of releasing the lock while waiting but without
crawling the stack for other locks to release). It looks like the
examples showing off SynchronizedObject could be implemented just as
elegantly using a condition variable (and voluntary abstention from
using shared mutable objects).

- If the goal is to experiment with new control structures, I
recommend decoupling them from the memory model and frozen objects,
instead relying (as is traditional in Python) on programmer caution to
avoid races. This would make it much easier to see how programmers
respond to the new control structures.

- You could add the freeze() function for voluntary use, and you could
even add automatic wrapping of arguments and return values for certain
classes using a class decorator or a metaclass, but the performance
overhead makes this unlikely to win over many converts. I don't see
much use for the "whole program freezing" done by the current
prototype -- there are way too many backdoors in Python for the
prototype approach to be anywhere near foolproof, and if we want a
non-foolproof approach, voluntary constraint (and, in some cases,
voluntary, i.e. explicit, wrapping of modules or classes) would work
just as well.

- For a larger-scale experiment with the new memory model and semantic
restrictions (or would it be better to call them syntactic
restrictions? -- after all they are about statically detectable
properties like naming conventions) I recommend looking at PyPy, which
has as one of its explicitly stated project goals easy experimentation
with different object models.

- I'm sure I've forgotten something, but I wanted to keep my impressions fresh.

- Again, John, thanks for taking the time to come up with an
implementation of your idea!

--Guido

On Sat, Jun 26, 2010 at 9:39 AM, John Nagle <nagle at animats.com> wrote:
> On 6/26/2010 7:44 AM, Jesse Noller wrote:
>>
>> On Sat, Jun 26, 2010 at 9:29 AM, Michael Foord
>> <fuzzyman at voidspace.org.uk> wrote:
>>>
>>> On 26/06/2010 07:11, John Nagle wrote:
>>>>
>>>> We have just released a proof-of-concept implementation of a new
>>>> approach to thread management - "newthreading".
>
> ....
>
>>> The import * form is considered bad practice in *general* and
>>> should not be recommended unless there is a good reason.
>
>   I agree.  I just did that to make the examples cleaner.
>
>>> however the introduction of free-threading in Python has not been
>>> hampered by lack of synchronization primitives but by the
>>> difficulty of changing the interpreter without unduly impacting
>>> single threaded code.
>
>    That's what I'm trying to address here.
>
>>> Providing an alternative garbage collection mechanism other than
>>> reference counting would be a more interesting first-step as far as
>>> I can see, as that removes the locking required around every access
>>> to an object (which currently touches the reference count).
>>> Introducing free-threading by *changing* the threading semantics
>>> (so you can't share non-frozen objects between threads) would not
>>> be acceptable. That comment is likely to be based on a
>>> misunderstanding of your future intentions though. :-)
>
>    This work comes out of a discussion a few of us had at a restaurant
> in Palo Alto after a Stanford talk by the group at Facebook which
> is building a JIT compiler for PHP.  We were discussing how to
> make threading both safe for the average programmer and efficient.
> Javascript and PHP don't have threads at all; Python has safe
> threading, but it's slow.  C/C++/Java all have race condition
> problems, of course.  The Facebook guy pointed out that you
> can't redefine a function dynamically in PHP, and they get
> a performance win in their JIT by exploiting this.
>
>    I haven't gone into the memory model in enough detail in the
> technical paper.  The memory model I envision for this has three
> memory zones:
>
>    1.  Shared fully-immutable objects: primarily strings, numbers,
> and tuples, all of whose elements are fully immutable.  These can
> be shared without locking, and reclaimed by a concurrent garbage
> collector like Boehm's.  They have no destructors, so finalization
> is not an issue.
>
>    2.  Local objects.  These are managed as at present, and
> require no locking.  These can either be thread-local, or local
> to a synchronized object.  There are no links between local
> objects under different "ownership".  Whether each thread and
> object has its own private heap, or whether there's a common heap with
> locks at the allocator is an implementation decision.
>
>    3.  Shared mutable objects: mostly synchronized objects, but
> also immutable objects like tuples which contain references
> to objects that aren't fully immutable.  These are the high-overhead
> objects, and require locking during reference count updates, or
> atomic reference count operations if supported by the hardware.
> The general idea is to minimize the number of objects in this
> zone.
>
>    The zone of an object is determined when the object is created,
> and never changes.  This is relatively simple to implement.
> Tuples (and frozensets, frozendicts, etc.) are normally zone 2
> objects.  Only "freeze" creates collections in zones 1 and 3.
> Synchronized objects are always created in zone 3.
> There are no difficult handoffs, where an object that was previously
> thread-local now has to be shared and has to acquire locks during
> the transition.
>
>    Existing interlinked data structures, like parse trees and GUIs,
> are by default zone 2 objects, with the same semantics as at
> present.  They can be placed inside a SynchronizedObject if
> desired, which makes them usable from multiple threads.
> That's optional; they're thread-local otherwise.
>
>    The rationale behind "freezing" some of the language semantics
> when the program goes multi-thread comes from two sources -
> Adam Olsen's Safethread work, and the acceptance of the
> multiprocessing module.  Olsen tried to retain all the dynamism of
> the language in a multithreaded environment, but locking all the
> underlying dictionaries was a boat-anchor on the whole system,
> and slowed things down so much that he abandoned the project.
> The Unladen Swallow documentation indicates that early thinking
> on the project was that Olsen's approach would allow getting
> rid of the GIL, but later notes indicate that no path to a
> GIL-free JIT system is currently in development.
>
>    The multiprocessing module provides semantics similar to
> threading with "freezing".  Data passed between processes is "frozen"
> by pickling.  Processes can't modify each other's code.  Restrictive
> though the multiprocessing module is, it appears to be useful.
> It is sometimes recommended as the Pythonic approach to multi-core CPUs.
> This is an indication that "freezing" is not unacceptable to the
> user community.
>
>    Most of the real-world use cases for extreme dynamism
> involve events that happen during startup.  Configuration files are
> read, modules are selectively included, functions are overridden, tables
> of references to functions are set up, regular expressions are compiled,
> and the code is brought into the appropriately configured state.  Then
> the worker threads are started and the real work starts. The
> "newthreading" approach allows all that.
>
>    After two decades of failed attempts to remove the Global
> Interpreter Lock without making performance worse, it is perhaps
> time to take a harder look at scalable threading semantics.
>
>                                        John Nagle
>                                        Animats

-- 
--Guido van Rossum (python.org/~guido)


From solipsis at pitrou.net  Tue Jun 29 01:40:23 2010
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Tue, 29 Jun 2010 01:40:23 +0200
Subject: [Python-ideas] [Python-Dev] [ANN]: "newthreading" - an approach
 to simplified thread usage, and a path to getting rid of the GIL
References: <4C259A25.1060705@animats.com> <4C2600B4.5020503@voidspace.org.uk>
	<AANLkTikpDB4FsFCESF2Ub0ZhXJiZyERZF6zLAjUovcr8@mail.gmail.com>
	<4C262D37.7020807@animats.com>
	<AANLkTinj9h4W_1kj8_7YyOO2aRlPJi7eljKzf1Idyhx7@mail.gmail.com>
Message-ID: <20100629014023.057a3d44@pitrou.net>

On Mon, 28 Jun 2010 16:09:55 -0700
Guido van Rossum <guido at python.org> wrote:
> I'm moving this thread to python-ideas, where it belongs.
[...]

For the record, I really think the solution to the "try to remove the
GIL" problem is to... try to remove it.
I believe it implies several preparatory steps:
- take full control of memory allocation
- on top of that, devise a full garbage collector (probably including a
  notion of external references such that existing ways of writing C
  extensions are still correct)
- then, do the tedious, delicate grunt work of adding locking to
  critical structures without slowing them down (too much)

Trying to invent schemes to make multithreading easier to program with
is a nice endeavour in itself, but quite orthogonal IMO.

Regards

Antoine.




From fuzzyman at gmail.com  Tue Jun 29 01:54:11 2010
From: fuzzyman at gmail.com (Michael Foord)
Date: Tue, 29 Jun 2010 00:54:11 +0100
Subject: [Python-ideas] [Python-Dev] [ANN]: "newthreading" - an approach
	to simplified thread usage, and a path to getting rid of the GIL
In-Reply-To: <20100629014023.057a3d44@pitrou.net>
References: <4C259A25.1060705@animats.com> <4C2600B4.5020503@voidspace.org.uk>
	<AANLkTikpDB4FsFCESF2Ub0ZhXJiZyERZF6zLAjUovcr8@mail.gmail.com>
	<4C262D37.7020807@animats.com>
	<AANLkTinj9h4W_1kj8_7YyOO2aRlPJi7eljKzf1Idyhx7@mail.gmail.com>
	<20100629014023.057a3d44@pitrou.net>
Message-ID: <9BE39202-534C-4969-BD3C-6E94FF062384@gmail.com>



On 29 Jun 2010, at 00:40, Antoine Pitrou <solipsis at pitrou.net> wrote:

> On Mon, 28 Jun 2010 16:09:55 -0700
> Guido van Rossum <guido at python.org> wrote:
>> I'm moving this thread to python-ideas, where it belongs.
> [...]
> 
> For the record, I really think the solution to the "try to remove the
> GIL" problem is to... try to remove it.
> I believe it implies several preparatory steps:
> - take full control of memory allocation
> - on top of that, devise a full garbage collector (probably including a
>  notion of external references such that existing ways of writing C
>  extensions are still correct)
> - then, do the tedious, delicate grunt work of adding locking to
>  critical structures without slowing them down (too much)
> 
> Trying to invent schemes to make multithreading easier to program with
> is a nice endeavour in itself, but quite orthogonal IMO.


Full agreement. Ironclad, a project to enable the use of Python C extensions with IronPython (which has a generational moving GC), uses a hybrid approach. It allows C extensions to use reference counting but manipulates the reference count so that it can only drop to zero once there are no references left on the IronPython side. There are complications with this approach, which Ironclad handles, but that would be much easier when we have control over the implementation (Ironclad doesn't change the IronPython implementation).

No link I'm afraid, sending from a mobile device.

Incidentally, Ironclad also 'fakes' the GIL as IronPython has no GIL. In theory this could cause problems for C extensions that aren't thread safe but that hasn't yet been a problem in production (Ironclad is mainly used with numpy).


Michael Foord





> 
> Regards
> 
> Antoine.
> 
> 
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> http://mail.python.org/mailman/listinfo/python-ideas


From greg.ewing at canterbury.ac.nz  Tue Jun 29 02:06:45 2010
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Tue, 29 Jun 2010 12:06:45 +1200
Subject: [Python-ideas] feature to make traceback objects usable without
 references to frame locals and globals
In-Reply-To: <4C288820.2000707@egenix.com>
References: <AANLkTimIdJDCuOw9n_zu3C7NpwL2rrZr7za16ZsVrVJh@mail.gmail.com>
	<4C25B918.8010307@canterbury.ac.nz> <4C25DE8A.1030209@egenix.com>
	<20100626140031.6adff16e@pitrou.net> <4C2676C0.3000407@egenix.com>
	<AANLkTimrrQMiYmtbaSyRRuBISHIKzukyXyJ9sMxgk24j@mail.gmail.com>
	<4C288820.2000707@egenix.com>
Message-ID: <4C293915.9060002@canterbury.ac.nz>

M.-A. Lemburg wrote:
> Nick Coghlan wrote:
> 
>>When Python code calls into Pyrex/C code which then call back into
>>Python, I understand they insert dummy frames into the tracebacks to
>>make the call stack more complete.
> 
> I suppose they do this for
> better error reporting, right ?

Yes.

This is one reason I would like to be able to have traceback
objects without a corresponding frame. Having to create an
entire frame just to have somewhere to put the file name is
very annoying.

Also, being able to remove the whole frame from a traceback
object seems like a cleaner and more complete way to implement
what the OP wanted.

-- 
Greg



From greg.ewing at canterbury.ac.nz  Tue Jun 29 02:21:23 2010
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Tue, 29 Jun 2010 12:21:23 +1200
Subject: [Python-ideas] feature to make traceback objects usable without
 references to frame locals and globals
In-Reply-To: <20100628161018.15705b73@pitrou.net>
References: <AANLkTimIdJDCuOw9n_zu3C7NpwL2rrZr7za16ZsVrVJh@mail.gmail.com>
	<4C25B918.8010307@canterbury.ac.nz> <4C25DE8A.1030209@egenix.com>
	<20100626140031.6adff16e@pitrou.net> <4C2676C0.3000407@egenix.com>
	<AANLkTinjp-S_dVAK-vWt2N7lkaQ75nDTCez2F9hYJNX8@mail.gmail.com>
	<4C26912C.8010709@egenix.com>
	<AANLkTiky-c8Om6axca1Dw_OL2PigSexYJZ2F1QhzrCcO@mail.gmail.com>
	<4C278847.5040600@egenix.com> <20100627200001.4821ede9@pitrou.net>
	<4C28840D.5040703@egenix.com> <20100628143924.766c056d@pitrou.net>
	<4C28A3B5.9000100@egenix.com> <20100628161018.15705b73@pitrou.net>
Message-ID: <4C293C83.1050804@canterbury.ac.nz>

>>Antoine Pitrou wrote:
>>
>>>"M.-A. Lemburg" <mal at egenix.com> wrote:
>>>
>>>>BTW: I wonder why proxy objects don't provide a direct access to
>>>>the weakref object they are using.
>>>
>>>Probably because the proxy would then have an additional attribute
>>>which isn't on the proxied object.

This problem could be avoided by providing a function to
extract the proxied object.

-- 
Greg


From rhamph at gmail.com  Tue Jun 29 03:17:11 2010
From: rhamph at gmail.com (Adam Olsen)
Date: Mon, 28 Jun 2010 19:17:11 -0600
Subject: [Python-ideas] [Python-Dev] [ANN]: "newthreading" - an approach
	to simplified thread usage, and a path to getting rid of the GIL
In-Reply-To: <4C262D37.7020807@animats.com>
References: <4C259A25.1060705@animats.com> <4C2600B4.5020503@voidspace.org.uk>
	<AANLkTikpDB4FsFCESF2Ub0ZhXJiZyERZF6zLAjUovcr8@mail.gmail.com>
	<4C262D37.7020807@animats.com>
Message-ID: <AANLkTikFxdH-00aYhTDMmw0imHX7-XzYcY3SZ_zxVNhk@mail.gmail.com>

On Sat, Jun 26, 2010 at 10:39, John Nagle <nagle at animats.com> wrote:
>    The rationale behind "freezing" some of the language semantics
> when the program goes multi-thread comes from two sources -
> Adam Olsen's Safethread work, and the acceptance of the
> multiprocessing module.  Olsen tried to retain all the dynamism of
> the language in a multithreaded environment, but locking all the
> underlying dictionaries was a boat-anchor on the whole system,
> and slowed things down so much that he abandoned the project.
> The Unladen Swallow documentation indicates that early thinking
> on the project was that Olsen's approach would allow getting
> rid of the GIL, but later notes indicate that no path to a
> GIL-free JIT system is currently in development.

That's not true.  Refcounting was the boat-anchor, not dicts.  I was
unable to come up with a relatively simple replacement that scaled
fully.

The dicts shared as module globals and class dicts were a design
issue, but more of an ideological one: concurrency mentality says you
should only share immutable objects.  Python prefers ad-hoc design,
where you can do what you want so long as it's not particularly nasty.
I was unable to find a way to have both, so I declared the Python
mentality the winner.

The shareddict I came up with uses a read write lock, so that it's
safe when you do mutate and doesn't bottleneck when you don't mutate.
The only thing fancy was my method of checkpointing when doing a
readlock->writelock transition, but there's a hundred other ways to
accomplish that.
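
A minimal version of that read-write locking idea could look like the
sketch below. This is a from-scratch illustration, not Adam's actual
shareddict: it omits the readlock-to-writelock checkpointing he mentions
and gives no preference to waiting writers.

```python
import threading

class RWLock:
    """Minimal readers-writer lock: many concurrent readers, or one
    writer; a writer waits until all readers have finished."""

    def __init__(self):
        self._lock = threading.Lock()
        self._readers_done = threading.Condition(self._lock)
        self._readers = 0

    def acquire_read(self):
        with self._lock:
            self._readers += 1

    def release_read(self):
        with self._lock:
            self._readers -= 1
            if self._readers == 0:
                self._readers_done.notify_all()

    def acquire_write(self):
        self._lock.acquire()
        # wait() releases self._lock so readers can drain
        while self._readers:
            self._readers_done.wait()

    def release_write(self):
        self._lock.release()

class SharedDict:
    """Dict wrapper that doesn't bottleneck on reads and stays safe
    under mutation -- the property described above."""

    def __init__(self):
        self._data = {}
        self._rw = RWLock()

    def __getitem__(self, key):
        self._rw.acquire_read()
        try:
            return self._data[key]
        finally:
            self._rw.release_read()

    def __setitem__(self, key, value):
        self._rw.acquire_write()
        try:
            self._data[key] = value
        finally:
            self._rw.release_write()
```

The uncontended read path here is still a lock acquire/release pair, so
it only avoids the bottleneck relative to a single exclusive lock under
read-heavy contention.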


From rhamph at gmail.com  Tue Jun 29 03:23:20 2010
From: rhamph at gmail.com (Adam Olsen)
Date: Mon, 28 Jun 2010 19:23:20 -0600
Subject: [Python-ideas] [Python-Dev] [ANN]: "newthreading" - an approach
	to simplified thread usage, and a path to getting rid of the GIL
In-Reply-To: <20100629014023.057a3d44@pitrou.net>
References: <4C259A25.1060705@animats.com> <4C2600B4.5020503@voidspace.org.uk>
	<AANLkTikpDB4FsFCESF2Ub0ZhXJiZyERZF6zLAjUovcr8@mail.gmail.com>
	<4C262D37.7020807@animats.com>
	<AANLkTinj9h4W_1kj8_7YyOO2aRlPJi7eljKzf1Idyhx7@mail.gmail.com>
	<20100629014023.057a3d44@pitrou.net>
Message-ID: <AANLkTinjEywDDcUPxyz6QI-zH17iMG5YZwsuij1reMwm@mail.gmail.com>

On Mon, Jun 28, 2010 at 17:40, Antoine Pitrou <solipsis at pitrou.net> wrote:
> On Mon, 28 Jun 2010 16:09:55 -0700
> Guido van Rossum <guido at python.org> wrote:
>> I'm moving this thread to python-ideas, where it belongs.
> [...]
>
> For the record, I really think the solution to the "try to remove the
> GIL" problem is to... try to remove it.
> I believe it implies several preparatory steps:
> - take full control of memory allocation
> - on top of that, devise a full garbage collector (probably including a
>   notion of external references such that existing ways of writing C
>   extensions are still correct)
> - then, do the tedious, delicate grunt work of adding locking to
>   critical structures without slowing them down (too much)
>
> Trying to invent schemes to make multithreading easier to program with
> is a nice endeavour in itself, but quite orthogonal IMO.

+1.

Designing an API in C for a precise GC is tedious, and would probably
be very ugly, but it's entirely doable.  We simply need the will to go
through with it.

I can't say what the overhead would look like, but so long as it
scales well and it's a compile-time option it should find plenty of
users.


From tjreedy at udel.edu  Tue Jun 29 03:26:52 2010
From: tjreedy at udel.edu (Terry Reedy)
Date: Mon, 28 Jun 2010 21:26:52 -0400
Subject: [Python-ideas] feature to make traceback objects usable without
 references to frame locals and globals
In-Reply-To: <20100628143924.766c056d@pitrou.net>
References: <AANLkTimIdJDCuOw9n_zu3C7NpwL2rrZr7za16ZsVrVJh@mail.gmail.com>	<4C25B918.8010307@canterbury.ac.nz>
	<4C25DE8A.1030209@egenix.com>	<20100626140031.6adff16e@pitrou.net>
	<4C2676C0.3000407@egenix.com>	<AANLkTinjp-S_dVAK-vWt2N7lkaQ75nDTCez2F9hYJNX8@mail.gmail.com>	<4C26912C.8010709@egenix.com>	<AANLkTiky-c8Om6axca1Dw_OL2PigSexYJZ2F1QhzrCcO@mail.gmail.com>	<4C278847.5040600@egenix.com>
	<20100627200001.4821ede9@pitrou.net>	<4C28840D.5040703@egenix.com>
	<20100628143924.766c056d@pitrou.net>
Message-ID: <i0bi4s$o5d$1@dough.gmane.org>

On 6/28/2010 8:39 AM, Antoine Pitrou wrote:

> However, as the OP argued, most often you need the traceback in order
> to display file names and line numbers, but you don't need the attached
> variables (locals and globals).

It then seems to me that one should extract the file name and line 
number info one wants to save before exiting the exception clause and 
let the exception and traceback go on exit. Is a library function 
needed to make extraction easier?
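
The stdlib already has most of this: traceback.extract_tb() copies out
(filename, line number, function, source line) entries, after which the
traceback itself, and the frames it pins, can be dropped. For example:

```python
import sys
import traceback

def where_did_it_fail():
    """Return (filename, lineno) pairs for the active exception,
    without keeping the traceback (and its frame locals) alive."""
    tb = sys.exc_info()[2]
    # extract_tb() copies the data out of the frames, so the
    # traceback can be released as soon as we return.
    return [(entry[0], entry[1]) for entry in traceback.extract_tb(tb)]

try:
    int("not a number")
except ValueError:
    locations = where_did_it_fail()
```

As the follow-up below notes, this only covers display; it does not let
you re-raise later, since that needs a real traceback object.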

Perhaps this "The reason for this [deletion on exit] is that with the 
traceback attached to them, exceptions will form a reference cycle with 
the stack frame, keeping all locals in that frame alive until the next 
garbage collection occurs." could be strengthened into a better warning 
that this "That means that you have to assign the exception to a 
different name if you want to be able to refer to it after the except 
clause. " may really, really not be a good idea.

-- 
Terry Jan Reedy



From ghazel at gmail.com  Tue Jun 29 03:34:39 2010
From: ghazel at gmail.com (ghazel at gmail.com)
Date: Mon, 28 Jun 2010 18:34:39 -0700
Subject: [Python-ideas] feature to make traceback objects usable without
	references to frame locals and globals
In-Reply-To: <i0bi4s$o5d$1@dough.gmane.org>
References: <AANLkTimIdJDCuOw9n_zu3C7NpwL2rrZr7za16ZsVrVJh@mail.gmail.com> 
	<4C25B918.8010307@canterbury.ac.nz> <4C25DE8A.1030209@egenix.com> 
	<20100626140031.6adff16e@pitrou.net> <4C2676C0.3000407@egenix.com> 
	<AANLkTinjp-S_dVAK-vWt2N7lkaQ75nDTCez2F9hYJNX8@mail.gmail.com> 
	<4C26912C.8010709@egenix.com>
	<AANLkTiky-c8Om6axca1Dw_OL2PigSexYJZ2F1QhzrCcO@mail.gmail.com> 
	<4C278847.5040600@egenix.com> <20100627200001.4821ede9@pitrou.net> 
	<4C28840D.5040703@egenix.com> <20100628143924.766c056d@pitrou.net> 
	<i0bi4s$o5d$1@dough.gmane.org>
Message-ID: <AANLkTikqGLyAOZbweKEtWDgJHfnT6ju28qmVRrHp_3pH@mail.gmail.com>

On Mon, Jun 28, 2010 at 6:26 PM, Terry Reedy <tjreedy at udel.edu> wrote:
> On 6/28/2010 8:39 AM, Antoine Pitrou wrote:
>
>> However, as the OP argued, most often you need the traceback in order
>> to display file names and line numbers, but you don't need the attached
>> variables (locals and globals).
>
> It then seems to me that one should extract the file name and line number
> info one wants to save before exiting the exception clause and let the
> exception and traceback go on exit. Is a library function needed
> to make extraction easier?

Unfortunately this is only half of the task. To re-raise the exception
with the traceback later, a real traceback object is needed. To my
knowledge there is no way to create a real traceback object from
Python given only file name and line numbers.

-Greg


From mark at qtrac.eu  Tue Jun 29 10:20:56 2010
From: mark at qtrac.eu (Mark Summerfield)
Date: Tue, 29 Jun 2010 09:20:56 +0100
Subject: [Python-ideas] Maybe allow br"" or rb"" e.g.,
	for bytes regexes in Py3?
Message-ID: <201006290920.56104.mark@qtrac.eu>

Hi,

Python 3 has two string prefixes r"" for raw strings and b"" for bytes.

So if you want to create a regex based on bytes, then as far as I can
tell you have to do something like this:

    FONTNAME_RE = re.compile(r"/FontName\s+/(\S+)".encode("ascii"))
    # or
    FONTNAME_RE = re.compile(b"/FontName\\s+/(\\S+)")

I think it would be much nicer if one could write:

    FONTNAME_RE = re.compile(br"/FontName\s+/(\S+)")
    # or
    FONTNAME_RE = re.compile(rb"/FontName\s+/(\S+)")

I _slightly_ prefer rb"" to br"" but either would be great:-)

Why would you want a bytes regex?

In my case I am reading PostScript files and PostScript .pfa font files
so that I can embed the latter into the former. But I don't know what
encoding these files use beyond the fact that it is ASCII or some ASCII
superset like Latin1. So in true Python style I don't assume: instead I
read the files as bytes and do all my processing using bytes, at no
point decoding since I only ever insert ASCII characters. I don't think
this is a rare example: with Python 3's clean separation between strings
& bytes (a major advance IMO), I think there will often be cases where
all the processing is done using bytes.
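
A short self-contained version of the .pfa case, using the pattern from
above spelled with doubled backslashes (since the br"" prefix is the
point in question); the sample data is made up for illustration:

```python
import re

# the escaped b"" spelling that br"/FontName\s+/(\S+)" would replace
FONTNAME_RE = re.compile(b"/FontName\\s+/(\\S+)")

data = b"%!PS-AdobeFont-1.0\n/FontName /NimbusSans-Regular def\n"
match = FONTNAME_RE.search(data)
# everything stays bytes end to end; nothing is ever decoded
font_name = match.group(1)
```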

-- 
Mark Summerfield, Qtrac Ltd, www.qtrac.eu
    C++, Python, Qt, PyQt - training and consultancy
        "Advanced Qt Programming" - ISBN 0321635906
            http://www.qtrac.eu/aqpbook.html

                I ordered a Dell netbook with Ubuntu...
       I got no OS, no apology, no solution, & no refund (so far)
               http://www.qtrac.eu/dont-buy-dell.html


From ncoghlan at gmail.com  Tue Jun 29 14:05:07 2010
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 29 Jun 2010 22:05:07 +1000
Subject: [Python-ideas] [Python-Dev] [ANN]: "newthreading" - an approach
	to simplified thread usage, and a path to getting rid of the GIL
In-Reply-To: <9BE39202-534C-4969-BD3C-6E94FF062384@gmail.com>
References: <4C259A25.1060705@animats.com> <4C2600B4.5020503@voidspace.org.uk>
	<AANLkTikpDB4FsFCESF2Ub0ZhXJiZyERZF6zLAjUovcr8@mail.gmail.com>
	<4C262D37.7020807@animats.com>
	<AANLkTinj9h4W_1kj8_7YyOO2aRlPJi7eljKzf1Idyhx7@mail.gmail.com>
	<20100629014023.057a3d44@pitrou.net>
	<9BE39202-534C-4969-BD3C-6E94FF062384@gmail.com>
Message-ID: <AANLkTikHmbj8U3p9RcOZMrhh_x0_wMojpzafQhHK24Yr@mail.gmail.com>

On Tue, Jun 29, 2010 at 9:54 AM, Michael Foord <fuzzyman at gmail.com> wrote:
> Full agreement. Ironclad, a project to enable the use of Python C extensions with IronPython - which has a generational moving GC, uses a hybrid approach. It allows C extensions to use reference counting but manipulates the reference count so that it can only drop to zero once there are no references left on the IronPython side. There are complications with this approach, which Ironclad handles, but that would be much easier when we have control over the implementation (Ironclad doesn't change the IronPython implementation).
>
> No link I'm afraid, sending from a mobile device.
>
> Incidentally, Ironclad also 'fakes' the GIL as IronPython has no GIL. In theory this could cause problems for C extensions that aren't thread safe but that hasn't yet been a problem in production (Ironclad is mainly used with numpy).

How much do you know about Resolver's licensing setup for Ironclad?
Combining Ironclad with a Boehm GC enabled PyMalloc mechanism might be
a fruitful avenue of research on the way to a free-threading CPython
implementation. Losing deterministic refcounting for pure Python code
is no longer as big an issue as it once was, as many of the tricks it
used to enable are now better covered by context managers.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia


From ncoghlan at gmail.com  Tue Jun 29 14:12:18 2010
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 29 Jun 2010 22:12:18 +1000
Subject: [Python-ideas] Maybe allow br"" or rb"" e.g.,
	for bytes regexes 	in Py3?
In-Reply-To: <201006290920.56104.mark@qtrac.eu>
References: <201006290920.56104.mark@qtrac.eu>
Message-ID: <AANLkTikgQJoiLM6bP8cNFfJ9ZsVNMYpw855zhU21k3e0@mail.gmail.com>

On Tue, Jun 29, 2010 at 6:20 PM, Mark Summerfield <mark at qtrac.eu> wrote:
>     FONTNAME_RE = re.compile(br"/FontName\s+/(\S+)")
>     # or
>     FONTNAME_RE = re.compile(rb"/FontName\s+/(\S+)")
>
> I _slightly_ prefer rb"" to br"" but either would be great:-)

According to my local build, we already picked 'br':

Python 3.2a0 (py3k:81943, Jun 12 2010, 22:02:56)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> "\t"
'\t'
>>> r"\t"
'\\t'
>>> b"\t"
b'\t'
>>> br"\t"
b'\\t'

I installed the system python3 to confirm that this isn't new:

Python 3.1.2 (r312:79147, Apr 15 2010, 15:35:48)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> br"\t"
b'\\t'

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia


From mark at qtrac.eu  Tue Jun 29 14:34:57 2010
From: mark at qtrac.eu (Mark Summerfield)
Date: Tue, 29 Jun 2010 13:34:57 +0100
Subject: [Python-ideas] Maybe allow br"" or rb"" e.g.,
	for bytes regexes in Py3?
In-Reply-To: <AANLkTikgQJoiLM6bP8cNFfJ9ZsVNMYpw855zhU21k3e0@mail.gmail.com>
References: <201006290920.56104.mark@qtrac.eu>
	<AANLkTikgQJoiLM6bP8cNFfJ9ZsVNMYpw855zhU21k3e0@mail.gmail.com>
Message-ID: <201006291334.57667.mark@qtrac.eu>

You're right, so I've raised it as a doc bug:
http://bugs.python.org/issue9114

On 2010-06-29, Nick Coghlan wrote:
> On Tue, Jun 29, 2010 at 6:20 PM, Mark Summerfield <mark at qtrac.eu> wrote:
> >    FONTNAME_RE = re.compile(br"/FontName\s+/(\S+)")
> >    # or
> >    FONTNAME_RE = re.compile(rb"/FontName\s+/(\S+)")
> > 
> > I _slightly_ prefer rb"" to br"" but either would be great:-)
> 
> According to my local build, we already picked 'br':
> 
> Python 3.2a0 (py3k:81943, Jun 12 2010, 22:02:56)
> [GCC 4.4.3] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> 
> >>> "\t"
> 
> '\t'
> 
> >>> r"\t"
> 
> '\\t'
> 
> >>> b"\t"
> 
> b'\t'
> 
> >>> br"\t"
> 
> b'\\t'
> 
> I installed the system python3 to confirm that this isn't new:
> 
> Python 3.1.2 (r312:79147, Apr 15 2010, 15:35:48)
> [GCC 4.4.3] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> 
> >>> br"\t"
> 
> b'\\t'
> 
> Cheers,
> Nick.


-- 
Mark Summerfield, Qtrac Ltd, www.qtrac.eu
    C++, Python, Qt, PyQt - training and consultancy
        "Advanced Qt Programming" - ISBN 0321635906
            http://www.qtrac.eu/aqpbook.html



From stephen at xemacs.org  Tue Jun 29 14:34:58 2010
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Tue, 29 Jun 2010 21:34:58 +0900
Subject: [Python-ideas]  Maybe allow br"" or rb"" e.g.,
	for bytes regexes in Py3?
In-Reply-To: <201006290920.56104.mark@qtrac.eu>
References: <201006290920.56104.mark@qtrac.eu>
Message-ID: <87zkyeaufh.fsf@uwakimon.sk.tsukuba.ac.jp>

Mark Summerfield writes:

 > Python 3 has two string prefixes r"" for raw strings and b"" for
 > bytes.

And you *can* combine them, but it needs to be in the right order
(although I'm not sure that's intentional):

steve at uwakimon ~ $ python3.1
Python 3.1.2 (release31-maint, May 12 2010, 20:15:06) 
[GCC 4.3.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> rb"a\rc"
  File "<stdin>", line 1
    rb"a\rc"
           ^
SyntaxError: invalid syntax
>>> br"abc"
b'abc'
>>> br"a\rc"
b'a\\rc'
>>> 

Watch out for that time machine!


From fuzzyman at gmail.com  Tue Jun 29 14:55:40 2010
From: fuzzyman at gmail.com (Michael Foord)
Date: Tue, 29 Jun 2010 13:55:40 +0100
Subject: [Python-ideas] [Python-Dev] [ANN]: "newthreading" - an approach
	to simplified thread usage, and a path to getting rid of the GIL
In-Reply-To: <AANLkTikHmbj8U3p9RcOZMrhh_x0_wMojpzafQhHK24Yr@mail.gmail.com>
References: <4C259A25.1060705@animats.com> <4C2600B4.5020503@voidspace.org.uk>
	<AANLkTikpDB4FsFCESF2Ub0ZhXJiZyERZF6zLAjUovcr8@mail.gmail.com>
	<4C262D37.7020807@animats.com>
	<AANLkTinj9h4W_1kj8_7YyOO2aRlPJi7eljKzf1Idyhx7@mail.gmail.com>
	<20100629014023.057a3d44@pitrou.net>
	<9BE39202-534C-4969-BD3C-6E94FF062384@gmail.com>
	<AANLkTikHmbj8U3p9RcOZMrhh_x0_wMojpzafQhHK24Yr@mail.gmail.com>
Message-ID: <AANLkTinw3ZTfwdUjjD5t5oVmLa-QCPihrJi04EunR5Xl@mail.gmail.com>

On 29 June 2010 13:05, Nick Coghlan <ncoghlan at gmail.com> wrote:

> On Tue, Jun 29, 2010 at 9:54 AM, Michael Foord <fuzzyman at gmail.com> wrote:
> > Full agreement. Ironclad, a project to enable the use of Python C
> extensions with IronPython - which has a generational moving GC, uses a
> hybrid approach. It allows C extensions to use reference counting but
> manipulates the reference count so that it can only drop to zero once there
> are no references left on the IronPython side. There are complications with
> this approach, which Ironclad handles, but that would be much easier when we
> have control over the implementation (Ironclad doesn't change the IronPython
> implementation).
> >
> > No link I'm afraid, sending from a mobile device.
> >
> > Incidentally, Ironclad also 'fakes' the GIL as IronPython has no GIL. In
> theory this could cause problems for C extensions that aren't thread safe
> but that hasn't yet been a problem in production (Ironclad is mainly used
> with numpy).
>
> How much do you know about Resolver's licensing setup for Ironclad?
>


http://www.resolversystems.com/products/ironclad/

Ironclad is MIT licensed, but it is *very* tightly coupled to IronPython and
.NET (it works primarily through the .NET FFI and uses a fair bit of C#). It
may well be useful for inspiration, but I don't know how re-usable it is
likely to be.



> Combining Ironclad with a Boehm GC enabled PyMalloc mechanism might be
>


Boehm is a conservative collector, so whilst it may well be a "good first
step" it can leak memory like a sieve... Mono has always had this problem
and is finally getting rid of its conservative collector. The PyPy guys have
experience in this area.


> a fruitful avenue of research on the way to a free-threading CPython
> implementation. Losing deterministic refcounting for pure Python code
> is no longer as big an issue as it once was, as many of the tricks it
> used to enable are now better covered by context managers.
>
>
Right, and *most* of the alternative implementations are not reference
counted - so relying on reference counting semantics has been discouraged by
users of these platforms for a while now.
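[As an illustrative aside, assuming nothing beyond the stdlib: the kind of idiom being replaced is refcount-driven file closing, which a with statement makes deterministic on every implementation:]

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

# Refcount-dependent style: on CPython the file object is finalized
# (flushed and closed) as soon as the last reference disappears, but a
# non-refcounted implementation may delay this until a GC cycle runs.
open(path, "w").write("hello")

# Context-manager style: the file is closed on block exit regardless
# of the implementation's garbage collection strategy.
with open(path, "w") as f:
    f.write("hello")

with open(path) as f:
    print(f.read())

os.unlink(path)
```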

All the best,

Michael



> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
>



-- 
http://www.voidspace.org.uk

From greg.ewing at canterbury.ac.nz  Wed Jun 30 02:50:35 2010
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Wed, 30 Jun 2010 12:50:35 +1200
Subject: [Python-ideas] feature to make traceback objects usable without
 references to frame locals and globals
In-Reply-To: <i0bi4s$o5d$1@dough.gmane.org>
References: <AANLkTimIdJDCuOw9n_zu3C7NpwL2rrZr7za16ZsVrVJh@mail.gmail.com>
	<4C25B918.8010307@canterbury.ac.nz> <4C25DE8A.1030209@egenix.com>
	<20100626140031.6adff16e@pitrou.net> <4C2676C0.3000407@egenix.com>
	<AANLkTinjp-S_dVAK-vWt2N7lkaQ75nDTCez2F9hYJNX8@mail.gmail.com>
	<4C26912C.8010709@egenix.com>
	<AANLkTiky-c8Om6axca1Dw_OL2PigSexYJZ2F1QhzrCcO@mail.gmail.com>
	<4C278847.5040600@egenix.com> <20100627200001.4821ede9@pitrou.net>
	<4C28840D.5040703@egenix.com> <20100628143924.766c056d@pitrou.net>
	<i0bi4s$o5d$1@dough.gmane.org>
Message-ID: <4C2A94DB.5070902@canterbury.ac.nz>

Terry Reedy wrote:

> It then seems to me that one should extract the file name and line 
> number info one wants to save before exiting the exception clause and 
> let the traceback exception and traceback go on exit. Is a library 
> function needed to make extraction easier?

That would require building your own custom traceback
structure that would be incompatible with any of the
standard functions available for formatting and printing
tracebacks.
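[For the limited file-name/line-number case, though, a sketch of what Terry describes is possible with only the stdlib traceback module; extract_tb produces frame-free summaries that format_list still accepts, even if formatters needing a real traceback object will not take them:]

```python
import sys
import traceback

def summarize_current_exception():
    # Keep only (filename, lineno, name, text) entries; the frames
    # themselves, and hence their locals and globals, are not retained.
    return traceback.extract_tb(sys.exc_info()[2])

try:
    1 / 0
except ZeroDivisionError:
    entries = summarize_current_exception()

# The extracted entries remain usable with the standard formatter.
print("".join(traceback.format_list(entries)))
```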

-- 
Greg


From greg.ewing at canterbury.ac.nz  Wed Jun 30 03:07:44 2010
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Wed, 30 Jun 2010 13:07:44 +1200
Subject: [Python-ideas] Maybe allow br"" or rb"" e.g.,
 for bytes regexes in Py3?
In-Reply-To: <AANLkTikgQJoiLM6bP8cNFfJ9ZsVNMYpw855zhU21k3e0@mail.gmail.com>
References: <201006290920.56104.mark@qtrac.eu>
	<AANLkTikgQJoiLM6bP8cNFfJ9ZsVNMYpw855zhU21k3e0@mail.gmail.com>
Message-ID: <4C2A98E0.2010504@canterbury.ac.nz>

Nick Coghlan wrote:

> According to my local build, we already picked 'br':

Wouldn't "raw bytes" sound better than "bytes raw"?
Or do the Dutch say it differently? :-)

-- 
Greg


From guido at python.org  Wed Jun 30 03:13:02 2010
From: guido at python.org (Guido van Rossum)
Date: Tue, 29 Jun 2010 18:13:02 -0700
Subject: [Python-ideas] Maybe allow br"" or rb"" e.g.,
	for bytes regexes 	in Py3?
In-Reply-To: <4C2A98E0.2010504@canterbury.ac.nz>
References: <201006290920.56104.mark@qtrac.eu>
	<AANLkTikgQJoiLM6bP8cNFfJ9ZsVNMYpw855zhU21k3e0@mail.gmail.com> 
	<4C2A98E0.2010504@canterbury.ac.nz>
Message-ID: <AANLkTikEIQbkRmVI-rW7BF-mdoJ_eEiYzXB7NUISc2HS@mail.gmail.com>

On Tue, Jun 29, 2010 at 6:07 PM, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
> Nick Coghlan wrote:
>
>> According to my local build, we already picked 'br':
>
> Wouldn't "raw bytes" sound better than "bytes raw"?
> Or do the Dutch say it differently? :-)

I can pronounce "brrrrr" but I can't say "rrrrrb". :-)

-- 
--Guido van Rossum (python.org/~guido)


From python at mrabarnett.plus.com  Wed Jun 30 04:04:23 2010
From: python at mrabarnett.plus.com (MRAB)
Date: Wed, 30 Jun 2010 03:04:23 +0100
Subject: [Python-ideas] Maybe allow br"" or rb"" e.g.,
 for bytes regexes 	in Py3?
In-Reply-To: <AANLkTikEIQbkRmVI-rW7BF-mdoJ_eEiYzXB7NUISc2HS@mail.gmail.com>
References: <201006290920.56104.mark@qtrac.eu>	<AANLkTikgQJoiLM6bP8cNFfJ9ZsVNMYpw855zhU21k3e0@mail.gmail.com>
	<4C2A98E0.2010504@canterbury.ac.nz>
	<AANLkTikEIQbkRmVI-rW7BF-mdoJ_eEiYzXB7NUISc2HS@mail.gmail.com>
Message-ID: <4C2AA627.8040502@mrabarnett.plus.com>

Guido van Rossum wrote:
> On Tue, Jun 29, 2010 at 6:07 PM, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:
>> Nick Coghlan wrote:
>>
>>> According to my local build, we already picked 'br':
>> Wouldn't "raw bytes" sound better than "bytes raw"?
>> Or do the Dutch say it differently? :-)
> 
> I can pronounce "brrrrr" but I can't say "rrrrrb". :-)
> 
And, of course, Python 2 has 'ur', but not 'ru'.


From tjreedy at udel.edu  Wed Jun 30 19:33:27 2010
From: tjreedy at udel.edu (Terry Reedy)
Date: Wed, 30 Jun 2010 13:33:27 -0400
Subject: [Python-ideas] Maybe allow br"" or rb"" e.g.,
	for bytes regexes  in Py3?
In-Reply-To: <4C2AA627.8040502@mrabarnett.plus.com>
References: <201006290920.56104.mark@qtrac.eu>	<AANLkTikgQJoiLM6bP8cNFfJ9ZsVNMYpw855zhU21k3e0@mail.gmail.com>	<4C2A98E0.2010504@canterbury.ac.nz>	<AANLkTikEIQbkRmVI-rW7BF-mdoJ_eEiYzXB7NUISc2HS@mail.gmail.com>
	<4C2AA627.8040502@mrabarnett.plus.com>
Message-ID: <i0fv56$5ig$1@dough.gmane.org>

On 6/29/2010 10:04 PM, MRAB wrote:
> Guido van Rossum wrote:
>> On Tue, Jun 29, 2010 at 6:07 PM, Greg Ewing
>> <greg.ewing at canterbury.ac.nz> wrote:
>>> Nick Coghlan wrote:
>>>
>>>> According to my local build, we already picked 'br':
>>> Wouldn't "raw bytes" sound better than "bytes raw"?
>>> Or do the Dutch say it differently? :-)
>>
>> I can pronounce "brrrrr" but I can't say "rrrrrb". :-)
>>
> And, of course, Python 2 has 'ur', but not 'ru'.

Even though most say or think 'raw unicode' rather than 'unicode raw', 
ur and br strike me as logically correct. In both Py2 and Py3, 
unprefixed string literals are str literals. The r prefix disables most 
of the cooking of the literal. The u and b prefixes are effectively 
abbreviations for unicode() and bytes() calls on, I presume, the buffer 
part of a partially formed str object. In other words, br'abc' has the 
same effect as bytes(r'abc', 'ascii') but is easier to write and, I 
presume, faster to compute.

It is easy for people who only use ASCII chars in Python code to forget 
that Python 3 code is now actually a sequence of Unicode chars rather 
than of (extended) ASCII chars.
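[A small illustration, assuming nothing beyond Python 3 itself: the r in 
br'' disables escape processing before the bytes conversion happens.]

```python
# In a raw bytes literal the backslash is kept verbatim...
raw = br'\n'
print(raw, len(raw))       # two bytes: a backslash and an 'n'

# ...whereas the cooked literal processes the escape into one byte.
cooked = b'\n'
print(cooked, len(cooked))

# The equivalent explicit construction needs an encoding in Python 3:
assert br'\n' == bytes(r'\n', 'ascii')
```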

-- 
Terry Jan Reedy