2012/3/27 guido.van.rossum <python-checkins(a)python.org>:
> changeset: 4152:b9f43fe69691
> user: Guido van Rossum <guido(a)google.com>
> date: Mon Mar 26 20:35:14 2012 -0700
> Approve PEP 411.
> -Status: Draft
> +Status: Approved
The pep0 module doesn't accept the "Approved" status. I suppose you
meant "Accepted", so I changed the status accordingly. If not, please
revert my change and fix the pep0 module.
My impression is that the original rationale for PendingDeprecationWarning
versus DeprecationWarning was to have a warning that stays off by default
until the last release before removal. But having DeprecationWarning on by
default was found to be too obnoxious, so it too is now off by default. So do
we still need PendingDeprecationWarning? My impression is that it is mostly
unused, as it is a nuisance to remember to change from one to the other.
The deprecation message can always indicate the planned removal time. I
searched the Developer's Guide for both deprecation and
DeprecationWarning and found nothing.
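The point that the two categories behave almost identically under the default
filters can be seen directly. A minimal sketch (the function and removal
version are made up for illustration):

```python
import warnings

def old_api():
    # Hypothetical deprecated function (not from this thread): the message
    # itself names the planned removal release, as suggested above.
    warnings.warn("old_api() is deprecated and will be removed in 3.4",
                  DeprecationWarning, stacklevel=2)

# Both warning categories are silenced by the default filters; turn them
# on explicitly and record what gets raised.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    old_api()
    warnings.warn("still here, but scheduled", PendingDeprecationWarning)

categories = [w.category for w in caught]
```

Without the explicit `simplefilter("always")`, neither warning would be shown,
which is exactly the redundancy being questioned.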
Terry Jan Reedy
Here's another try, mainly with the default browser font size, more contrast,
and a collapsible sidebar again:
I've also added a little questionable gimmick to the sidebar (when you collapse
it and expand it again, the content is shown at your current scroll location).
PEP 411 -- Provisional packages in the Python standard library
Has been updated with all accumulated feedback from list discussions.
Here it is: http://www.python.org/dev/peps/pep-0411/ (the text is also
pasted in the bottom of this email).
The PEP received mostly positive feedback. The only undecided point is
where to specify that the package is provisional. Currently the PEP
mandates specifying it in the documentation and in the docstring.
Other suggestions were to put it in the code, either as a
__provisional__ attribute on the module, or collect all such modules
in a single sys.provisional list.
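For concreteness, the two in-code alternatives could look roughly like this.
This is a sketch only: `__provisional__` and `sys.provisional` are the names
proposed in the discussion, not existing APIs, and the module name is made up:

```python
import sys
import types

# Stand-in for a newly added stdlib package.
mod = types.ModuleType("shiny")
mod.__provisional__ = True        # variant 1: per-module attribute

# Variant 2: a central registry on sys (does not actually exist today).
sys.provisional = getattr(sys, "provisional", [])
sys.provisional.append(mod.__name__)

def is_provisional(module):
    """True if the module is marked provisional by either convention."""
    return (getattr(module, "__provisional__", False)
            or module.__name__ in getattr(sys, "provisional", ()))
```

Either marker would let tools and users test provisional status
programmatically, which the documentation-only approach does not.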
According to http://blog.python.org/2012/03/2012-language-summit-report.html,
the PEP was discussed at the language summit and viewed positively
overall, although no final decision was reached.
ISTM a decision needs to be taken, which is why I request
pronouncement, with a recommendation on the requirement the PEP should
make of provisional modules (process details).
Title: Provisional packages in the Python standard library
Author: Nick Coghlan <ncoghlan(a)gmail.com>,
Eli Bendersky <eliben(a)gmail.com>
The process of including a new package into the Python standard library is
hindered by the API lock-in and promise of backward compatibility implied by
a package being formally part of Python. This PEP describes a methodology
for marking a standard library package "provisional" for the period of a single
minor release. A provisional package may have its API modified prior to
"graduating" into a "stable" state. On one hand, this state provides the
package with the benefits of being formally part of the Python distribution.
On the other hand, the core development team explicitly states that no promises
are made with regard to the stability of the package's API, which may
change for the next release. While it is considered an unlikely outcome,
such packages may even be removed from the standard library without a
deprecation period if the concerns regarding their API or maintenance prove
well-founded.
Proposal - a documented provisional state
Whenever the Python core development team decides that a new package should be
included into the standard library, but isn't entirely sure about whether the
package's API is optimal, the package can be included and marked as
"provisional".
In the next minor release, the package may either be "graduated" into a normal
"stable" state in the standard library, remain in provisional state, or be
rejected and removed entirely from the Python source tree. If the package ends
up graduating into the stable state after being provisional, its API may
be changed according to accumulated feedback. The core development team
explicitly makes no guarantees about API stability and backward compatibility
of provisional packages.
Marking a package provisional
A package will be marked provisional by a notice in its documentation page and
its docstring. The following paragraph will be added as a note at the top of
the documentation page:
The <X> package has been included in the standard library on a
provisional basis. Backwards incompatible changes (up to and including
removal of the package) may occur if deemed necessary by the core
developers.
The phrase "provisional basis" will then be a link to the glossary term
"provisional package", defined as:
A provisional package is one which has been deliberately excluded from the
standard library's normal backwards compatibility guarantees. While major
changes to such packages are not expected, as long as they are marked
provisional, backwards incompatible changes (up to and including removal of
the package) may occur if deemed necessary by core developers. Such changes
will not be made gratuitously - they will occur only if serious flaws are
uncovered that were missed prior to the inclusion of the package.
This process allows the standard library to continue to evolve over time,
without locking in problematic design errors for extended periods of time.
See PEP 411 for more details.
The following will be added to the start of the package's docstring:
The API of this package is currently provisional. Refer to the
documentation for details.
Moving a package from the provisional to the stable state simply implies
removing these notes from its documentation page and docstring.
Which packages should go through the provisional state
We expect most packages proposed for addition into the Python standard library
to go through a minor release in the provisional state. There may, however,
be some exceptions, such as packages that use a pre-defined API (for example
``lzma``, which generally follows the API of the existing ``bz2`` package),
or packages with an API that already has wide acceptance in the Python
development community.
In any case, packages that are proposed to be added to the standard library,
whether via the provisional state or directly, must fulfill the acceptance
conditions set by PEP 2.
Criteria for "graduation"
In principle, most provisional packages should eventually graduate to the
stable standard library. Some reasons for not graduating are:
* The package may prove to be unstable or fragile, without sufficient developer
support to maintain it.
* A much better alternative package may be found during the preview release.
Essentially, the decision will be made by the core developers on a per-case
basis. The point to emphasize here is that a package's inclusion in the
standard library as "provisional" in some release does not guarantee it will
continue being part of Python in the next release.
Benefits for the core development team
Currently, the core developers are reluctant to add new interfaces to
the standard library. This is because as soon as they're published in a
release, API design mistakes get locked in due to backward compatibility
concerns.
By gating all major API additions through some kind of a provisional mechanism
for a full release, we get one full release cycle of community feedback
before we lock in the APIs with our standard backward compatibility guarantee.
We can also start integrating provisional packages with the rest of the standard
library early, so long as we make it clear to packagers that the provisional
packages should not be considered optional. The only difference between
provisional APIs and the rest of the standard library is that provisional APIs
are explicitly exempted from the usual backward compatibility guarantees.
Benefits for end users
For future end users, the broadest benefit lies in a better "out-of-the-box"
experience - rather than being told "oh, the standard library tools for task X
are horrible, download this 3rd party library instead", those superior tools
are more likely to be just an import away.
For environments where developers are required to conduct due diligence on
their upstream dependencies (severely harming the cost-effectiveness of, or
even ruling out entirely, much of the material on PyPI), the key benefit lies
in ensuring that all packages in the provisional state are clearly under
python-dev's aegis from at least the following perspectives:
* Licensing: Redistributed by the PSF under a Contributor Licensing Agreement.
* Documentation: The documentation of the package is published and organized via
the standard Python documentation tools (i.e. ReST source, output generated
with Sphinx and published on http://docs.python.org).
* Testing: The package test suites are run on the python.org buildbot fleet
and results published via http://www.python.org/dev/buildbot.
* Issue management: Bugs and feature requests are handled on a public
  issue tracker.
* Source control: The master repository for the software is published
  on a site accessible to the general public.
Candidates for provisional inclusion into the standard library
For Python 3.3, there are a number of clear current candidates:
* ``regex`` (http://pypi.python.org/pypi/regex) - approved by Guido [#]_.
* ``daemon`` (PEP 3143)
* ``ipaddr`` (PEP 3144)
Other possible future use cases include:
* Improved HTTP modules (e.g. ``requests``)
* HTML 5 parsing support (e.g. ``html5lib``)
* Improved URL/URI/IRI parsing
* A standard image API (PEP 368)
* Improved encapsulation of import state (PEP 406)
* Standard event loop API (PEP 3153)
* A binary version of WSGI for Python 3 (e.g. PEP 444)
* Generic function support (e.g. ``simplegeneric``)
Rejected alternatives and variations
See PEP 408.
.. [#] http://mail.python.org/pipermail/python-dev/2012-January/115962.html
This document has been placed in the public domain.
I added two functions to the time module in Python 3.3: wallclock()
and monotonic(). I'm unable to explain the difference between these
two functions, even though I wrote them :-) wallclock() is supposed to be
more accurate than time() but has an unspecified starting point.
monotonic() is similar except that it is monotonic: it cannot go
backward. monotonic() may not be available or may fail, whereas wallclock()
is always available and works, but I think the two functions are redundant.
I prefer to keep only monotonic(), because it is not affected by system
clock updates and should help fix issues with NTP updates in functions
implementing a timeout.
What do you think?
monotonic() has 3 implementations:
* Windows: QueryPerformanceCounter() with QueryPerformanceFrequency()
* Mac OS X: mach_absolute_time() with mach_timebase_info()
* UNIX: clock_gettime(CLOCK_MONOTONIC_RAW) or clock_gettime(CLOCK_MONOTONIC)
wallclock() has 3 implementations:
* Windows: QueryPerformanceCounter() with QueryPerformanceFrequency(),
with a fallback to GetSystemTimeAsFileTime() if
QueryPerformanceFrequency() failed
* UNIX: clock_gettime(CLOCK_MONOTONIC_RAW),
clock_gettime(CLOCK_MONOTONIC) or clock_gettime(CLOCK_REALTIME), with
a fallback to gettimeofday() if clock_gettime(*) failed
* Otherwise: gettimeofday()
(wallclock() should also use mach_absolute_time() on Mac OS X.)
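The timeout use case motivating monotonic() can be sketched like this
(wait_for() is a made-up helper, written against the time.monotonic() that
shipped in 3.3):

```python
import time

def wait_for(predicate, timeout, interval=0.01):
    # A timeout loop built on a monotonic clock is immune to system clock
    # (e.g. NTP) adjustments, which would skew a time.time()-based deadline.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return predicate()
```

With time.time() instead, an NTP step backward could extend the wait
indefinitely, and a step forward could expire it prematurely.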
>> changeset: 75850:7355550d5357
>> user: Stefan Krah <skrah(a)bytereef.org>
>> date: Wed Mar 21 18:25:23 2012 +0100
>> Issue #7652: Integrate the decimal floating point libmpdec library to speed
>> up the decimal module. Performance gains of the new C implementation are
>> between 12x and 80x, depending on the application.
Congrats Stefan! And thanks for the huge chunk of code.
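A quick way to see the speedup on one's own machine is a throwaway timeit run.
This is only a sketch; the 12x-80x figures in the commit message come from the
issue's benchmarks, not from anything this trivial:

```python
import timeit
from decimal import Decimal

# Time repeated Decimal multiplication; with the C-accelerated module
# this runs far faster than the old pure-Python implementation.
setup = ("from decimal import Decimal; "
         "a = Decimal('1.23456789'); b = Decimal('9.87654321')")
elapsed = timeit.timeit("a * b", setup=setup, number=100_000)

# Exact decimal semantics are preserved regardless of the backend:
assert Decimal('0.1') + Decimal('0.2') == Decimal('0.3')
```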
On Mon, 26 Mar 2012 22:53:37 +0200
victor.stinner <python-checkins(a)python.org> wrote:
> changeset: 75960:566527ace50b
> user: Victor Stinner <victor.stinner(a)gmail.com>
> date: Mon Mar 26 22:53:14 2012 +0200
> Fix time.steady(strict=True): don't use CLOCK_REALTIME
Victor, could we have a PEP on all this?
I think everyone has lost track of what you are trying to do with these
functions.
On Mar 23, 2012 3:53 PM, "Carl Meyer" <carl(a)oddbird.net> wrote:
> Hi PJ,
> On 03/23/2012 12:35 PM, PJ Eby wrote:
> > AFAICT, virtualenvs are overkill for most development anyway. If you're
> > not using distutils except to install dependencies, then configure
> > distutils to install scripts and libraries to the same directory, and
> > then do all your development in that directory. Presto! You now have a
> > cross-platform "virtualenv". Want the scripts on your path? Add that
> > directory to your path... or if on Windows, don't bother, since the
> > current directory is usually on the path. (In fact, if you're only
> > using easy_install to install your dependencies, you don't even need to
> > edit the distutils configuration, just use "-md targetdir".)
> Creating and using a virtualenv is, in practice, _easier_ than any of
> those alternatives,
Really? As I said, I've never seen the need to try, since just installing
stuff to a directory on PYTHONPATH seems quite easy enough for me.
> that the "isolation from system site-packages" feature is quite popular
> (the outpouring of gratitude when virtualenv went isolated-by-default a
> few months ago was astonishing), and AFAIK none of your alternative
> proposals support that at all.
What is this isolation for, exactly? If you don't want site-packages on
your path, why not use python -S?
(Sure, nobody knows about these things, but surely that's a documentation
problem, not a tooling problem.)
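The `python -S` behaviour referred to here can be checked directly. A small
sketch that only shows the site module being skipped at startup:

```python
import subprocess
import sys

# "python -S" suppresses the implicit "import site" at startup, so
# site-packages directories are never added to sys.path -- the
# "isolation" under discussion.
out = subprocess.run(
    [sys.executable, "-S", "-c", "import sys; print('site' in sys.modules)"],
    capture_output=True, text=True,
)
isolated = out.stdout.strip() == "False"
```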
Don't get me wrong, I don't have any deep objection to virtualenvs, I've
just never seen the *point* (outside of the scenarios I mentioned), and
thus don't see what great advantage will be had by rearranging layouts to
make them shareable across platforms, when "throw stuff in a directory"
seems perfectly serviceable for that use case already. Tools that *don't*
support "just throw it in a directory" as a deployment option are IMO
unpythonic -- practicality beats purity, after all. ;-)
PEP 393 (Flexible String Representation) is, without doubt, one of the
pearls of Python 3.3. In addition to reducing memory consumption, it
also often leads to a corresponding increase in speed. In particular,
string encoding is now 1.5-3x faster.
But decoding is not doing so well. Here are the results of measuring
the performance of decoding a 1000-character string consisting of
characters from different ranges of Unicode, for three versions of
Python -- 2.7.3rc2, 3.2.3rc2+, and 3.3.0a1+. Little-endian 32-bit i686
builds, gcc 4.4.
encoding string 2.7 3.2 3.3
ascii " " * 1000 5.4 5.3 1.2
latin1 " " * 1000 1.8 1.7 1.3
latin1 "\u0080" * 1000 1.7 1.6 1.0
utf-8 " " * 1000 6.7 2.4 2.1
utf-8 "\u0080" * 1000 12.2 11.0 13.0
utf-8 "\u0100" * 1000 12.2 11.1 13.6
utf-8 "\u0800" * 1000 14.7 14.4 17.2
utf-8 "\u8000" * 1000 13.9 13.3 17.1
utf-8 "\U00010000" * 1000 17.3 17.5 21.5
utf-16le " " * 1000 5.5 2.9 6.5
utf-16le "\u0080" * 1000 5.5 2.9 7.4
utf-16le "\u0100" * 1000 5.5 2.9 8.9
utf-16le "\u0800" * 1000 5.5 2.9 8.9
utf-16le "\u8000" * 1000 5.5 7.5 21.3
utf-16le "\U00010000" * 1000 9.6 12.9 30.1
utf-16be " " * 1000 5.5 3.0 9.0
utf-16be "\u0080" * 1000 5.5 3.1 9.8
utf-16be "\u0100" * 1000 5.5 3.1 10.4
utf-16be "\u0800" * 1000 5.5 3.1 10.4
utf-16be "\u8000" * 1000 5.5 6.6 21.2
utf-16be "\U00010000" * 1000 9.6 11.2 28.9
utf-32le " " * 1000 10.2 10.4 15.1
utf-32le "\u0080" * 1000 10.0 10.4 16.5
utf-32le "\u0100" * 1000 10.0 10.4 19.8
utf-32le "\u0800" * 1000 10.0 10.4 19.8
utf-32le "\u8000" * 1000 10.1 10.4 19.8
utf-32le "\U00010000" * 1000 11.7 11.3 20.2
utf-32be " " * 1000 10.0 11.2 15.0
utf-32be "\u0080" * 1000 10.1 11.2 16.4
utf-32be "\u0100" * 1000 10.0 11.2 19.7
utf-32be "\u0800" * 1000 10.1 11.2 19.7
utf-32be "\u8000" * 1000 10.1 11.2 19.7
utf-32be "\U00010000" * 1000 11.7 11.2 20.2
The first oddity is that characters from the second half of the
Latin-1 table decode faster than characters from the first half. I
think the characters from the first half of the table should decode
just as quickly.
The second, sadder oddity is that UTF-16 decoding in 3.3 is much slower
than even in 2.7. Compared with 3.2, decoding is 2-3x slower. This is
a considerable regression. UTF-32 decoding has also slowed down by 1.5-2x.
The fact that in some cases UTF-8 decoding has also slowed is not
surprising. I believe that on a platform with a 64-bit long there may
be other oddities.
How serious a problem is this for the Python 3.3 release? I could work
on the optimization, if someone is not already doing so.
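For anyone wanting to reproduce the measurements, the shape of the benchmark
is roughly as follows (a sketch only; absolute numbers will differ by machine,
compiler, and Python version from the i686 figures in the table):

```python
import timeit

def decode_time(s, enc, number=10_000):
    # Time decoding only: encode once outside the loop, decode repeatedly.
    data = s.encode(enc)
    return timeit.timeit(lambda: data.decode(enc), number=number)

# Same shape of measurement as the table above, e.g. ASCII vs. a
# BMP character that forces the wider UCS2 representation.
ascii_t = decode_time(" " * 1000, "ascii")
utf16_t = decode_time("\u0800" * 1000, "utf-16-le")
```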