It has been a while since I posted a copy of PEP 1 to the mailing
lists and newsgroups. I've recently done some updating of a few
sections, so in the interest of gaining wider community participation
in the Python development process, I'm posting the latest revision of
PEP 1 here. A version of the PEP is always available on-line at
-------------------- snip snip --------------------
Title: PEP Purpose and Guidelines
Version: $Revision: 1.36 $
Last-Modified: $Date: 2002/07/29 18:34:59 $
Author: Barry A. Warsaw, Jeremy Hylton
Post-History: 21-Mar-2001, 29-Jul-2002
What is a PEP?
PEP stands for Python Enhancement Proposal. A PEP is a design
document providing information to the Python community, or
describing a new feature for Python. The PEP should provide a
concise technical specification of the feature and a rationale for
the feature.
We intend PEPs to be the primary mechanisms for proposing new
features, for collecting community input on an issue, and for
documenting the design decisions that have gone into Python. The
PEP author is responsible for building consensus within the
community and documenting dissenting opinions.
Because the PEPs are maintained as plain text files under CVS
control, their revision history is the historical record of the
feature proposal.
Kinds of PEPs
There are two kinds of PEPs. A standards track PEP describes a
new feature or implementation for Python. An informational PEP
describes a Python design issue, or provides general guidelines or
information to the Python community, but does not propose a new
feature. Informational PEPs do not necessarily represent a Python
community consensus or recommendation, so users and implementors
are free to ignore informational PEPs or follow their advice.
PEP Work Flow
The PEP editor, Barry Warsaw <peps(a)python.org>, assigns numbers
for each PEP and changes its status.
The PEP process begins with a new idea for Python. It is highly
recommended that a single PEP contain a single key proposal or new
idea. The more focussed the PEP, the more successful it tends
to be. The PEP editor reserves the right to reject PEP proposals
if they appear too unfocussed or too broad. If in doubt, split
your PEP into several well-focussed ones.
Each PEP must have a champion -- someone who writes the PEP using
the style and format described below, shepherds the discussions in
the appropriate forums, and attempts to build community consensus
around the idea. The PEP champion (a.k.a. Author) should first
attempt to ascertain whether the idea is PEP-able. Small
enhancements or patches often don't need a PEP and can be injected
into the Python development work flow with a patch submission to
the SourceForge patch manager or feature request tracker.
The PEP champion then emails the PEP editor <peps(a)python.org> with
a proposed title and a rough, but fleshed out, draft of the PEP.
This draft must be written in PEP style as described below.
If the PEP editor approves, he will assign the PEP a number, label
it as standards track or informational, give it status 'draft',
and create and check-in the initial draft of the PEP. The PEP
editor will not unreasonably deny a PEP. Reasons for denying PEP
status include duplication of effort, being technically unsound,
not providing proper motivation or addressing backwards
compatibility, or not in keeping with the Python philosophy. The
BDFL (Benevolent Dictator for Life, Guido van Rossum) can be
consulted during the approval phase, and is the final arbitrator
of the draft's PEP-ability.
If a pre-PEP is rejected, the author may elect to take the pre-PEP
to the comp.lang.python newsgroup (a.k.a. python-list(a)python.org
mailing list) to help flesh it out, gain feedback and consensus
from the community at large, and improve the PEP for
re-submission.
The author of the PEP is then responsible for posting the PEP to
the community forums, and marshaling community support for it. As
updates are necessary, the PEP author can check in new versions if
they have CVS commit permissions, or can email new PEP versions to
the PEP editor for committing.
Standards track PEPs consist of two parts, a design document and
a reference implementation. The PEP should be reviewed and
accepted before a reference implementation is begun, unless a
reference implementation will aid people in studying the PEP.
Standards Track PEPs must include an implementation - in the form
of code, patch, or URL to same - before it can be considered
Final.
PEP authors are responsible for collecting community feedback on a
PEP before submitting it for review. A PEP that has not been
discussed on python-list(a)python.org and/or python-dev(a)python.org
will not be accepted. However, wherever possible, long open-ended
discussions on public mailing lists should be avoided. Strategies
to keep the discussions efficient include setting up a separate
SIG mailing list for the topic, having the PEP author accept
private comments in the early design phases, etc. PEP authors
should use their discretion here.
Once the authors have completed a PEP, they must inform the PEP
editor that it is ready for review. PEPs are reviewed by the BDFL
and his chosen consultants, who may accept or reject a PEP or send
it back to the author(s) for revision.
Once a PEP has been accepted, the reference implementation must be
completed. When the reference implementation is complete and
accepted by the BDFL, the status will be changed to `Final.'
A PEP can also be assigned status `Deferred.' The PEP author or
editor can assign the PEP this status when no progress is being
made on the PEP. Once a PEP is deferred, the PEP editor can
re-assign it to draft status.
A PEP can also be `Rejected'. Perhaps after all is said and done
it was not a good idea. It is still important to have a record of
this fact.
PEPs can also be replaced by a different PEP, rendering the
original obsolete. This is intended for Informational PEPs, where
version 2 of an API can replace version 1.
PEP work flow is as follows:
Draft -> Accepted -> Final -> Replaced
Some informational PEPs may also have a status of `Active' if they
are never meant to be completed. E.g. PEP 1.
What belongs in a successful PEP?
Each PEP should have the following parts:
1. Preamble -- RFC822 style headers containing meta-data about the
PEP, including the PEP number, a short descriptive title
(limited to a maximum of 44 characters), the names, and
optionally the contact info for each author, etc.
2. Abstract -- a short (~200 word) description of the technical
issue being addressed.
3. Copyright/public domain -- Each PEP must either be explicitly
labelled as placed in the public domain (see this PEP as an
example) or licensed under the Open Publication License.
4. Specification -- The technical specification should describe
the syntax and semantics of any new language feature. The
specification should be detailed enough to allow competing,
interoperable implementations for any of the current Python
platforms (CPython, JPython, Python .NET).
5. Motivation -- The motivation is critical for PEPs that want to
change the Python language. It should clearly explain why the
existing language specification is inadequate to address the
problem that the PEP solves. PEP submissions without
sufficient motivation may be rejected outright.
6. Rationale -- The rationale fleshes out the specification by
describing what motivated the design and why particular design
decisions were made. It should describe alternate designs that
were considered and related work, e.g. how the feature is
supported in other languages.
The rationale should provide evidence of consensus within the
community and discuss important objections or concerns raised
7. Backwards Compatibility -- All PEPs that introduce backwards
incompatibilities must include a section describing these
incompatibilities and their severity. The PEP must explain how
the author proposes to deal with these incompatibilities. PEP
submissions without a sufficient backwards compatibility
treatise may be rejected outright.
8. Reference Implementation -- The reference implementation must
be completed before any PEP is given status 'Final,' but it
need not be completed before the PEP is accepted. It is better
to finish the specification and rationale first and reach
consensus on it before writing code.
The final implementation must include test code and
documentation appropriate for either the Python language
reference or the standard library reference.
PEPs are written in plain ASCII text, and should adhere to a
rigid style. There is a Python script that parses this style and
converts the plain text PEP to HTML for viewing on the web.
PEP 9 contains a boilerplate template you can use to get
started writing your PEP.
Each PEP must begin with an RFC822 style header preamble. The
headers must appear in the following order. Headers marked with
`*' are optional and are described below. All other headers are
required.
PEP: <pep number>
Title: <pep title>
Version: <cvs version string>
Last-Modified: <cvs date string>
Author: <list of authors' real names and optionally, email addrs>
* Discussions-To: <email address>
Status: <Draft | Active | Accepted | Deferred | Final | Replaced>
Type: <Informational | Standards Track>
* Requires: <pep numbers>
Created: <date created on, in dd-mmm-yyyy format>
* Python-Version: <version number>
Post-History: <dates of postings to python-list and python-dev>
* Replaces: <pep number>
* Replaced-By: <pep number>
The Author: header lists the names and optionally, the email
addresses of all the authors/owners of the PEP. The format of the
author entry should be
address(a)dom.ain (Random J. User)
if the email address is included, and just
Random J. User
if the address is not given. If there are multiple authors, each
should be on a separate line following RFC 822 continuation line
conventions. Note that personal email addresses in PEPs will be
obscured as a defense against spam harvesters.
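As a quick aside (not part of PEP 1 itself): because the preamble follows RFC 822 conventions, the stdlib email parser can read it, including folded multi-author lines. The PEP number, title, and addresses below are invented purely for illustration:

```python
from email.parser import Parser

# A hypothetical preamble; number, title, and addresses are made up.
preamble = (
    "PEP: 9999\n"
    "Title: An Example PEP\n"
    "Author: Random J. User <address@dom.ain>,\n"
    "    Another A. Author\n"
    "Status: Draft\n"
    "Type: Informational\n"
    "\n"
)

headers = Parser().parsestr(preamble)
print(headers["PEP"])    # -> 9999
# The folded second line is part of the same Author: header:
print("Another A. Author" in headers["Author"])  # -> True
```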
Standards track PEPs must have a Python-Version: header which
indicates the version of Python that the feature will be released
with. Informational PEPs do not need a Python-Version: header.
While a PEP is in private discussions (usually during the initial
Draft phase), a Discussions-To: header will indicate the mailing
list or URL where the PEP is being discussed. No Discussions-To:
header is necessary if the PEP is being discussed privately with
the author, or on the python-list or python-dev email mailing
lists. Note that email addresses in the Discussions-To: header
will not be obscured.
Created: records the date that the PEP was assigned a number,
while Post-History: is used to record the dates of when new
versions of the PEP are posted to python-list and/or python-dev.
Both headers should be in dd-mmm-yyyy format, e.g. 14-Aug-2001.
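For instance, the dd-mmm-yyyy form matches ``strftime``'s ``%d-%b-%Y`` (a small sketch of mine; note ``%b`` is locale-dependent, so this assumes an English/C locale):

```python
from datetime import date

# dd-mmm-yyyy as used in the Created: and Post-History: headers.
# %b is locale-dependent; an English/C locale is assumed here.
d = date(2001, 8, 14)
stamp = d.strftime("%d-%b-%Y")
print(stamp)  # -> 14-Aug-2001
```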
PEPs may have a Requires: header, indicating the PEP numbers that
this PEP depends on.
PEPs may also have a Replaced-By: header indicating that a PEP has
been rendered obsolete by a later document; the value is the
number of the PEP that replaces the current document. The newer
PEP must have a Replaces: header containing the number of the PEP
that it rendered obsolete.
PEP Formatting Requirements
PEP headings must begin in column zero and the initial letter of
each word must be capitalized as in book titles. Acronyms should
be in all capitals. The body of each section must be indented 4
spaces. Code samples inside body sections should be indented a
further 4 spaces, and other indentation can be used as required to
make the text readable. You must use two blank lines between the
last line of a section's body and the next section heading.
You must adhere to the Emacs convention of adding two spaces at
the end of every sentence. You should fill your paragraphs to
column 70, but under no circumstances should your lines extend
past column 79. If your code samples spill over column 79, you
should rewrite them.
Tab characters must never appear in the document at all. A PEP
should include the standard Emacs stanza included by example at
the bottom of this PEP.
A PEP must contain a Copyright section, and it is strongly
recommended to put the PEP in the public domain.
When referencing an external web page in the body of a PEP, you
should include the title of the page in the text, with a
footnote reference to the URL. Do not include the URL in the body
text of the PEP. E.g.
Refer to the Python Language web site  for more details.
When referring to another PEP, include the PEP number in the body
text, such as "PEP 1". The title may optionally appear. Add a
footnote reference that includes the PEP's title and author. It
may optionally include the explicit URL on a separate line, but
only in the References section. Note that the pep2html.py script
will calculate URLs automatically, e.g.:
Refer to PEP 1  for more information about PEP style
 PEP 1, PEP Purpose and Guidelines, Warsaw, Hylton
If you decide to provide an explicit URL for a PEP, please use
this as the URL template:
PEP numbers in URLs must be padded with zeros from the left, so as
to be exactly 4 characters wide; however, PEP numbers in text are
never padded.
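The padding rule is one line of Python (the helper name here is mine, not part of the PEP):

```python
def pad_pep_number(number):
    # PEP numbers in URLs are left-padded with zeros to exactly four
    # characters; numbers in body text are never padded.
    return f"{number:04d}"

print(pad_pep_number(1))    # -> 0001
print(pad_pep_number(447))  # -> 0447
```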
Reporting PEP Bugs, or Submitting PEP Updates
How you report a bug, or submit a PEP update depends on several
factors, such as the maturity of the PEP, the preferences of the
PEP author, and the nature of your comments. For the early draft
stages of the PEP, it's probably best to send your comments and
changes directly to the PEP author. For more mature, or finished
PEPs you may want to submit corrections to the SourceForge bug
manager or better yet, the SourceForge patch manager so that
your changes don't get lost. If the PEP author is a SF developer,
assign the bug/patch to him, otherwise assign it to the PEP
editor.
When in doubt about where to send your changes, please check first
with the PEP author and/or PEP editor.
PEP authors who are also SF committers can update the PEPs
themselves by using "cvs commit" to commit their changes.
Remember to also push the formatted PEP text out to the web by
doing the following:
% python pep2html.py -i NUM
where NUM is the number of the PEP you want to push out. See
% python pep2html.py --help
Transferring PEP Ownership
It occasionally becomes necessary to transfer ownership of PEPs to
a new champion. In general, we'd like to retain the original
author as a co-author of the transferred PEP, but that's really up
to the original author. A good reason to transfer ownership is
because the original author no longer has the time or interest in
updating it or following through with the PEP process, or has
fallen off the face of the 'net (i.e. is unreachable or not
responding to email). A bad reason to transfer ownership is
because you don't agree with the direction of the PEP. We try to
build consensus around a PEP, but if that's not possible, you can
always submit a competing PEP.
If you are interested in assuming ownership of a PEP, send a message
asking to take over, addressed to both the original author and the
PEP editor <peps(a)python.org>. If the original author doesn't
respond to email in a timely manner, the PEP editor will make a
unilateral decision (it's not like such decisions can't be
reversed :).
References and Footnotes
 This historical record is available by the normal CVS commands
for retrieving older revisions. For those without direct access
to the CVS tree, you can browse the current and past PEP revisions
via the SourceForge web site at
 The script referred to here is pep2html.py, which lives in
the same directory in the CVS tree as the PEPs themselves.
Try "pep2html.py --help" for details.
The URL for viewing PEPs on the web is
 PEP 9, Sample PEP Template
This document has been placed in the public domain.
Just so people aren't caught unawares, it is very unlikely that I will have
time to be the final editor on "What's New for 3.5" the way I was for 3.3 and
3.4. I've tried to encourage people to keep What's New up to date, but
*someone* should make a final editing pass. Ideally they'd do at least the
research Serhiy did last year on checking that there's a mention for all of the
versionadded and versionchanged 3.5 entries in the docs. Even better would be to
review the NEWS and/or commit history...but *that* is a really big job these
days.
On 21 July 2015 at 19:40, Nick Coghlan <ncoghlan(a)gmail.com> wrote:
> All of this is why the chart that I believe should be worrying people
> is the topmost one on this page:
> Both the number of open issues and the number of open issues with
> patches are steadily trending upwards. That means the bottleneck in
> the current process *isn't* getting patches written in the first
> place, it's getting them up to the appropriate standards and applied.
> Yet the answer to the problem isn't a simple "recruit more core
> developers", as the existing core developers are *also* the bottleneck
> in the review and mentoring process for *new* core developers.
Those charts don't show patches in 'commit-review' -
There are only 45 of those patches.
AIUI - and I'm very new to core here - anyone in triagers can get
patches up to commit-review status.
I think we should set a goal to keep inventory low here - e.g. review
and either bounce back to patch review, or commit, in less than a
month. Now - a month isn't super low, but we have lots of stuff
greater than a month.
For my part, I'm going to pick up more or less one thing a day and
review it, but I think it would be great if other committers were to
also do this: if we had 5 of us doing 1 a day, I think we'd burn
down this 45 patch backlog rapidly without significant individual
cost. At which point, we can fairly say to folk doing triage that
we're ready for patches :)
Robert Collins <rbtcollins(a)hp.com>
HP Converged Cloud
Another summer with another EuroPython, which means it's time again to try to revive PEP 447…
I’ve just pushed a minor update to the PEP and would like to get some feedback on this, arguably fairly esoteric, PEP.
The PEP proposes to replace direct access to the class __dict__ in object.__getattribute__ and super.__getattribute__ by calls to a new special method to give the metaclass more control over attribute lookup, especially for access using a super() object. This is needed for classes that cannot store (all) descriptors in the class dict for some reason, see the PEP text for a real-world example of that.
The PEP text (with an outdated section with benchmarks removed):
Title: Add __getdescriptor__ method to metaclass
Author: Ronald Oussoren <ronaldoussoren(a)mac.com>
Type: Standards Track
Post-History: 2-Jul-2013, 15-Jul-2013, 29-Jul-2013, 22-Jul-2015
Currently ``object.__getattribute__`` and ``super.__getattribute__`` peek
in the ``__dict__`` of classes on the MRO for a class when looking for
an attribute. This PEP adds an optional ``__getdescriptor__`` method to
a metaclass that replaces this behavior and gives more control over attribute
lookup, especially when using a `super`_ object.
That is, the MRO walking loop in ``_PyType_Lookup`` and
``super.__getattribute__`` gets changed from::

    def lookup(mro_list, name):
        for cls in mro_list:
            if name in cls.__dict__:
                return cls.__dict__[name]

        return NotImplemented

to::

    def lookup(mro_list, name):
        for cls in mro_list:
            try:
                return cls.__getdescriptor__(name)
            except AttributeError:
                pass

        return NotImplemented

The default implementation of ``__getdescriptor__`` looks in the class
dictionary::

    def __getdescriptor__(cls, name):
        try:
            return cls.__dict__[name]
        except KeyError:
            raise AttributeError(name) from None
It is currently not possible to influence how the `super class`_ looks
up attributes (that is, ``super.__getattribute__`` unconditionally
peeks in the class ``__dict__``), and that can be problematic for
dynamic classes that can grow new methods on demand.
The ``__getdescriptor__`` method makes it possible to dynamically add
attributes even when looking them up using the `super class`_.
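The limitation can be demonstrated in today's Python. This toy class (my own sketch, not from the PEP) serves a method dynamically via ``__getattr__``, which works for normal attribute access but is invisible to ``super()``:

```python
class Base:
    def __getattr__(self, name):
        # Dynamically provide 'greet' on normal attribute access.
        if name == "greet":
            return lambda: "hello from Base"
        raise AttributeError(name)

class Child(Base):
    def greet(self):
        try:
            # super() only consults the class __dict__ entries along
            # the MRO, so Base's dynamic 'greet' is never found.
            return super().greet()
        except AttributeError:
            return "super() could not find greet"

print(Base().greet())   # -> hello from Base
print(Child().greet())  # -> super() could not find greet
```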
The new method affects ``object.__getattribute__`` (and
`PyObject_GenericGetAttr`_) as well for consistency and to have a single
place to implement dynamic attribute resolution for classes.
The current behavior of ``super.__getattribute__`` causes problems for
classes that are dynamic proxies for other (non-Python) classes or types,
an example of which is `PyObjC`_. PyObjC creates a Python class for every
class in the Objective-C runtime, and looks up methods in the Objective-C
runtime when they are used. This works fine for normal access, but doesn't
work for access with `super`_ objects. Because of this PyObjC currently
includes a custom `super`_ that must be used with its classes, as well as
completely reimplementing `PyObject_GenericGetAttr`_ for normal attribute
access.
The API in this PEP makes it possible to remove the custom `super`_ and
simplifies the implementation because the custom lookup behavior can be
added in a central location.
`PyObjC`_ cannot precalculate the contents of the class ``__dict__``
because Objective-C classes can grow new methods at runtime. Furthermore
Objective-C classes tend to contain a lot of methods while most Python
code will only use a small subset of them; this makes precalculating
the full class ``__dict__`` needlessly expensive.
The superclass attribute lookup hook
Both ``super.__getattribute__`` and ``object.__getattribute__`` (or
`PyObject_GenericGetAttr`_ and in particular ``_PyType_Lookup`` in C code)
walk an object's MRO and currently peek in the class' ``__dict__`` to look up
attributes.
With this proposal both lookup methods no longer peek in the class ``__dict__``
but call the special method ``__getdescriptor__``, which is a slot defined
on the metaclass. The default implementation of that method looks
up the name in the class ``__dict__``, which means that attribute lookup is
unchanged unless a metatype actually defines the new special method.
Aside: Attribute resolution algorithm in Python
The attribute resolution process as implemented by ``object.__getattribute__``
(or ``PyObject_GenericGetAttr`` in CPython's implementation) is fairly
straightforward, but not entirely so without reading C code.
The current CPython implementation of ``object.__getattribute__`` is basically
equivalent to the following (pseudo-) Python code (excluding some
housekeeping and speed tricks)::
    def _PyType_Lookup(tp, name):
        mro = tp.mro()
        assert isinstance(mro, tuple)

        for base in mro:
            assert isinstance(base, type)

            # PEP 447 will change these lines:
            try:
                return base.__dict__[name]
            except KeyError:
                pass

        return None

    class object:
        def __getattribute__(self, name):
            assert isinstance(name, str)

            tp = type(self)
            descr = _PyType_Lookup(tp, name)

            f = None
            if descr is not None:
                f = descr.__get__
                if f is not None and descr.__set__ is not None:
                    # Data descriptor
                    return f(descr, self, type(self))

            dict = self.__dict__
            if dict is not None:
                try:
                    return dict[name]
                except KeyError:
                    pass

            if f is not None:
                # Non-data descriptor
                return f(descr, self, type(self))

            if descr is not None:
                # Regular class attribute
                return descr

            raise AttributeError(name)

    class super:
        def __getattribute__(self, name):
            assert isinstance(name, unicode)

            if name != '__class__':
                starttype = self.__self_type__
                mro = starttype.mro()

                try:
                    idx = mro.index(self.__thisclass__)
                except ValueError:
                    pass
                else:
                    for base in mro[idx+1:]:
                        # PEP 447 will change these lines:
                        try:
                            descr = base.__dict__[name]
                        except KeyError:
                            continue

                        f = descr.__get__
                        if f is not None:
                            return f(descr,
                                None if (self.__self__ is self.__self_type__)
                                     else self.__self__,
                                starttype)
                        else:
                            return descr

            return object.__getattribute__(self, name)
This PEP should change the dict lookup at the lines starting at "# PEP 447" with
a method call to perform the actual lookup, making it possible to affect that
lookup both for normal attribute access and access through the `super proxy`_.
Note that specific classes can already completely override the default
behaviour by implementing their own ``__getattribute__`` slot (with or without
calling the super class implementation).
In Python code
A meta type can define a method ``__getdescriptor__`` that is called during
attribute resolution by both ``super.__getattribute__`` and
``object.__getattribute__``::

    class type:
        def __getdescriptor__(cls, name):
            try:
                return cls.__dict__[name]
            except KeyError:
                raise AttributeError(name) from None
The ``__getdescriptor__`` method has as its arguments a class (which is an
instance of the meta type) and the name of the attribute that is looked up.
It should return the value of the attribute without invoking descriptors,
and should raise `AttributeError`_ when the name cannot be found.
The `type`_ class provides a default implementation for ``__getdescriptor__``,
that looks up the name in the class dictionary.
The code below implements a silly metaclass that redirects attribute lookup to
uppercase versions of names::

    class UpperCaseAccess (type):
        def __getdescriptor__(cls, name):
            try:
                return cls.__dict__[name.upper()]
            except KeyError:
                raise AttributeError(name) from None

    class SillyObject (metaclass=UpperCaseAccess):
        def m(self):
            return 42

        def M(self):
            return "fortytwo"

    obj = SillyObject()
    assert obj.m() == "fortytwo"
As mentioned earlier in this PEP a more realistic use case of this
functionality is a ``__getdescriptor__`` method that dynamically populates the
class ``__dict__`` based on attribute access, primarily when it is not
possible to reliably keep the class dict in sync with its source, for example
because the source used to populate ``__dict__`` is dynamic as well and does
not have triggers that can be used to detect changes to that source.
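As a rough sketch of the kind of workaround available today (my own code, not from the PEP): an instance-level ``__getattr__`` can lazily copy entries from a dynamic source into the class ``__dict__``, after which even ``super()`` can see them, but this only fires when normal instance lookup has already failed:

```python
class LazyMethods:
    # Stand-in for a dynamic method source (e.g. an external runtime);
    # the name '_source' and its contents are invented for this sketch.
    _source = {"shout": lambda self: "HELLO"}

    def __getattr__(self, name):
        try:
            func = type(self)._source[name]
        except KeyError:
            raise AttributeError(name) from None
        # Cache the method in the class __dict__ for future lookups.
        setattr(type(self), name, func)
        return getattr(self, name)

obj = LazyMethods()
print(obj.shout())                   # -> HELLO
print("shout" in vars(LazyMethods))  # -> True (now cached in the class)
```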
An example of that are the class bridges in PyObjC: the class bridge is a
Python object (class) that represents an Objective-C class and conceptually
has a Python method for every Objective-C method in the Objective-C class.
As with Python it is possible to add new methods to an Objective-C class, or
replace existing ones, and there are no callbacks that can be used to detect
this.
In C code
A new slot ``tp_getdescriptor`` is added to the ``PyTypeObject`` struct; this
slot corresponds to the ``__getdescriptor__`` method on `type`_.
The slot has the following prototype::
PyObject* (*getdescriptorfunc)(PyTypeObject* cls, PyObject* name);
This method should look up *name* in the namespace of *cls*, without looking at
superclasses, and should not invoke descriptors. The method returns ``NULL``
without setting an exception when the *name* cannot be found, and returns a
new reference otherwise (not a borrowed reference).
Use of this hook by the interpreter
The new method is required for metatypes and as such is defined on `type`_.
Both ``super.__getattribute__`` and `PyObject_GenericGetAttr`_
(through ``_PyType_Lookup``) use this ``__getdescriptor__`` method when
walking the MRO.
Other changes to the implementation
The change for `PyObject_GenericGetAttr`_ will be done by changing the private
function ``_PyType_Lookup``. This currently returns a borrowed reference, but
must return a new reference when the ``__getdescriptor__`` method is present.
Because of this ``_PyType_Lookup`` will be renamed to ``_PyType_LookupName``,
this will cause compile-time errors for all out-of-tree users of this
private API.
The attribute lookup cache in ``Objects/typeobject.c`` is disabled for classes
that have a metaclass that overrides ``__getdescriptor__``, because using the
cache might not be valid for such classes.
Impact of this PEP on introspection
Use of the method introduced in this PEP can affect introspection of classes
with a metaclass that uses a custom ``__getdescriptor__`` method. This section
lists those changes.
The items listed below are only affected by custom ``__getdescriptor__``
methods; the default implementation on `type`_ won't cause problems
because it still only uses the class ``__dict__`` and won't cause
visible changes to the behaviour of ``object.__getattribute__``.
* ``dir`` might not show all attributes
As with a custom ``__getattribute__`` method `dir()`_ might not see all
(instance) attributes when using the ``__getdescriptor__()`` method to
dynamically resolve attributes.
The solution for that is quite simple: classes using ``__getdescriptor__``
should also implement `__dir__()`_ if they want full support for the builtin
`dir()`_ function.
* ``inspect.getattr_static`` might not show all attributes
The function ``inspect.getattr_static`` intentionally does not invoke
``__getattribute__`` and descriptors to avoid invoking user code during
introspection with this function. The ``__getdescriptor__`` method will also
be ignored and is another way in which the result of ``inspect.getattr_static``
can be different from that of ``builtins.getattr``.
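A small illustration (mine, not from the PEP) of that existing difference, using ``__getattr__`` in place of ``__getdescriptor__``:

```python
import inspect

class Proxy:
    def __getattr__(self, name):
        # A dynamically resolved attribute, analogous to what a
        # __getdescriptor__ implementation might serve.
        if name == "dynamic":
            return 42
        raise AttributeError(name)

p = Proxy()
print(getattr(p, "dynamic"))  # -> 42
try:
    inspect.getattr_static(p, "dynamic")
    print("found")
except AttributeError:
    # getattr_static skips __getattr__ and only inspects __dict__s.
    print("getattr_static does not see it")
```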
* ``inspect.getmembers`` and ``inspect.get_class_attrs``
Both of these functions directly access the class __dict__ of classes along
the MRO, and hence can be affected by a custom ``__getdescriptor__`` method.
**TODO**: I haven't fully worked out what the impact of this is, and if there
are mitigations for those using either updates to these functions, or
additional methods that users should implement to be fully compatible with
this PEP.
One possible mitigation is to have a custom ``__getattribute__`` for these
classes that fills ``__dict__`` before returning and then defers to the
default implementation for other attributes.
* Direct introspection of the class ``__dict__``
Any code that directly accesses the class ``__dict__`` for introspection
can be affected by a custom ``__getdescriptor__`` method.
**WARNING**: The benchmark results in this section are old, and will be updated
when I've ported the patch to the current trunk. I don't expect significant
changes to the results in this section.
An earlier version of this PEP used the following static method on classes::
def __getattribute_super__(cls, name, object, owner): pass
This method performed name lookup as well as invoking descriptors and was
necessarily limited to working only with ``super.__getattribute__``.
It would be nice to avoid adding a new slot, thus keeping the API simpler and
easier to understand. A comment on `Issue 18181`_ asked about reusing the
``tp_getattro`` slot, that is, super could call the ``tp_getattro`` slot of all
classes along the MRO.
That won't work because ``tp_getattro`` will look in the instance
``__dict__`` before it tries to resolve attributes using classes in the MRO.
This would mean that using ``tp_getattro`` instead of peeking the class
dictionaries changes the semantics of the `super class`_.
Alternate placement of the new method
This PEP proposes to add ``__getdescriptor__`` as a method on the metaclass.
An alternative would be to add it as a class method on the class itself
(similar to how ``__new__`` is a `staticmethod`_ of the class and not a method
of the metaclass).
The two are functionally equivalent, and there's something to be said for
not requiring the use of a metaclass.
* `Issue 18181`_ contains an out-of-date prototype implementation
This document has been placed in the public domain.
.. _`Issue 18181`: http://bugs.python.org/issue18181
.. _`super class`: http://docs.python.org/3/library/functions.html#super
.. _`super proxy`: http://docs.python.org/3/library/functions.html#super
.. _`super`: http://docs.python.org/3/library/functions.html#super
.. _`dir()`: http://docs.python.org/3/library/functions.html#dir
.. _`staticmethod`: http://docs.python.org/3/library/functions.html#staticmethod
.. _`__dir__()`: https://docs.python.org/3/reference/datamodel.html#object.__dir__
.. _`NotImplemented`: http://docs.python.org/3/library/constants.html#NotImplemented
.. _`PyObject_GenericGetAttr`: http://docs.python.org/3/c-api/object.html#PyObject_GenericGetAttr
.. _`type`: http://docs.python.org/3/library/functions.html#type
.. _`AttributeError`: http://docs.python.org/3/library/exceptions.html#AttributeError
.. _`PyObjC`: http://pyobjc.sourceforge.net/
.. _`classmethod`: http://docs.python.org/3/library/functions.html#classmethod
There are over 400 issues on the bug tracker that have not had a
response to the initial message, roughly half of these within the last
eight months alone. Is there a (relatively) simple way that we can
share these out between us to sort those that are likely to need dealing
with in the medium to longer term, from the simple short term ones, e.g.
very easy typo fixes?
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.
On 15-04-15, Akira Li <4kir4.1i(a)gmail.com> wrote:
> Isaac Schwabacher <ischwabacher(a)wisc.edu> writes:
> > On 15-04-15, Akira Li <4kir4.1i(a)gmail.com> wrote:
> >> Isaac Schwabacher <ischwabacher(a)wisc.edu> writes:
> >> > ...
> >> >
> >> > I know that you can do datetime.now(tz), and you can do datetime(2013,
> >> > 11, 3, 1, 30, tzinfo=zoneinfo('America/Chicago')), but not being able
> >> > to add a time zone to an existing naive datetime is painful (and
> >> > strptime doesn't even let you pass in a time zone).
> >> `.now(tz)` is correct. `datetime(..., tzinfo=tz)` is wrong: if tz is a
> >> pytz timezone then you may get a wrong tzinfo (LMT); you should use
> >> `tz.localize(naive_dt, is_dst=False|True|None)` instead.
> > The whole point of this thread is to finalize PEP 431, which fixes the
> > problem for which `localize()` and `normalize()` are workarounds. When
> > this is done, `datetime(..., tzinfo=tz)` will be correct.
> > ijs
> The input time is ambiguous. Even if we assume PEP 431 is implemented in
> some form, your code is still missing the isdst parameter (or its
> analog). PEP 431 won't fix it; it can't resolve the ambiguity by
> itself. Notice the is_dst parameter in the `tz.localize()` call (current API).
...yeah, I forgot to throw that in there. It was supposed to be there all along. Nothing to see here, move along.
> .now(tz) works even during end-of-DST transitions (current API) when the
> local time is ambiguous.
I know that. That's what I was complaining about: I was trying to explain that astimezone() would be inadequate even after the PEP was implemented, because it couldn't turn naive datetimes into aware ones, and people were giving examples that started with aware datetimes generated by now(tz), which sidestepped the point I was trying to make. But it looks like astimezone() is going to grow an is_dst parameter, and everything will be OK.
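For readers following along later: the ambiguity being argued about above can be sketched with the stdlib zoneinfo module (PEP 615, Python 3.9) and the fold attribute (PEP 495, Python 3.6), which ended up playing the role of the is_dst flag discussed in this thread. This is a retrospective illustration, not what the participants had available at the time:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

tz = ZoneInfo("America/Chicago")

# 2013-11-03 01:30 local time occurred twice: clocks fell back from
# 02:00 CDT to 01:00 CST, so 01:30 exists under both UTC offsets.
earlier = datetime(2013, 11, 3, 1, 30, tzinfo=tz, fold=0)  # first pass, CDT
later = datetime(2013, 11, 3, 1, 30, tzinfo=tz, fold=1)    # second pass, CST

assert earlier.utcoffset() == timedelta(hours=-5)  # CDT, UTC-5
assert later.utcoffset() == timedelta(hours=-6)    # CST, UTC-6
```

With fold, attaching a tzinfo directly to a naive datetime (the pattern criticized above when done with pytz, which could hand back an LMT offset) became well-defined, which is essentially the outcome this thread was pushing toward.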
On 20 July 2015 at 22:34, Ben Finney <ben+python(a)benfinney.id.au> wrote:
> Paul Moore <p.f.moore(a)gmail.com> writes:
>> Again, I'm sorry to pick on one sentence out of context, but it cut
>> straight to my biggest fear when doing a commit (on any project) -
>> what if, after all the worrying and consideration I put into doing
>> this commit, people disagree with me (or worse still, I made a
>> mistake)? Will I be able to justify what I decided?
> That seems quite healthy to me. On a collaborative project with effects
> far beyond oneself, yes, any change *should* be able to be justified
> when challenged.
No, that's not how this works: if folks think that being a
Python user, or even a CPython core developer, entitles them
to micromanage core developers by demanding extensive
explanations for any arbitrary commit they choose, they're thoroughly
mistaken. Only Guido has that privilege, and one of the reasons he is
so respected is his willingness to trust the experience and
expertise of others, exercising his absolute authority only rarely.
Folks are granted core committer privileges because we *trust their
instincts*. We trust them to know when they're acting within the
limits of their own expertise and experience, and we trust them to
know when it would be beneficial to seek feedback from a wider
audience before making up their minds.
There are *many* cases where we *don't know* up front what the right
answer is, so we'll actively seek consultation, whether that's through
a review request on the issue tracker, a focused python-dev
discussion, a more speculative discussion on python-ideas, or engaging
in the full Python Enhancement Proposal process.
There are also cases where we'll decide "it seems plausible that this
might be a good idea, so let's try it out and see what happens in
practice rather than continuing to speculate" - only ever doing things
that you're already 100% certain are a good idea is a recipe for
stagnation and decline (hence the "Now is better than never" line in
the Zen). A feedback cycle of a few years is a relatively short time
in programming language development, so if we discover with the
benefit of hindsight that something that seemed plausible really was
in fact a genuinely bad idea, that's why we have a deprecation process
(as well as the ability to mark new APIs as provisional). If we want a
faster feedback cycle measured in days or weeks or months rather than
years, then we'll find a way to make the idea available for
experimentation via PyPI or at least as a cookbook recipe or in a
public source repo.
But the authority and responsibility to make changes, to decide what
constitutes a reasonable risk, to decide which updates are appropriate
to send out to tens of millions of Python users worldwide remains with
the core developers.
Some of those decisions will be akin to deciding to paint a bikeshed
blue instead of yellow (or green, or red, or chartreuse, or ...).
Others will be about mitigating the observed negative consequences of
a previous design decision while retaining the positive aspects.
Those kinds of design decision are still more art than science -
that's why our solution to them is to attempt to recruit people with
the interest, ability and time needed to make them well, and then
largely leave them to it. Explaining design decisions after the fact
isn't about *defending* those decisions, it's about attempting to
convey useful design knowledge, in order to help build new core
contributors (and also because explaining something to someone else is
a good way to understand it better yourself).
All of this is why the chart that I believe should be worrying people
is the topmost one on this page:
Both the number of open issues and the number of open issues with
patches are steadily trending upwards. That means the bottleneck in
the current process *isn't* getting patches written in the first
place, it's getting them up to the appropriate standards and applied.
Yet the answer to the problem isn't a simple "recruit more core
developers", as the existing core developers are *also* the bottleneck
in the review and mentoring process for *new* core developers.
In my view, those stats are a useful tool we can use to ask ourselves
"Am I actually helping with this contribution, or at the very least,
not causing harm?":
* participating in the core-mentorship list, and actively looking for
ways to improve the effectiveness of the mentorship program (that
don't just queue up more work behind the existing core developer
bottleneck) should help make those stats better
* sponsoring or volunteering to help with the upstream projects our
core workflow tools are based on (Roundup, Buildbot) or with updating
our specific installations of them should help make those stats better
(and also offers faster iteration cycles than core development itself)
* helping core developers that have time to work on "CPython in
general" rather than specific projects of interest to them to focus
their attention more effectively may help make those stats better (and
it would be even better if we could credit such triaging efforts)
* exploring ways to extract contribution metrics from Roundup so we
can have a more reliable detection mechanism for active and consistent
contributors than the "Mark 1 core developer memory, aka the
notoriously unreliable human brain" may help make those stats better
* helping out the upstream projects for one or both of the
forge.python.org proposals (Kallithea, Phabricator) that are designed
(at least in part) to remove core developers from the critical path
for management of the support repos like the developer guide and
Python Enhancement Proposals may help make those stats better
* getting more core contributors into a position where at least some
of their work time can be spent on facilitating upstream contributions
may help make those stats better (shifting at least some core
contribution activity to paid roles is also one of the keys to
countering the "free time for volunteer activities is inequitably
distributed" problem)
* alienating veteran core developers and encouraging them to spend
their time elsewhere will contribute to making those stats worse
* discouraging newly minted core developers from exercising their
granted authority will contribute to making those stats worse
* discouraging veteran core developers from recruiting new core
developers by contributing to creating a hostile environment on the
core mailing lists will *definitely* contribute to making those stats
worse
Make no mistake, sustainable open source development is a *genuinely
hard problem*. We're in a much better position than most projects
(being one of the top 5 programming languages in the world has its
benefits), but we're still effectively running a giant multinational
collaborative software development project with close to zero formal
management support. While their are good examples we can (and are)
learning from, improving the support structures for an already wildly
successful open source project without alienating existing
contributors isn't a task that comes with an instruction manual :)
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia
This is the first message from Intel's language optimization team.
We would like to provide the Python internals developer community
with a daily service that monitors the latest committed patches
for performance regressions against well-known workloads.
Our aim is to run a multitude of workloads as well as real-life scenarios
which the community considers relevant. The service will send daily bulletins
containing the latest measurements: day-over-day variations and variations
against the latest stable release, run on our Intel-enabled servers.
The community's feedback is very important for us. For any questions,
comments or suggestions you can also contact us on our mailing list
lp(a)lists.01.org. You can also check our website: https://www.01.org/lp
Results for project python_default-nightly, build date 2015-07-24 09:02:02
revision date: 2015-07-24 07:43:44
cpu: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz 2x18 cores, stepping 2, LLC 45 MB
mem: 128 GB
os: CentOS 7.1
kernel: Linux 3.10.0-229.4.2.el7.x86_64
Note: Baseline results were generated using release v3.4.3, with hash
b4cbecbc0781e89a309d03b60a1f75f8499250e6 from 2015-02-25 12:15:33+00:00
     benchmark      unit   change since   change since
                           last run       v3.4.3
:-)  django_v2      sec     1.12735%       7.47953%
:-(  pybench        sec    -0.53822%      -2.40216%
:-(  regex_v8       sec     0.61774%      -2.32010%
:-|  nbody          sec     1.75860%      -0.76206%
:-)  json_dump_v2   sec     2.13422%      -0.56930%
Our lab does a nightly source pull and build of the Python project and measures
performance changes against the previous stable version and the previous nightly
measurement. This is provided as a service to the community so that quality
issues with current hardware can be identified quickly.
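As a rough sketch of how a bulletin entry like the ones above might be derived (the exact formula and sign convention Intel uses are not stated here; this assumes a positive percentage means an improvement, i.e. reduced runtime, which matches the smiley annotations in the table):

```python
def percent_change(current_sec: float, reference_sec: float) -> float:
    """Percent change of a benchmark runtime relative to a reference run.

    Assumed convention (hypothetical, inferred from the table above):
    positive = faster than the reference, negative = slower.
    """
    return (reference_sec - current_sec) / reference_sec * 100.0

# e.g. a nightly run of 9.5 s against a 10.0 s baseline:
print(f"{percent_change(9.5, 10.0):+.5f}%")  # prints +5.00000%
```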
Intel technologies' features and benefits depend on system configuration and may
require enabled hardware, software or service activation. Performance varies
depending on system configuration. No license (express or implied, by estoppel
or otherwise) to any intellectual property rights is granted by this document.
Intel disclaims all express and implied warranties, including without
limitation, the implied warranties of merchantability, fitness for a particular
purpose, and non-infringement, as well as any warranty arising from course of
performance, course of dealing, or usage in trade. This document may contain
information on products, services and/or processes in development. Contact your
Intel representative to obtain the latest forecast, schedule, specifications and
roadmaps. The products and services described may contain defects or errors
known as errata which may cause deviations from published specifications.
Current characterized errata are available on request.
(C) 2015 Intel Corporation.
In the future feel free to redirect date- and time-related discussions to
the SIG, since datetime stuff requires such specific domain knowledge that
most of us lack it.
I am also looking for people to be admins of the list. If you're willing to
admin the list please let me know and I will add you.