I see more and more proposed changes to fix some parts of the code to
"partially" support a platform.
I remember that 5 years ago, the CPython code was "full" of #ifdef and
other conditional code to support various platforms, and I was happy
when we succeeded in removing support for all these old platforms like
OS/2, DOS or VMS.
PEP 11 has a nice description of how to get *full* support for a new platform:
But the question here is more about "partial" support.
While such changes are usually short, I dislike applying them to Python
2.7 and/or Python 3.6 until the platform is fully supported. I prefer to
first see a platform fully supported, to see how many changes are
required and to make sure that someone gets involved to maintain the
code (handle new issues).
Examples of platforms: MinGW, Cygwin, OpenBSD, NetBSD, VxWorks RTOS, etc.
By the way, is there an exhaustive list of platforms "officially"
supported by CPython?
While discussions on the typing module are still hot, what do you
think of allowing annotations in the standard library, but limited
to a few basic types:
* bool, int, float, complex
* bytes, bytearray
I'm not sure about container types like tuple, list, dict, set,
frozenset. If we allow them, some developers may want to describe the
container content, like List[int] or Dict[int, str].
My intent is to enhance the builtin documentation of functions of the
standard library, including functions implemented in C. For example,
it's well known that id(obj) returns an integer, so I expect a
signature like:
id(obj) -> int
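As a rough sketch of the distinction being proposed (the function names below are illustrative, not actual stdlib candidates): an annotation limited to builtin types needs no import, while a container annotation pulls in the typing module.

```python
def double(x: int) -> int:
    # Allowed under this proposal: the annotation uses only a builtin type.
    return x * 2

# By contrast, describing container contents needs the typing module:
from typing import List

def repeat(item: str, n: int) -> List[str]:
    return [item] * n

# Annotations are introspectable, so tools and docs can show "-> int".
print(double.__annotations__["return"])  # <class 'int'>
```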
Context: Tal Einat proposed a change to convert the select module to
Argument Clinic.
The docstring currently documents the return value like this:
haypo@selma$ pydoc3 select.epoll.fileno|cat
Help on method_descriptor in select.epoll:
select.epoll.fileno = fileno(...)
fileno() -> int
Return the epoll control file descriptor.
I'm talking about "-> int", a nice way to document that the function
returns an integer.
Problem: even if Argument Clinic supports "return converters" like
"int", it doesn't generate a docstring with the return type. So I
created the issue:
"Support return annotation in signature for Argument Clinic"
But now I am confused between docstrings, Argument Clinic, "return
converters", "signature" and "annotations"...
R. David Murray reminded me the current policy:
"the python standard library will not include type annotations, that
those are provided by typeshed."
While we are discussing removing (or not) typing from the stdlib (!),
I propose to allow annotations in the stdlib, *but* only limited to
the most basic types.
Such annotations *shouldn't* have a significant impact on performance
(startup time) or memory footprint.
The expected drawback is that users can be surprised that some
functions get annotations, while others don't. For example,
os.fspath() requires a complex annotation which needs the typing
module and is currently done in typeshed, whereas id(obj) can get its
return type documented ("-> int").
What do you think?
On Nov 5, 2017 2:41 PM, "Paul Ganssle" <pganssle(a)gmail.com> wrote:
I think the question of whether any specific implementation of dict could
be made faster for a given architecture or even that the trade-offs made by
CPython are generally the right ones is kinda beside the point. It's
certainly feasible that an implementation that does not preserve ordering
could be better for some implementation of Python, and the question is
really how much is gained by changing the language semantics in such a way
as to cut off that possibility.
The language definition is not nothing, but I think it's easy to
overestimate its importance. CPython does in practice provide ordering
guarantees for dicts, and this solves a whole bunch of pain points: it
makes json roundtripping work better, it gives ordered kwargs, it makes it
possible for metaclasses to see the order class items were defined, etc.
And we got all these goodies for better-than-free: the new dict is faster
and uses less memory. So it seems very unlikely that CPython is going to
revert this change in the foreseeable future, and that means people will
write code that depends on this, and that means in practice reverting it
will become impossible due to backcompat and it will be important for other
interpreters to implement, regardless of what the language definition says.
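The pain points mentioned above are easy to see directly; a quick sketch against CPython 3.6+/3.7 behavior:

```python
import json

# JSON round-tripping: keys come back in the order the document used.
doc = '{"name": "example", "version": 2, "tags": ["a", "b"]}'
data = json.loads(doc)
assert list(data) == ["name", "version", "tags"]
assert json.dumps(data) == doc

# Ordered kwargs (PEP 468): functions see keywords in call order.
def order(**kwargs):
    return list(kwargs)

assert order(b=1, a=2, c=3) == ["b", "a", "c"]
```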
That said, there are real benefits to putting this in the spec. Given that
we're not going to get rid of it, we might as well reward the minority of
programmers who are conscientious about following the spec by letting them
use it too. And there were multiple PEPs that went away when this was
merged; no one wants to resurrect them just for hypothetical future
implementations that may never exist. And putting it in the spec will mean
that we can stop having this argument over and over with the same points
rehashed for those who missed the last one. (This isn't aimed at you or
anything; it's not your fault you don't know all these arguments off the
top of your head, because how would you? But it is a reality of mailing
list dynamics that rehashing this kind of thing sucks up energy without
producing much in return.)

MicroPython deviates from the language spec in lots of ways. Hopefully this
won't need to be another one, but it won't be the end of the world if it is.
Currently the implementations of the re and curses related modules are
scattered over several files:
I want to make the re module a package, and move sre_*.py files into it.
Maybe later I'll add an sre_optimize.py file to separate
optimization from parsing and compiling to an internal code. The
original sre_*.py files will be left for compatibility for a long time,
but they will just import their content from the re package.
The _sre implementation will be moved into the Modules/_sre/ directory.
This will just put the files in one place and decrease the number of
files in the Modules/ directory.
The implementations of the _curses and _curses_panel modules, together
with the common header file, will be moved into the Modules/_curses/
directory. Excluding py_curses.h from the set of global headers will
speed up rebuilding when modifying just the _curses
implementation (I have done this a lot recently). In the future, the
implementations of the menu and forms extensions will be added (the patch
for menu was provided years ago). Since _cursesmodule.c is one of the
largest files (it defines hundreds of functions), it may be worth
extracting the implementation of the _curses.window class into a separate
file. And I want to implement support for "soft function-key labels".
All this will increase the number of _curses related files to 7.
curses already is a package.
Since virtually all changes to these files in recent years have been
made by me, I don't think this will harm other core developers. Are
there any objections?
I rejected my own PEP 511 "API for code transformers" that I wrote in 2016.
This PEP was rejected by its author.
This PEP was seen as blessing new Python-like programming languages
which are close to, but incompatible with, the regular Python language. It
was decided not to promote syntaxes incompatible with Python.
This PEP was also seen as a nice tool to experiment with new Python
features, but it is already possible to experiment with them without the
PEP, using only importlib hooks. If a feature becomes useful, it should
be made directly part of Python, instead of depending on a third-party
Python module.
Finally, this PEP was driven by the FAT Python optimization project,
which was abandoned in 2016, since it was not possible to show any
significant speedup, but also because of the lack of time to implement
the most advanced and complex optimizations.
While discussions on this PEP are not over on python-ideas, I proposed
this PEP directly on python-dev since I consider that my PEP already
summarizes current and past proposed alternatives.
* Add time.time_ns(): system clock with nanosecond resolution
* Why not picoseconds?
PEP 564 will shortly be online at:
Title: Add new time functions with nanosecond resolution
Author: Victor Stinner <victor.stinner(a)gmail.com>
Type: Standards Track
Add five new functions to the ``time`` module: ``time_ns()``,
``perf_counter_ns()``, ``monotonic_ns()``, ``clock_gettime_ns()`` and
``clock_settime_ns()``. They are similar to the functions without the
``_ns`` suffix, but have nanosecond resolution: they use a number of
nanoseconds as a Python int.
The best ``time.time_ns()`` resolution measured in Python is 3 times
better than the ``time.time()`` resolution, on both Linux and Windows.
Float type limited to 104 days
The clock resolution of desktop and laptop computers is getting closer
to nanosecond resolution. More and more clocks have a frequency in MHz,
up to GHz for the CPU TSC clock.
The Python ``time.time()`` function returns the current time as a
floating-point number, which is usually a 64-bit binary floating-point
number (in the IEEE 754 format).
The problem is that the float type starts to lose nanoseconds after 104
days. Converting nanoseconds (``int``) to seconds (``float``) and then
back to nanoseconds (``int``) shows when conversions lose precision::
# no precision loss
>>> x = 2 ** 52 + 1; int(float(x * 1e-9) * 1e9) - x
# precision loss! (1 nanosecond)
>>> x = 2 ** 53 + 1; int(float(x * 1e-9) * 1e9) - x
>>> print(datetime.timedelta(seconds=2 ** 53 / 1e9))
104 days, 5:59:59.254741
``time.time()`` returns seconds elapsed since the UNIX epoch: January
1st, 1970. This function has been losing precision since April 1970 (47
years ago)::
>>> import datetime
>>> unix_epoch = datetime.datetime(1970, 1, 1)
>>> print(unix_epoch + datetime.timedelta(seconds=2**53 / 1e9))
1970-04-15 05:59:59.254741
Previously rejected PEP
Five years ago, PEP 410 proposed a large and complex change to all
Python functions returning time, to support nanosecond resolution using
the ``decimal.Decimal`` type.
The PEP was rejected for different reasons:
* The idea of adding a new optional parameter to change the result type
  was rejected: it's an uncommon (and bad?) programming practice in
  Python.
* It was not clear if hardware clocks really had a resolution of 1
nanosecond, especially at the Python level.
* The ``decimal.Decimal`` type is uncommon in Python, so code would
  have to be adapted to handle it.
CPython enhancements of the last 5 years
Since the PEP 410 was rejected:
* The ``os.stat_result`` structure got 3 new fields for timestamps as
  nanoseconds (Python ``int``): ``st_atime_ns``, ``st_mtime_ns`` and
  ``st_ctime_ns``
* The PEP 418 was accepted; Python 3.3 got 3 new clocks:
  ``time.monotonic()``, ``time.perf_counter()`` and
  ``time.process_time()``
* The CPython private "pytime" C API handling time now uses a new
``_PyTime_t`` type: simple 64-bit signed integer (C ``int64_t``).
The ``_PyTime_t`` unit is an implementation detail and not part of the
API. The unit is currently ``1 nanosecond``.
Existing Python APIs using nanoseconds as int
The ``os.stat_result`` structure has 3 fields for timestamps as
nanoseconds (``int``): ``st_atime_ns``, ``st_mtime_ns`` and
``st_ctime_ns``.
The ``ns`` parameter of the ``os.utime()`` function accepts a
``(atime_ns: int, mtime_ns: int)`` tuple: nanoseconds.
This PEP adds five new functions to the ``time`` module:

* ``time.clock_gettime_ns(clock_id)``
* ``time.clock_settime_ns(clock_id, time: int)``
* ``time.monotonic_ns()``
* ``time.perf_counter_ns()``
* ``time.time_ns()``
These functions are similar to the version without the ``_ns`` suffix,
but use nanoseconds as Python ``int``.
For example, ``time.monotonic_ns() == int(time.monotonic() * 1e9)`` if
``monotonic()`` value is small enough to not lose precision.
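On Python 3.7+, where these functions exist, the relationship can be checked directly (the two clock reads happen at slightly different times, so only approximate equality is asserted):

```python
import time

t_ns = time.monotonic_ns()   # nanoseconds as a Python int
t = time.monotonic()         # seconds as a float

assert isinstance(t_ns, int)
# The reads are microseconds apart; allow a generous 1-second margin.
assert abs(t_ns - int(t * 1e9)) < 10**9
```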
This PEP only proposes to add new functions for getting or setting clocks
with nanosecond resolution. Clocks are likely to lose precision,
especially when their reference is the UNIX epoch.
Python has other functions handling time (get time, timeout, etc.), but
no nanosecond variant is proposed for them, since they are less likely
to lose precision.

Examples of unchanged functions:
* ``os`` module: ``sched_rr_get_interval()``, ``times()``, ``wait3()``
* ``resource`` module: ``ru_utime`` and ``ru_stime`` fields of
  ``getrusage()``
* ``signal`` module: ``getitimer()``, ``setitimer()``
* ``time`` module: ``clock_getres()``
Since the ``time.clock()`` function was deprecated in Python 3.3, no
``time.clock_ns()`` is added.
Alternatives and discussion
The ``time.time_ns()`` API is not "future-proof": if clock resolutions
increase, new Python functions may be needed.
In practice, a resolution of 1 nanosecond is currently enough for all
structures used by operating system functions.
Hardware clocks with a resolution better than 1 nanosecond already
exist. For example, the frequency of a CPU TSC clock is the CPU base
frequency: the resolution is around 0.3 ns for a CPU running at 3
GHz. Users who have access to such hardware and really need
sub-nanosecond resolution can easily extend Python for their needs.
Such a rare use case doesn't justify designing the Python standard
library to support sub-nanosecond resolution.
For the CPython implementation, nanosecond resolution is convenient: the
standard and well supported ``int64_t`` type can be used to store time.
It supports a time delta between -292 years and +292 years. Using the
UNIX epoch as reference, this type supports time from the year 1677 to
the year 2262::
>>> 1970 - 2 ** 63 / (10 ** 9 * 3600 * 24 * 365.25)
>>> 1970 + 2 ** 63 / (10 ** 9 * 3600 * 24 * 365.25)
It was proposed to modify ``time.time()`` to use a float type with
better precision. PEP 410 proposed to use ``decimal.Decimal``, but it
was rejected. Apart from ``decimal.Decimal``, no portable ``float``
type with better precision is currently available in Python. Changing
the builtin Python ``float`` type is out of the scope of this PEP.
Other ideas of new types were proposed to support larger or arbitrary
precision: fractions, structures or 2-tuple using integers,
fixed-precision floating point number, etc.
See also the PEP 410 for a previous long discussion on other types.
Adding a new type requires more effort to support it than reusing
``int``. The standard library, third-party code and applications would
have to be modified to support it.
The Python ``int`` type is well known, well supported, easy to
manipulate, and supports all arithmetic operations, like
``dt = t2 - t1``.
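A short check of why ``int`` is the right fit here: past 2**53 nanoseconds (about 104 days), a 64-bit float can no longer represent every integer, while ``int`` arithmetic stays exact.

```python
# 200 days expressed in nanoseconds, plus 1 ns.
t_ns = 200 * 24 * 3600 * 10**9 + 1

assert t_ns > 2**53                # beyond the exact-integer range of a float
assert int(float(t_ns)) != t_ns    # the float round-trip drops the 1 ns
assert t_ns - (t_ns - 1) == 1      # int subtraction stays exact
```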
Moreover, using nanoseconds as an integer is not new in Python: it's
already used for ``os.stat_result`` and the ``ns`` parameter of
``os.utime()``.
If the Python ``float`` type becomes larger (ex: decimal128 or
float128), the ``time.time()`` precision will increase as well.
The ``time.time(ns=False)`` API was proposed to avoid adding new
functions. It's an uncommon (and bad?) programming practice in Python to
change the result type depending on a parameter.
Different options were proposed to allow the user to choose the time
resolution. If each Python module uses a different resolution, it can
become difficult to handle different resolutions, instead of just
seconds (``time.time()`` returning ``float``) and nanoseconds
(``time.time_ns()`` returning ``int``). Moreover, as written above,
there is no need for a resolution better than 1 nanosecond in practice in
the Python standard library.
Annex: Clocks Resolution in Python
Script to measure the smallest difference between two ``time.time()`` and
``time.time_ns()`` reads, ignoring differences of zero::
    import math
    import time

    LOOPS = 10 ** 6

    print("time.time_ns(): %s" % time.time_ns())
    print("time.time(): %s" % time.time())

    min_dt = [abs(time.time_ns() - time.time_ns())
              for _ in range(LOOPS)]
    min_dt = min(filter(bool, min_dt))
    print("min time_ns() delta: %s ns" % min_dt)

    min_dt = [abs(time.time() - time.time())
              for _ in range(LOOPS)]
    min_dt = min(filter(bool, min_dt))
    print("min time() delta: %s ns" % math.ceil(min_dt * 1e9))
Results of time(), perf_counter() and monotonic().
Linux (kernel 4.12 on Fedora 26):
* time_ns(): **84 ns**
* time(): **239 ns**
* perf_counter_ns(): 84 ns
* perf_counter(): 82 ns
* monotonic_ns(): 84 ns
* monotonic(): 81 ns

Windows:
* time_ns(): **318000 ns**
* time(): **894070 ns**
* perf_counter_ns(): 100 ns
* perf_counter(): 100 ns
* monotonic_ns(): 15000000 ns
* monotonic(): 15000000 ns
The difference on ``time.time()`` is significant: **84 ns (2.8x better)
vs 239 ns on Linux, and 318 us (2.8x better) vs 894 us on Windows**. The
difference (precision loss) will grow in the coming years, since every
day adds 86,400,000,000,000 nanoseconds to the system clock.
The difference on ``time.perf_counter()`` and ``time.monotonic()`` is
not visible in this quick script, since the script runs for less than 1
minute and the uptime of the computer used to run it was smaller than 1
week. A significant difference should be seen with an uptime of 104 days
or greater.
Internally, Python starts the ``monotonic()`` and ``perf_counter()``
clocks at zero on some platforms, which indirectly reduces the
precision loss.
This document has been placed in the public domain.