I'd like to discuss the idea of adding a module to parse TOML
[toml-lang] to Python's standard library.
PEP-0518 -- Specifying Minimum Build System Requirements for Python
Projects [pep] suggests storing build-system dependencies in
`pyproject.toml`, yet Python itself does not support this format.
Various packaging-related projects like pip and pipenv already support
PEP-0518 and have vendored one of the existing TOML libraries in order
to read `pyproject.toml` files.
Besides that, TOML seems a better alternative to .cfg/.ini and .json --
both of which are already supported by Python's standard library -- and
parsing/dumping TOML correctly is tricky enough that it is worth
solving properly once, in one place.
There are a couple of TOML implementations out there [toml, pytoml,
tomlkit], and one would have to decide which one to prefer and migrate
into the stdlib.
If the result of this discussion leans towards adding TOML, I'd
volunteer to do it. This includes: coordinating with the maintainer of
the chosen library, writing the PEP (hopefully with some help) and
maintaining the module for at least two years.
Dr. Bastian Venthur http://venthur.de
Debian Developer venthur at debian org
Today I discovered that this struct
const char* name;
unsigned int flags;
PyType_Slot *slots; /* terminated by slot==0. */
with "PyType_Slot *slots" being on line 190 of object.h causes a problem when compiled with code that brings in Qt. Qt has macro definitions of "slots".
With a cursory preprocessing of the file I was working with (using the handy gcc options -dM -E), I found that
"slots" was defined to nothing,
and hence caused problems when object.h was brought into the mix.
I will try to make a simple reproducer tomorrow. I know this probably could be solved by header file inclusion re-ordering,
or in some cases #undef'ing slots before including Python.h, but I also thought the Python dev team would like to know
about this issue.
To implement a full version of PEP 604
<https://www.python.org/dev/peps/pep-0604/>, I analyzed the typing
module, starting with _GenericAlias.
1) I must rewrite:
- def _type_check(arg, msg, is_argument=True)
- def _type_repr(obj)
- def _collect_type_vars(types)
- def _subs_tvars(tp, tvars, subs)
- def _check_generic(cls, parameters)
- def _remove_dups_flatten(parameters)
- def _tp_cache(func)
- class _Final
- class _Immutable
- class _SpecialForm(_Final, _Immutable, _root=True)
- class ForwardRef(_Final, _root=True)
- class TypeVar(_Final, _Immutable, _root=True)
- def _is_dunder(attr)
- class _GenericAlias(_Final, _root=True)
- class Generic
- class _TypingEmpty
- class _TypingEllipsis
- def _get_protocol_attrs(cls)
- def _is_callable_members_only(cls)
- def _allow_reckless_class_cheks()
- class _ProtocolMeta(ABCMeta)
- class Protocol(Generic, metaclass=_ProtocolMeta)
2) The function _tp_cache uses functools.lru_cache():
cached = functools.lru_cache()(func)
it's not reasonable to move lru_cache() into the core
3) The method TypeVar.__init__() uses:
def_mod = sys._getframe(1).f_globals['__name__'] # for pickling
4) The function _allow_reckless_class_cheks() uses:
return sys._getframe(3).f_globals['__name__'] in ['abc', 'functools']
5) The _proto_hook() helper in Protocol.__init_subclass__() uses:
if (isinstance(annotations, collections.abc.Mapping)
it's not reasonable to move the Mapping type into the core
It's not enough to move the typing classes: I must also move
functools.lru_cache() and its dependencies, collections.abc.Mapping and
its dependencies, and track the frame level.
*It's too big for me.*
Maybe the approach with only PEP 563 is enough:
from __future__ import annotations
This new syntax would only be usable in annotations. Without runtime
evaluation, and without modifying issubclass() and isinstance(), that
may be acceptable. Only mypy (and other tools like it) would have to be
updated.
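As a sketch of that reduced approach: under PEP 563 annotations are
never evaluated at runtime, so the new `|` syntax parses and is stored
as a plain string, with no runtime support needed from typing or the
core. The function below is my own example, not code from the PEP:

```python
from __future__ import annotations  # PEP 563: annotations are not evaluated

# PEP 604 syntax in an annotation; with PEP 563 this is kept as the
# literal source string, so no __or__ support on types is required.
def find(needle: str, haystack: list[str]) -> int | None:
    return haystack.index(needle) if needle in haystack else None

print(find.__annotations__["return"])  # the string 'int | None'
```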
Currently __trunc__ is used for two purposes:
* In math.trunc() which just calls __trunc__.
* In int() and PyNumber_Long() as a fallback if neither __int__ nor
__index__ are implemented.
Unlike __int__ and __index__, there is no slot corresponding to
__trunc__, so using it is slower than using __int__ or __index__. float
and Decimal implement __int__ and __trunc__ with the same code. Only
Fraction implements __trunc__ but not __int__.
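To make the fallback concrete, here is a minimal hypothetical class
(my own example) that implements only __trunc__. math.trunc() calls it
directly; whether int() also accepts it depends on the fallback being
discussed, and that behaviour varies by Python version, so it is not
shown:

```python
import math

class Truncatable:
    # Hypothetical type implementing only __trunc__, with neither
    # __int__ nor __index__ defined.
    def __trunc__(self):
        return 7

print(math.trunc(Truncatable()))  # 7: math.trunc() simply calls __trunc__
```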
I propose to deprecate falling back to __trunc__ when converting to
int, and to remove it in the future. Fraction should implement __int__.
We cannot use __trunc__ for setting the nb_int slot because it can
On behalf of the steering council I am happy to announce that as BDFL-Delegate I am accepting PEP 602 to move us to an annual release schedule (gated on a planned update; see below).
The steering council thinks that having a consistent schedule every year for when we hit beta, RC, and final will help the community:
* Know when to start testing the beta to provide feedback
* Know when to expect the RC so the community can prepare their projects for the final release
* Know when the final release will occur so they can coordinate their own releases (if necessary)
* Allow core developers to more easily plan their work to make sure work lands in the release they are targeting
* Make sure that core developers and the community have a shorter amount of time to wait for new features to be released
The acceptance is gated on Łukasz updating PEP 602 to reflect a planned shift in scheduling (he's been busy with a release of Black):
* 3 months for betas instead of 2
* 2 months for RCs instead of 1
This was discussed on https://discuss.python.org in order to give the community enough time to provide feedback in the betas while having enough time to thoroughly test the RC and to prep for the final release so the delay from Python's final release to any new project releases is minimal. It should also fit into the release schedule of Linux distributions like Fedora better than previously proposed so the distributions can test the RC when they start preparing for their own October releases. If this turns out to be a mistake after we try it out for Python 3.9 we can then discuss going back to longer betas and shorter RCs for the release after that. This will not change when feature development is cut off relative to PyCon US nor the core dev sprints happening just before the final release or the alpha of the next version.
To help people who cannot upgrade on an annual cycle, do note that:
* PEP 602 now says that deprecations will last two releases, which is two years instead of the current 18 months
* Now that the stable ABI has been cleaned, extension modules should feel more comfortable targeting the stable ABI which should make supporting newer versions of Python much easier
As part of the shift to a two-year deprecation time frame, I will be restarting discussions around PEP 387 as BDFL-Delegate so we can have a clearer deprecation and backwards-compatibility policy for those who find an annual cycle too fast; PEP 387 will be updated to reflect this two-year time frame (heads-up, Benjamin 😉).
Thanks to Łukasz, Nick, and Steve for PEPs 602, 605, and 607 and everyone else who provided feedback on those PEPs!
In early November this year I'll be leading the first ever Python
'EnHackathon' at the Ensoft Cisco office - we anticipate having ~10
first-time contributors, all with experience writing C and/or Python (and
training in both). We each have up to 5 days (per year) to contribute, with
my intention being to focus on contributing to CPython. We will be blogging
about our experience (probably using GitHub pages) - I'll send out the URL
when it's been set up.
Having spoken to people about this at PyCon UK this year and emailed on the
core-mentorship mailing list, I'm posting here looking for any core devs
who would be happy to provide us with some guidance. I'm wary of PR reviews
being a blocker, and not wanting my co-contributors to feel disheartened by
issues they're working on not reaching a resolution.
We're open to working on almost any area of CPython, although
particular areas of interest/familiarity might be: CPython core, re,
unittest, subprocess, asyncio, ctypes, typing. There would be scope for
us to work in small teams on more substantial issues if that is seen as
a useful way to contribute; otherwise we can start with some of the
easier issues on
Would anyone here be willing to offer some support to help us reach our
full potential? Please don't hesitate to contact me if you're interested in
any way, or if you have any advice.
If this year is a success there's a high probability we would look to do a
similar thing in future years (with the experience from this year already
in the bag)!
Due to awkward CDN caching, some users who downloaded the source code
tarballs of Python 3.5.8 got a preliminary version instead of the final
version. As best as we can tell, this only affects the .xz release;
there are no known instances of users downloading an incorrect version
of the .tgz file.
If you downloaded "Python-3.5.8.tar.xz" during the first twelve hours of
its release, you might be affected. It's easy to determine this for
yourself. The file size (15,382,140 bytes) and MD5 checksum
(4464517ed6044bca4fc78ea9ed086c36) published on the release page have
always matched the correct version. Also, the GPG signature file will
only report a "Good signature" for the correct .xz file (using "gpg
What's the difference between the two? The only difference is that the
final version also merges a fix for Python issue tracker #38243:
The fix adds a call to "html.escape" at a judicious spot, line 896 in
Lib/xmlrpc/server.py. The only other changes are one new test, to
ensure this new code is working, and an entry in the NEWS file. You can
see the complete list of changes here:
What should you do? It's up to you.
* If you and your users aren't using the XMLRPC library built in to
Python, you don't need to worry about which version of 3.5.8 you have.
* If you downloaded the .tgz tarball or the Git repo, you already have
the correct version.
* If you downloaded the xz file and want to make sure you have the
fix, check the MD5 sum, and if it's wrong download a fresh copy (and
make sure that one matches the known good MD5 sum!).
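For anyone scripting the check, a small sketch; the expected checksum
is the one quoted above, and the chunked read is just so a large
tarball doesn't have to fit in memory:

```python
import hashlib

EXPECTED_MD5 = "4464517ed6044bca4fc78ea9ed086c36"  # from the release page

def md5_of(path):
    # Hash the file in 1 MiB chunks.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# md5_of("Python-3.5.8.tar.xz") == EXPECTED_MD5  -> True for the good tarball
```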
To smooth over this whole sordid mess, I plan to make a 3.5.9 release in
the next day or so. It'll be identical to the 3.5.8 release; its only
purpose is to ensure that all users have the same updated source code,
including the fix for #38243.
Sorry for the mess, everybody,
The csv module is probably heavily utilized by newcomers to Python, CSV
being a very popular data exchange format.
Although there are better tools for processing tabular data, like
SQLite or Pandas, I suspect the csv module is still very popular.
There are many examples floating around how one can read and process CSV
with the csv module.
Quite a few tutorials show how to use namedtuple instead of the
DictReader, to gain memory savings and speed.
Python's own documentation has a recipe for this in the collections
module.
Hence, I was wondering: why not go the extra step and add a new class,
NamedTupleReader, to the csv module?
This class would do a good service for Python's users, especially
newcomers who are not yet aware of modules like collections.
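To make the proposal concrete, a hypothetical NamedTupleReader need not
be much more than the sketch below; the function name and signature are
my guesses, not an existing API:

```python
import csv
import io
from collections import namedtuple

def namedtuple_reader(csvfile, **fmtparams):
    # Hypothetical sketch of the proposed reader: the first row provides
    # the field names; every following row is yielded as a namedtuple.
    reader = csv.reader(csvfile, **fmtparams)
    Row = namedtuple("Row", next(reader))
    for fields in reader:
        yield Row(*fields)

data = io.StringIO("name,age\nalice,30\nbob,25\n")
rows = list(namedtuple_reader(data))
print(rows[0].name, rows[1].age)  # alice 25
```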
Would someone be willing to sponsor and review such a PR from me?
As a smaller change, we could simply add a link from the CSV module's
documentation to the recipe in the collections module.
What do you think?
Imagine there's no countries
it isn't hard to do
Nothing to kill or die for
And no religion too
Imagine all the people
Living life in peace
On behalf of the Python development community, I'm relieved to announce
the availability of Python 3.5.8.
Python 3.5 is in "security fixes only" mode. This new version only
contains security fixes, not conventional bug fixes, and it is a
You can find Python 3.5.8 here:
Oh what fun,