There's a Stack Overflow report I suspect is worth looking into, but
it requires far more RAM (over 80GB) than I have. The OP whittled it
down to a reasonably brief & straightforward pure Python 3 program.
It builds a ternary search tree, with perhaps a billion nodes. The
problem is that it "takes forever" for Python to reclaim the tree
storage when it goes out of scope (the author waited at least hours).
Alas, the OP said it takes about 45 minutes to build the tree, and the
problem goes away if the tree is materially smaller. So it takes a
long time just to try once. With a tree about 10x smaller, for me it
takes about 45 seconds for Python to reclaim the tree storage.
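For anyone who wants a feel for the shape of the workload without 80GB of RAM, here is a scaled-down sketch. The node layout and tree shape are assumptions (the OP's actual program isn't reproduced here); the point is only "very many small linked objects, reclaimed all at once":

```python
import time

class Node:
    # Hypothetical ternary-search-tree node; the OP's exact class isn't shown.
    __slots__ = ("ch", "lo", "eq", "hi")
    def __init__(self, ch):
        self.ch = ch
        self.lo = self.eq = self.hi = None

def build_chain(n):
    """Build a deep chain of n small nodes, iteratively (no recursion)."""
    root = Node("a")
    cur = root
    for _ in range(n - 1):
        cur.eq = Node("a")
        cur = cur.eq
    return root

tree = build_chain(200_000)
t0 = time.perf_counter()
del tree  # deallocation walks the whole chain (the "trashcan" guards the recursion)
print(f"reclaimed in {time.perf_counter() - t0:.3f}s")
```

Scaling n up by a few orders of magnitude is what makes the reclaim time, and any trashcan/obmalloc edge cases, visible.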
The tree is deep enough that the "trashcan" may be implicated, and the
node objects are small enough that obmalloc carves them out of a
(relatively) great many arenas. Those are two ways in which Python
_may_ be to blame. The sheer number of objects involved may be
provoking edge cases we normally never see.
But, for a start, it would be good to know if anyone else can actually
reproduce the problem.
I modified Makefile.pre.in to avoid installing wininst-*.exe files,
since these files are only useful on Windows: the distutils
bdist_wininst command only works on Windows.
I made the assumption that "make install" is only used on Unix, not on
Windows. I never tried to build Python in Cygwin or MinGW on Windows.
If someone knows these platforms and considers that wininst-*.exe
should still be installed on them, please propose a pull request for
bpo-37468.
By the way, bdist_wininst is now deprecated in Python 3.8: wheel
packages are now preferred.
Night gathers, and now my watch begins. It shall not end until my death.
2019-06-03 I have created PR https://github.com/python/cpython/pull/13772 ,
which adds IPv6 scoped address support to the ipaddress module.
It is critical for everyone who is dealing with IPv6 networking; for
example, the Salt project uses a patched module.
So it would be very nice to have my changes merged and avoid such
workarounds.
Here is a link to the bug: https://bugs.python.org/issue34788.
Please pay attention to this issue.
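For reference, this is roughly what scoped-address support looks like once merged (the API as it eventually shipped in Python 3.9; requires 3.9+, and the zone name "eth0" here is just an example):

```python
import ipaddress

# A link-local address with a zone/scope identifier after the '%'
addr = ipaddress.ip_address("fe80::1%eth0")

print(type(addr).__name__)  # IPv6Address
print(addr.scope_id)        # 'eth0'
print(addr.is_link_local)   # True
print(str(addr))            # 'fe80::1%eth0'
```

Before this change, `ip_address` raised ValueError on any address containing a `%` zone, which is why projects had to patch the module.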
When passing **kwargs to a callable, the expectation is that kwargs is a
dict with string keys. The interpreter enforces that it's a dict, but it
does NOT check the types of the keys. It's currently the job of the
called function to check that. In some cases, this check is not applied:
>>> from collections import OrderedDict
So this leads to the question: should the interpreter check the keys of
a **kwargs dict?
I don't have an answer myself; I'm just asking the question because it
comes up in https://github.com/python/cpython/pull/13930.
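The elided snippet above imported OrderedDict; here is a self-contained sketch of the enforcement in question (function name is hypothetical; behavior shown is what the eval loop does for Python-level functions):

```python
from collections import OrderedDict

def f(**kwargs):
    return kwargs

# For a Python-level function, the interpreter rejects non-string keys
# when it processes the keyword arguments:
try:
    f(**OrderedDict([(1, 2)]))
except TypeError as exc:
    print("rejected:", exc)

# Other call paths (e.g. some C-implemented callables reached through
# PyObject_Call) have not always applied the same check across CPython
# versions -- that inconsistency is what the question above is about.
```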
The table with auditing events does not render on docs.python.org:
https://docs.python.org/3.9/library/audit_events.html. Steve and I are
going to present the auditing feature tomorrow at EuroPython. It would
be helpful to have the table available.
It works on Steve's and my local machines without any issues. I suspect
it's either an outdated Sphinx version or a caching issue. Could somebody
from the docs team, or someone with shell access to the docs machine,
please look into it?
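For context, the feature that table documents is PEP 578 runtime auditing; a minimal sketch of the API (assuming Python 3.8+, where `sys.addaudithook` and the built-in audit events exist):

```python
import sys

events = []

def hook(event, args):
    # Record the name of every audit event the runtime raises.
    events.append(event)

# Note: audit hooks cannot be removed once installed.
sys.addaudithook(hook)

# Trigger an audited operation: the compile() builtin raises "compile".
compile("1 + 1", "<audit-demo>", "eval")
print("compile" in events)  # True
```

The table on the linked page is meant to list exactly these event names and their arguments.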
(I don't know the best list to post this to, so if this is not it,
please forgive me and point me in the right direction. Thanks.)
So my inbox, and probably many of yours, was flooded this afternoon
with a dozen-plus emails from the Python-Announce list. I understand
that this list requires every email to be manually approved by a
moderator. The problem is that this is done infrequently. Prior to
today, the last round of approvals was June 26, almost two weeks ago.
This is not an atypical delay; on the contrary, it seems that
moderators only look at the queue once every one to two weeks.
There are several problems with these delays:
1. They result in floods of emails, with a large number of emails in a
short period of time. This makes inbox management difficult on the
days that approvals are done. Before you argue that "it's fine if you
have the right tools configured in the right way", consider that there
are probably many people who are subscribed to Python-Announce who
have no interest in and are not subscribed to any of the actual
discussion lists where such tools are most beneficial. Complex tool
configurations should not be a requirement for managing incoming
emails from what is essentially (to those people) a notification-only
mailing list. These people would be better served by frequent
approvals several times a week, allowing them to get fewer emails at
one time, but in a more timely manner.
2. Speaking of a more timely manner, the lengthy delays result in
redundant and outdated emails going through after the point where they
are no longer relevant. One such issue exemplified by today's set of
approvals (and seen on previous occasions before) is an announcement
of a new release of a PyPI package not being approved until after
there has already been a subsequent release to that same package. In
this case I am referring to the pytest 5.0.0 announcement sent to the
list on June 28 (according to the headers), followed by the pytest
5.0.1 announcement sent to the list on July 5. Neither was approved
and delivered to subscribers until today.
3. More importantly in terms of delays, on July 3 an announcement was
sent to the list regarding the impending switch of EuroPython ticket
rates to the late registration rate on July 6. This is an example of a
time-sensitive announcement that needs to not be delayed. Instead, the
email was not approved and delivered to subscribers until today, July
8, after the conference had already begun, and not in time for list
subscribers to react and avoid the late registration rates.
Is there a solution to this that would enable moderators to approve
more frequently? I understand that they are probably volunteers and
need to find spare time to wade through the queue, but if approvals
are done more frequently (even daily), then it will consume much less
time on each occasion. It would go from a task requiring an entire
hour (as it apparently did today based on the delivery timestamps) to
something that can be done on a coffee break.
After a few days of delay, but somewhat cutely timed with the US Independence Day, I present you Python 3.8.0b2:
This release is the second of four planned beta release previews. Beta release previews are intended to give the wider community the opportunity to test new features and bug fixes and to prepare their projects to support the new feature release. The next pre-release of Python 3.8 will be 3.8.0b3, currently scheduled for 2019-07-29.
Call to action
We strongly encourage maintainers of third-party Python projects to test with 3.8 during the beta phase and report issues found to the Python bug tracker <https://bugs.python.org/> as soon as possible. While the release is planned to be feature complete entering the beta phase, it is possible that features may be modified or, in rare cases, deleted up until the start of the release candidate phase (2019-09-30). Our goal is to have no ABI changes after beta 3 and no code changes after 3.8.0rc1, the release candidate. To achieve that, it will be extremely important to get as much exposure for 3.8 as possible during the beta phase.
Please keep in mind that this is a preview release and its use is not recommended for production environments.
No more non-bugfixes allowed on the “3.8” branch
The time has come, team. Please help make Python 3.8 as stable as possible and keep all features not currently landed for Python 3.9. Don’t fret, it’ll come faster than you think.
Recently, we moved the optimization for the removal of dead code of the
form `if 0:` to the AST, so we use JUMP bytecodes instead (being
completed in PR 14116). The reason is that currently, any syntax error
in such a block is never reported. For instance, blocks like
`if 0: return` at module level do not raise any syntax error (just some
examples). In https://bugs.python.org/issue37500 it was reported
that, after that change, code coverage will decrease because coverage.py
sees these new bytecodes (even if they are not executed). In general,
the code object is a bit bigger and the optimization now requires a
JUMP instruction to be executed, but syntax errors are reported.
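The behavior being argued for can be checked directly. This sketch compiles a module-level `if 0:` block containing a `return`, which is illegal outside a function regardless of whether the block could ever run; on an interpreter with the AST-level change described above, the dead block still gets its syntax checked:

```python
# `return` is a syntax error at module level, even inside dead code.
src = "if 0:\n    return\n"

try:
    compile(src, "<example>", "exec")
    print("compiled: the block's syntax was never checked (old behavior)")
except SyntaxError as exc:
    print("SyntaxError reported:", exc.msg)
```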
The discussion on issue 37500 is about whether we should prioritize the
optimization or the correctness of reporting syntax errors. In my opinion,
SyntaxErrors should be reported independently of the value of variables
(such as __debug__) or constants, as this is a property of the code being
written, not of the code being executed. Also, as CPython is the reference
implementation of Python, the danger here is that it could be interpreted
that this optimization is part of the language and that its behavior should
be mirrored in every other Python implementation. Elsewhere we have always
prioritized correctness over speed or optimizations.
I am writing this email to find out what other people think. Should we
revert the change and not report syntax errors in optimized blocks? Does
someone see a viable way of reporting the errors while not emitting the
bytecode for these blocks?