In Python 2.5, `0or` was accepted by the Python parser. It became an
error in 2.6 because "0o" became recognized as an incomplete octal
number. `1or` is still accepted.
On the other hand, `1if 2else 3` is accepted despite the fact that "2e"
can be recognized as an incomplete floating point number. In this case
the tokenizer pushes "e" back and returns "2".
Shouldn't it do the same with "0o"? It is possible to make `0or`
parseable again. The Python implementation is able to tokenize this example:
$ echo '0or[]' | ./python -m tokenize
1,0-1,1: NUMBER '0'
1,1-1,3: NAME 'or'
1,3-1,4: OP '['
1,4-1,5: OP ']'
1,5-1,6: NEWLINE '\n'
2,0-2,0: ENDMARKER ''
On the other hand, all these examples look weird. There is an asymmetry:
`1or 2` is valid syntax, but `1 or2` is not. It is hard to visually
recognize the boundary between a number and the following identifier or
keyword, especially since numbers can contain letters ("b", "e", "j", "o",
"x") and underscores, and identifiers can contain digits. On both sides
of the boundary there can be letters, digits, and underscores.
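The asymmetry can be checked directly with the stdlib `tokenize` module and `compile()`. A small sketch (behaviour may vary slightly across CPython versions; recent releases emit a SyntaxWarning for a literal immediately followed by a keyword):

```python
import io
import tokenize

def significant_tokens(src):
    """Return (token name, string) pairs, skipping layout tokens."""
    return [
        (tokenize.tok_name[tok.type], tok.string)
        for tok in tokenize.generate_tokens(io.StringIO(src).readline)
        if tok.type in (tokenize.NUMBER, tokenize.NAME)
    ]

# `1or 2` tokenizes as NUMBER '1', NAME 'or', NUMBER '2' -> valid syntax.
print(significant_tokens("1or 2\n"))

# `1 or2` tokenizes as NUMBER '1', NAME 'or2' -> SyntaxError at compile time.
try:
    compile("1 or2", "<example>", "eval")
except SyntaxError as exc:
    print("SyntaxError:", exc.msg)
```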
I propose to change the Python syntax by adding a requirement that there
should be whitespace or a delimiter between a numeric literal and the
following keyword or identifier.
webmaster has already heard from 4 people who cannot install it.
I sent them to the bug tracker or to python-list but they seem
not to have gone either place. Is there some guide I should be
sending them to, 'how to debug installation problems'?
If one goes to https://www.python.org/downloads from a Windows browser,
the default download URL is for the 32-bit installer instead of the
64-bit one.
I wonder why this is still the case?
Shouldn't we encourage new Windows users (who may not even know the
distinction between the two architectures) to use the 64-bit version of
Python, since most likely they can?
If this is not the correct forum for this, please let me know where I can
direct my question/feature request, thanks.
With the revised PEP 1 published, the Steering Council members have
been working through the backlog of open PEPs, figuring out which ones
are at a stage of maturity where we think it makes sense to appoint a
BDFL-Delegate to continue moving the PEP through the review process,
and eventually make the final decision on whether to accept or reject
the change.
We'll be announcing those appointments as we go, so I'm happy to
report that I will be handling the BDFL-Delegate responsibilities for
the following PEPs:
* PEP 499: Binding "-m" executed modules under their module name as
well as `__main__`
* PEP 574: Pickle protocol 5 with out of band data
I'm also pleased to report that Petr Viktorin has agreed to take on
the responsibility of reviewing the competing proposals to improve the
way CPython's C API exposes callables for direct invocation by third
party low level code:
* PEP 576: Exposing the internal FastCallKeywords convention to 3rd
party code
* PEP 580: Revising the callable struct hierarchy internally and in
the public C API
* PEP 579: Background information for the problems the other two PEPs
are attempting to address
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia
I've put the final touches to PEP 574 - Pickle protocol 5 with
out-of-band data (*). It is now ready for review. The implementation
is fully functional, as well as its PyPI backport (**), and has
regression tests against Numpy. Numpy and PyArrow have their own tests
against the pickle5 backport.
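For illustration, the out-of-band machinery described by PEP 574 (as shipped in Python 3.8's `pickle` module) lets large buffers bypass the pickle stream entirely. A minimal sketch:

```python
import pickle

data = bytearray(b"large binary payload")
buffers = []

# With protocol 5, a buffer_callback receives PickleBuffer views instead
# of the data being copied into the pickle stream itself.
payload = pickle.dumps(pickle.PickleBuffer(data), protocol=5,
                       buffer_callback=buffers.append)
assert b"large binary payload" not in payload  # travelled out-of-band

# The consumer supplies the same buffers again at load time.
restored = pickle.loads(payload, buffers=buffers)
print(bytes(restored))  # b'large binary payload'
```

The point of the protocol is zero-copy transport: a framework can ship the collected buffers over shared memory or a socket without serializing them in-band.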
I have a crazy idea of getting unittest.mock up to 100% code coverage.
I noticed at the bottom of all of the test files in testmock/, there's a:
if __name__ == '__main__':
    unittest.main()
How would people feel about these going away? I don't *think* they're
needed now that we have unittest discovery, but thought I'd ask.
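For what it's worth, discovery picks tests up with or without that guard. A quick sketch using a throwaway test module (the file name and test case are made up for illustration):

```python
import os
import tempfile
import unittest

with tempfile.TemporaryDirectory() as tmp:
    # A test module with no `if __name__ == '__main__'` block at all.
    with open(os.path.join(tmp, "test_example.py"), "w") as f:
        f.write(
            "import unittest\n"
            "class ExampleTest(unittest.TestCase):\n"
            "    def test_truth(self):\n"
            "        self.assertTrue(True)\n"
        )
    # Equivalent to `python -m unittest discover -s <tmp>` on the command line.
    suite = unittest.defaultTestLoader.discover(start_dir=tmp)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    print(result.testsRun)
```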
Two weeks ago, I started a thread "No longer enable Py_TRACE_REFS by
default in debug build", but I lost myself in details and forgot the
main purpose of my proposal...
Let me retry from scratch with a more explicit title: I would like to
be able to run C extensions compiled in release mode on a Python
compiled in debug mode ("pydebug"). The use case is to debug bugs in C
extensions thanks to the additional runtime checks of a Python debug
build, and more generally to get a better debugging experience on
Python. Even for pure Python code, a debug build is useful (to get the
Python traceback in gdb using the "py-bt" command).
Currently, using a Python compiled in debug mode means having to
recompile C extensions in debug mode. Compiling a C extension requires a
C compiler, header files, pulling in dependencies, etc. It can be very
complicated in practice (and pollutes your system with all these
additional dependencies). On Linux it's already hard, but on Windows
it can be even harder.
Just one concrete example: no debug build of numpy is provided at
https://pypi.org/project/numpy/ Good luck building numpy in debug mode
manually (installing OpenBLAS, ATLAS, a Fortran compiler, Cython, etc.).
The first requirement for this use case is that a Python debug build
supports the ABI of a release build. The current blocker is that
the Py_DEBUG define implies the Py_TRACE_REFS define: PyObject gets 2
extra fields (_ob_prev and _ob_next) which change the offset of all
attributes of all objects and make the ABI completely incompatible. I
propose that Py_DEBUG no longer imply Py_TRACE_REFS *by default* (but keep
it available as an opt-in build option).
(A build with Py_TRACE_REFS would be a different ABI.)
The second issue is that library filenames are different for a debug
build: SOABI gets an additional "d" flag for Py_DEBUG. A debug build
should first look for "NAME.cpython-38dm.so" (flags: "dm"), but then
also look for "NAME.cpython-38m.so" (flags: "m"). The opposite is not
possible: a debug build contains many additional functions missing
from a release build.
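The flags in question are visible through `sysconfig` (the values shown in the comments are illustrative; they depend on the Python version, platform, and build options):

```python
import sysconfig

# SOABI encodes the build flags: e.g. 'cpython-38m' for a release build,
# 'cpython-38dm' for a debug build (the extra "d" flag).
print(sysconfig.get_config_var("SOABI"))

# EXT_SUFFIX is the full suffix used when looking up extension modules,
# e.g. '.cpython-38m-x86_64-linux-gnu.so'.
print(sysconfig.get_config_var("EXT_SUFFIX"))
```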
For Windows, maybe we should provide a Python compiled in debug mode
with the same C runtime as a Python compiled in release mode.
Otherwise, the debug C runtime causes another ABI issue.
Maybe pip could be enhanced to support installing C extensions
compiled in release mode when using a debug build. But that's more for
convenience; it's not really required, since it is easy to switch the
Python runtime between release and debug builds.
Apart from Py_TRACE_REFS, I'm not aware of other ABI differences in
structures. I know that the COUNT_ALLOCS define changes the ABI, but
it's not implied by Py_DEBUG: you have to opt in to COUNT_ALLOCS. (I
propose to do the same for Py_TRACE_REFS ;-))
Note: Refleaks buildbots don't use Py_TRACE_REFS to track memory
leaks, only sys.gettotalrefcount().
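As a side note, `sys.gettotalrefcount()` only exists on debug builds, which makes it a convenient runtime marker for one:

```python
import sys

# sys.gettotalrefcount() is only compiled in when Py_DEBUG is defined,
# so its presence reliably distinguishes a debug build from a release one.
if hasattr(sys, "gettotalrefcount"):
    print("debug build, total refcount:", sys.gettotalrefcount())
else:
    print("release build")
```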
A Python debug build has many benefits. If you ignore C extensions, the
debug build is usually compiled with compiler optimizations disabled,
which makes debugging in gdb a much better experience. If you have never
tried: on a release build, most (if not all) variables are "<optimized
out>" and it's really painful to do even basic debugging tasks like
displaying the current Python frame.
Assertions are removed in release mode, whereas they can detect a
wide range of bugs much earlier: integer overflows, buffer under- and
overflows, exceptions ignored silently, etc. Nobody likes to see a bug
for the first time in production. For example, I modified Python 3.8
to log I/O errors when a file is closed implicitly, but only in
debug or development mode. In release mode, Python silently ignores the
EBADF error in such cases, whereas it can lead to very nasty bugs causing
Python to call abort() (which creates a coredump on Linux).
DeprecationWarning and ResourceWarning are shown by default in debug mode :-)
There are many other additional checks done at runtime; I cannot list
them all here.
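As an illustration of the development-mode behaviour mentioned above, `-X dev` enables ResourceWarning for files that are never closed explicitly (this sketch spawns a subprocess so the flag can be set):

```python
import os
import subprocess
import sys
import tempfile

# A script that drops an open file object without closing it.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("f = open(__file__)\nf = None\n")
    script = f.name

try:
    # Under -X dev, CPython warns about the unclosed file on stderr.
    proc = subprocess.run([sys.executable, "-X", "dev", script],
                          capture_output=True, text=True)
    print("ResourceWarning" in proc.stderr)
finally:
    os.unlink(script)
```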
Being able to switch between Python in release mode and Python in
debug mode is a first step. My long term plan would be to better
separate "Python" from its "runtime". CPython in release mode would be
one runtime, CPython in debug mode would be another runtime, PyPy can
be seen as another runtime, etc. The more general idea is: "compile your
C extension once and use any Python runtime".
If you opt-in for the stable ABI, you can already switch between
runtimes of different Python versions (ex: Python 3.6 or Python 3.8).
Some time ago, I proposed adding a `.fromisocalendar` alternate
constructor to `datetime` (bpo-36004
<https://bugs.python.org/issue36004>), with a corresponding
implementation (PR #11888
<https://github.com/python/cpython/pull/11888>). I advertised it on
datetime-SIG some time ago but haven't seen much discussion there, so
I'd like to bring it to python-dev's attention as we near the cut-off
for new Python 3.8 features.
Other than the fact that I've needed this functionality in the past, I
also think a good general principle for the datetime module is that when
a class (time, date, datetime) has a "serialization" method (.strftime,
.timestamp, .isoformat, .isocalendar, etc), there should be a
corresponding /deserialization/ method (.strptime, .fromtimestamp,
.fromisoformat) that constructs a datetime from the output. Now that
`fromisoformat` has been introduced in Python 3.7, I think `isocalendar` is
the only remaining method without an inverse. Do people agree with this
principle? Should we add the `fromisocalendar` method?
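For reference, a sketch of the round-trip the proposal enables (`date.fromisocalendar` shipped in Python 3.8, so this requires 3.8+):

```python
from datetime import date

d = date(2019, 1, 1)

# isocalendar() "serializes" to (ISO year, ISO week, ISO weekday);
# 2019-01-01 is a Tuesday in ISO week 1 of ISO year 2019.
iso_year, iso_week, iso_weekday = d.isocalendar()
print(iso_year, iso_week, iso_weekday)  # 2019 1 2

# The proposed inverse reconstructs the same date.
assert date.fromisocalendar(iso_year, iso_week, iso_weekday) == d
```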