Can you please take a look at the following issue and try to reproduce it?
The following tests sometimes hang on "x86 Ubuntu Shared 3.x" and
"AMD64 Debian root 3.x" buildbots:
- test_notify_all() of test_multiprocessing_spawn
- test_double_close_on_error() of test_subprocess
- other sporadic failures of test_subprocess
I'm quite sure that they are regressions, maybe related to the
implementation of PEP 475. In the middle of all the PEP 475 changes, I
modified some functions to release the GIL during I/O, which wasn't the
case before. That may be related.
Are you able to reproduce these issues? I'm unable to reproduce them
on Fedora 21. Maybe they are more likely on Debian-like operating systems.
I looked into porting the Python 3 codecs module to MicroPython and saw
rather strange behavior, which is best illustrated with the following example:
import _codecs

fun = _codecs.utf_8_encode
#fun = hash
#fun = str.upper
#fun = foo          # foo = any plain Python function

class Bar:          # class wrapper and calls below filled in to make this runnable
    meth = fun

b = Bar()
print(fun("foo"))
print(b.meth("foo"))
Uncommenting either _codecs.utf_8_encode or hash (both builtin
functions) produces 2 similar output lines, which in particular means
that it's possible to call a native function as a (normal) object method,
which then behaves as if it were a staticmethod: self is not passed to
the native function.
Using a native object method in this manner produces a self-type
mismatch error (TypeError: descriptor 'upper' for 'str' objects doesn't
apply to a 'Bar' object).
And using a standard Python function expectedly produces an error about
an argument count mismatch, because, used as a method, the function gets
an extra first argument (self).
So the questions are:
1. How come native functions exhibit such magic behavior? Is it
documented somewhere? I never read or heard about it (I can't say I've
read each and every word in the Python reference docs, but I've read
enough). As an example, https://docs.python.org/3/library/stdtypes.html#functions
is rather short and mentions a difference in implementation, not in behavior.
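For what it's worth, the difference seems to come down to the descriptor protocol: in CPython, builtin functions don't implement __get__, so class-attribute lookup returns them unchanged, while method descriptors and plain Python functions do bind. A quick check (plain is just an illustrative stand-in):

```python
# Builtin functions are not descriptors, so attribute access through a
# class does not bind them; method descriptors and plain functions bind.
def plain(x):
    return x

print(hasattr(hash, '__get__'))       # builtin function: no binding
print(hasattr(str.upper, '__get__'))  # method descriptor: binds self
print(hasattr(plain, '__get__'))      # plain function: binds self
```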
2. The main question: how can one easily and cleanly achieve the same
behavior for standard Python functions? I'd think it's staticmethod(), but:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'staticmethod' object is not callable
(By "easily and cleanly" I mean without meta-programming tricks, like
accepting "*args, **kwargs" instead of the real arguments and then
munging args.)
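For what it's worth, one sketch of an answer to question 2 (names here are illustrative, not from the original snippet): the traceback above comes from calling the staticmethod object directly; stored as a class attribute and looked up through the class or an instance, the descriptor protocol unwraps it, so self is not passed:

```python
def utf8_len(s):
    # illustrative helper, not part of the original snippet
    return len(s.encode('utf-8'))

class Bar:
    meth = staticmethod(utf8_len)   # wrap to suppress method binding

b = Bar()
print(b.meth("hello"))    # self is not passed; behaves like a builtin
print(Bar.meth("hello"))  # same through the class
```

(Calling staticmethod(utf8_len) directly, without attribute lookup, is what fails with "'staticmethod' object is not callable" on interpreters of this era; staticmethod objects only became callable in Python 3.10.)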
The buildbot x86 XP-4 3.x hasn't compiled for 3 months or more (maybe
since Steve upgraded the Visual Studio project to VS 2015? I don't know).
Would it be possible to fix this buildbot, or to turn it off?
By the way, do we seriously want to support Windows XP? I mean, *who*
will maintain it (not me, sorry!)? I saw recent changes that explicitly
*drop* support for Windows older than Vista (stop using GetTickCount,
always call GetTickCount64, for time.monotonic).
On 26.03.15 10:08, victor.stinner wrote:
> changeset: 5741:7daf3bfd9586
> user: Victor Stinner <victor.stinner(a)gmail.com>
> date: Thu Mar 26 09:08:08 2015 +0100
> New PEP 490: Chain exceptions at C level
> +Python 3.5 introduces a new private ``_PyErr_ChainExceptions()`` function which
> +is enough to manually chain exceptions.
It was also added in Python 3.4.3.
I'm thinking about adding _PyErr_ReplaceException() to 2.7 to simplify
backporting patches from 3.x.
> +Functions like ``PyErr_SetString()`` don't automatically chain exceptions. To
> +make usage of ``_PyErr_ChainExceptions()`` easier, new functions are added:
> +* PyErr_SetStringChain(exc_type, message)
> +* PyErr_FormatChain(exc_type, format, ...)
> +* PyErr_SetNoneChain(exc_type)
> +* PyErr_SetObjectChain(exc_type, exc_value)
I would first make these functions private, as with _PyErr_ChainExceptions().
After proving their usefulness in the stdlib, they can be made public.
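For context, these C helpers mirror what implicit chaining already does at the Python level: raising inside an except block stores the active exception on the new one as __context__. A minimal illustration:

```python
# Raising while another exception is active chains it via __context__;
# _PyErr_ChainExceptions() achieves the same effect from C code.
try:
    try:
        raise ValueError("original error")
    except ValueError:
        raise OSError("replacement error")
except OSError as exc:
    print(type(exc.__context__).__name__)  # ValueError, the chained original
```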
After reading http://bugs.python.org/issue23085 and remembering our
struggle to get our own patches into CPython's bundled libffi (but not
into upstream libffi itself), I wonder: is there any reason anymore for
libffi to be bundled with CPython?
While a class is being initialized by a metaclass, it is not always possible to
call classmethods of the class, as they might use super(), which in turn uses
__class__, which is not initialized yet.
I know that this is a known issue, but well, sometimes it even makes sense
to fix already-known issues... so I wrote a patch that moves the initialization
of __class__ into type.__new__, so that one may use super() in a class
as soon as it starts existing. It's issue 23722 on the issue tracker.
To illustrate the problem, the following metaclass __init__ (abridged; it
goes on to call a classmethod of the newly created class) raises a RuntimeError:

class Meta(type):
    def __init__(self, name, bases, dict):
        super().__init__(name, bases, dict)

it works fine with my patch applied.
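A runnable sketch of the failure mode (names here are illustrative): a classmethod that uses zero-argument super() blows up when called from the metaclass on interpreters without the patch, because the __class__ cell is still empty; where the patch (or an equivalent fix, which eventually landed in CPython 3.6) is present, the call succeeds:

```python
class Meta(type):
    def __init__(self, name, bases, dict):
        super().__init__(name, bases, dict)
        try:
            self.f()  # classmethod call during class creation
            print("ok: __class__ initialized early enough")
        except RuntimeError as e:
            print("fails without the patch:", e)  # super(): empty __class__ cell

class A(metaclass=Meta):
    @classmethod
    def f(cls):
        super()  # zero-arg super() reads the __class__ cell
```

After class creation the cell is filled on any version, so A.f() always works.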
Technically, my patch slightly changes the semantics of Python if a metaclass's
__new__ returns something different from what type.__new__ returns.
But I could not find any documentation of the current behavior, the tests
don't test for it, and I consider the current behavior actually buggy.
As an example, let me give the following simple Singleton metaclass:

class Singleton(type):
    def __new__(cls, name, bases, dict):
        # returns an instance of the freshly created class, not the class itself
        return super().__new__(cls, name, bases, dict)()
The current Python fails the assertion, while with my patch everything is fine,
and I personally think __class__ should always refer to the class being
defined, which means, at a minimum, that it is actually, well, a class.
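As background for the __class__ claim above: the compiler creates a __class__ closure cell for any method that references __class__ or uses zero-argument super(), and it is filled in when the class object is created; a minimal illustration:

```python
class A:
    def which(self):
        # the compiler gives this method a __class__ cell, filled in
        # once the class object exists; zero-arg super() relies on it
        return __class__

assert A().which() is A
print(A().which().__name__)  # A
```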
We're starting a discussion in Fedora about setting the default shebang for
system Python executables and/or daemons to python -s or python -Es (or ?).
Basically, we want to avoid locally installed items causing security
issues or other bad behavior, without too adversely affecting users'
ability to work around issues or intentionally alter behavior.
It would be good to get some feedback from the broader python community
before implementing anything, so I'm asking for feedback here.
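For reference on what the proposed flags do: -E makes Python ignore PYTHON* environment variables (PYTHONPATH, PYTHONSTARTUP, ...), and -s keeps the per-user site-packages directory off sys.path. A quick sketch of the effect ("/tmp/injected" is a made-up directory):

```python
import os
import subprocess
import sys

# Launch a child interpreter with -E and -s: PYTHONPATH is ignored, so
# the injected directory never reaches sys.path.
env = dict(os.environ, PYTHONPATH="/tmp/injected")
out = subprocess.run(
    [sys.executable, "-E", "-s", "-c",
     "import sys; print('/tmp/injected' in sys.path)"],
    env=env, capture_output=True, text=True,
)
print(out.stdout.strip())  # False
```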