Hi,
We have a "PPC64 AIX 3.x" buildbot slave which fails on cloning the
GitHub repository: "SSL certificate problem: unable to get local
issuer certificate". It started to fail around Feb 11, 2017 (Build
#294), probably when buildbots moved to GitHub, after CPython moved to
GitHub.
First build which failed:
http://buildbot.python.org/all/builders/PPC64%20AIX%203.x/builds/294
Moreover, some tests have been failing for at least 2 years on AIX. Some examples:
* test_locale.test_strcoll_with_diacritic()
* test_socket.testIPv4toString()
* test_strptime
Last build which ran unit tests:
http://buildbot.python.org/all/builders/PPC64%20AIX%203.x/builds/293/steps/…
For me, the principle of a CI is to detect *regressions*. But the AIX
buildbot is always failing because of known bugs. There are 3 options:
* Find a maintainer who quickly fixes all known bugs. Unlikely.
* Skip tests known to fail on AIX. I know that many core developers
dislike this option, "hiding" bugs.
* Drop the buildbot
My favorite option is to drop the buildbot, since I'm tired of these
red buildbot slaves.
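For what it's worth, the "skip" option is cheap to express with unittest's
skipIf decorator. A sketch (the test name here is invented, not one of the
actual failing tests):

```python
import sys
import unittest

class ExampleTests(unittest.TestCase):
    # Hypothetical test name: a test known to fail on AIX is reported
    # as a skip, not a failure, on that platform only.
    @unittest.skipIf(sys.platform.startswith("aix"), "known AIX failure")
    def test_known_aix_failure(self):
        self.assertTrue(True)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ExampleTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

On AIX the test shows up in result.skipped with the given reason, so the
buildbot stays green while the bug remains visible in the test report.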
I added David Edelsohn to this email: he owns the buildbot, and I know
that he wants the best possible AIX support in Python.
The question is also: what level of support do we provide per
platform? Full support, "best effort", or no support?
* Full support requires active maintainers, a CI with tests which pass, etc.
* "Best effort": fix bugs when someone complains and someone (else?)
provides a fix
* No support: reject proposed patches that add partial support for a platform.
Victor
Hello,
I already tried to ask on python-list, see
https://mail.python.org/pipermail/python-list/2017-March/720037.html
but it seems that this list is not for technical questions.
Let me resend my question to python-dev. Please tell me if I should not spam
this list with newbie-ish questions, and thanks in advance.
---------------------------------------------------------------------------
I started to learn python a few days ago and I am trying to understand what
__del__() actually does. https://docs.python.org/3/reference/datamodel.html
says:
object.__del__(self)
...
Note that it is possible (though not recommended!) for the __del__()
method to postpone destruction of the instance by creating a new
reference to it. It may then be called at a later time when this new
reference is deleted.
However, this trivial test-case
    class C:
        def __del__(self):
            print("DEL")
            global X
            X = self

    C()
    print(X)
    X = 0
    print(X)
shows that __del__ is called only once; it is not called again after "X = 0":
DEL
<__main__.C object at 0x7f067695f4a8>
0
(Just in case, I verified later that this object actually goes away and its
memory is freed, so the problem is not that it still has a reference).
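One way to confirm both points from pure Python is a small weakref check
(a sketch of my own; this is the at-most-once finalization behaviour
introduced by PEP 442):

```python
import weakref

calls = []

class C:
    def __del__(self):
        calls.append("DEL")
        global X
        X = self          # resurrect: create a new reference from the finalizer

C()                        # the temporary dies; __del__ runs and resurrects it
ref = weakref.ref(X)
X = 0                      # drop the resurrected reference; the object is freed
assert ref() is None       # it is really gone...
assert calls == ["DEL"]    # ...yet __del__ ran only once
```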
I've cloned https://github.com/python/cpython.git and everything looks clear
at first glance (but let me repeat that I am very new to python):
PyObject_CallFinalizerFromDealloc() calls PyObject_CallFinalizer()
which finally calls "__del__" method in slot_tp_finalize(), then it
notices that "X = self" creates the new reference and does:
    /* tp_finalize resurrected it! Make it look like the original Py_DECREF
     * never happened.
     */
    refcnt = self->ob_refcnt;
    _Py_NewReference(self);
    self->ob_refcnt = refcnt;
However, PyObject_CallFinalizer() also does _PyGC_SET_FINALIZED(self, 1)
and that is why __del__ is not called again after "X = 0":
    /* tp_finalize should only be called once. */
    if (PyType_IS_GC(tp) && _PyGC_FINALIZED(self))
        return;
The comment and the code are very explicit, so this does not look like a
bug in cpython.
Probably the docs should be fixed?
Or is this code actually wrong? The test case works as documented if I
remove _PyGC_SET_FINALIZED() in PyObject_CallFinalizer(), or add another
_PyGC_SET_FINALIZED(self, 0) into PyObject_CallFinalizerFromDealloc()
after _Py_NewReference(self), but yes, yes, I understand that this is
not correct and won't really help.
Oleg.
I am building a Python JIT, so I want to change the interp->eval_frame to
my own function.
I built a C++ library which contains an EvalFrame function, and then used dlopen
and dlsym to load it. It looks like this:
    extern "C" PyObject *EvalFrame(PyFrameObject *f, int throwflag) {
        return _PyEval_EvalFrameDefault(f, throwflag);
    }
I added the following code to Python/pylifecycle.c in the function
_Py_InitializeEx_Private() (Python version is 3.6.1):
    void *pyjit = NULL;
    pyjit = dlopen("../cmake-build-debug/libPubbon.dylib", 0);
    if (pyjit != NULL) {
        interp->eval_frame = (_PyFrameEvalFunction)dlsym(pyjit, "EvalFrame");
        //interp->eval_frame = _PyEval_EvalFrameDefault;
    }
Then something strange happened. I used LLDB to trace the variables. When
it ran in EvalFrame, the address of the f pointer didn't change, but
f->f_lineno changed.
Why didn't the address of the pointer change while its contents did?
I am working on Mac OS X and Python 3.6.1. I want to know how to replace
_PyEval_EvalFrameDefault in interp->eval_frame with my own function.
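(Not the original poster, but on the f_lineno point: a frame object is
mutated in place while its code executes, so the pointer staying fixed
while f->f_lineno moves is expected. The same effect is visible from pure
Python; a sketch of mine using sys._getframe:)

```python
import sys

def demo():
    f = sys._getframe()       # the frame object for this very call
    ident = id(f)             # identity (address) of the frame
    first = f.f_lineno        # current line number...
    second = f.f_lineno       # ...advances as execution proceeds
    assert id(f) == ident     # same frame object the whole time
    return first, second

first, second = demo()
assert second > first         # contents changed, identity did not
```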