webmaster has already heard from 4 people who cannot install it.
I sent them to the bug tracker or to python-list, but they seem
not to have gone to either place. Is there some guide I should be
sending them to, a 'how to debug installation problems' document?
Laura
Hi,
My question is simple: do we officially support Solaris and/or OpenIndiana?
Jesus Cea runs an OpenIndiana buildbot slave:
http://buildbot.python.org/all/buildslaves/cea-indiana-x86
"Open Indiana 32 bits"
The platform module of Python says "Solaris-2.11"; I don't know the
exact OpenIndiana version.
A lot of unit tests fail on this buildbot with MemoryError. I guess
that it's related to Solaris not allowing overcommit (allocating more
memory than is available on the system), or more simply that the
slave doesn't have enough memory.
There is now an issue which seems specific to OpenIndiana:
http://bugs.python.org/issue27847
It might impact Solaris as well, but the Solaris buildbot has been
offline for "684 builds".
Five years ago, I reported a bug because the curses module of Python 3
no longer builds on Solaris or OpenIndiana. It seems the bug was
never fixed, and the issue is still open:
http://bugs.python.org/issue13552
So my question is whether we officially support Solaris and/or
OpenIndiana. If yes, how can we fix issues when the only buildbot
slave we have hits memory errors, and we have no SSH access to the
server?
Solaris doesn't seem to be officially supported in Python, so I
suggest dropping the OpenIndiana buildbot (which has been failing for
at least two years) and closing all Solaris issues as "WONTFIX".
Victor
Classes that don't define the __format__ method for custom PEP 3101
formatting inherit it from their parents.
Originally the object.__format__ method was designed as [1]:
    def __format__(self, format_spec):
        return format(str(self), format_spec)
An instance is converted to a string, and the resulting string is
formatted according to the format specifier.
Later this design was reconsidered [2], and now object.__format__ is
equivalent to:
    def __format__(self, format_spec):
        assert format_spec == ''
        return format(str(self), '')
A non-empty format specifier is rejected.
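For instance, with any class that doesn't override __format__ (the
exact TypeError message varies between versions):

    class Plain:
        pass

    format(Plain())         # fine: falls back to str(self)
    format(Plain(), '10')   # TypeError: non-empty format spec rejected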
But why call format() on the resulting string? Why not return the
resulting string as is? object.__format__ could be simpler (not just
the implementation, but also for understanding):
    def __format__(self, format_spec):
        assert format_spec == ''
        return str(self)
This can change the behaviour in a corner case: str(self) can return
not an exact str, but a str subclass with an overloaded __format__.
But I think we can ignore such a subtle difference.
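For illustration, here is a sketch of that corner case (the class
names are made up):

    class Shouty(str):
        def __format__(self, format_spec):
            return str.upper(self)

    class Widget:
        def __str__(self):
            return Shouty('widget')

    # Today object.__format__ calls format() on the Shouty instance,
    # which dispatches to Shouty.__format__: format(Widget()) == 'WIDGET'.
    # With the simplified version, str(self) would be returned as is,
    # and format(Widget()) would be 'widget'.
    print(format(Widget()))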
[1] https://www.python.org/dev/peps/pep-3101/
[2] http://bugs.python.org/issue7994
Hi,
The PyObject_CallFunction() function has a special case when the
format string is "O", to pass exactly one Python object:
* If the argument is a tuple, the tuple is unpacked: it behaves like func(*arg)
* Otherwise, it behaves like func(arg)
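In Python terms, the dispatch is roughly this (a simplified model of
the behaviour described above, not the actual C implementation):

    def call_function(func, built_value):
        # built_value is what Py_BuildValue() produced from the format
        # string and the C arguments
        if isinstance(built_value, tuple):
            return func(*built_value)   # the tuple gets unpacked
        return func(built_value)        # passed as the single argument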
This case is not documented in the C API!
https://docs.python.org/dev/c-api/object.html#c.PyObject_CallFunction
The following C functions have the special case:
* PyObject_CallFunction(), _PyObject_CallFunction_SizeT()
* PyObject_CallMethod(), _PyObject_CallMethod_SizeT()
* _PyObject_CallMethodId(), _PyObject_CallMethodId_SizeT()
I guess that it's a side effect of the implementation: the code uses
Py_BuildValue() and then checks if the value is a tuple or not.
Py_BuildValue() is a little bit surprising:
* "i" creates an integer object
* "ii" creates a tuple
* "(i)" and "(ii)" create a tuple.
Whether you get a tuple or not depends on the length of the format
string, which is not obvious when you have nested tuples like "O(OO)".
Because of the special case, passing a tuple as the only argument
requires writing "((...))" instead of just "(...)".
In the past, this special behaviour caused a bug in
generator.send(arg), probably because the author of the C code
implementing generator.send() wasn't aware of the special case. See
the issue:
http://bugs.python.org/issue21209
I found code using "O" format in the new _asyncio module, and I'm
quite sure that unpacking special case is not expected. So I opened an
issue:
http://bugs.python.org/issue28920
In the last few days, I patched the functions of the
PyObject_CallFunction() family to use fast calls internally. I
implemented the special case to keep backward compatibility.
I replaced a lot of code using PyObject_CallFunction() with
PyObject_CallFunctionObjArgs() when the format string was made only
of "O" formats (PyObject* arguments). I made this change to optimize
the code, but indirectly it also avoids the special case for code
which used exactly the "O" format. See:
http://bugs.python.org/issue28915
When I made these changes, I found some functions which rely on the
unpacking feature!
* time_strptime() (change 49a7fdc0d40a)
* unpickle() of _ctypes (change ceb22b8f6d32)
I don't really know what we are supposed to do. I don't think that
changing the behaviour of PyObject_CallFunction() to remove the
special case is a good idea. It would be an obviously
backward-incompatible change which could break applications.
I guess that the minimum is to document the special case?
Victor
Hi all,
It's an old feature of the weakref API that you can define an
arbitrary callback to be invoked when the referenced object dies, and
that when this callback is invoked, it gets handed the weakref wrapper
object -- BUT, only after it's been cleared, so that the callback
can't access the originally referenced object. (I.e., this callback
will never raise: def callback(ref): assert ref() is None.)
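A minimal demonstration (on CPython, where refcounting collects the
object as soon as its last reference goes away):

    import weakref

    def callback(ref):
        # the wrapper has already been cleared when the callback runs
        assert ref() is None
        print('referent is gone')

    class Obj:
        pass

    o = Obj()
    r = weakref.ref(o, callback)
    del o   # prints 'referent is gone'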
AFAICT the original motivation for this seems to have been that if
the weakref callback could get at the object, then the weakref
callback would effectively be another finalizer like __del__, and
finalizers and reference cycles don't mix, so weakref callbacks can't
be finalizers.
There's a long document from the 2.4 days about all the terrible
things that could happen if arbitrary code like callbacks could get
unfettered access to cyclic isolates at weakref cleanup time [1].
But that was 2.4. In the meantime, of course, PEP 442 fixed it so
that finalizers and weakrefs mix just fine. In fact, weakref callbacks
are now run *before* __del__ methods [2], so clearly it's now okay for
arbitrary code to touch the objects during that phase of the GC -- at
least in principle.
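The ordering is easy to observe (a small check, assuming CPython 3.4+
with PEP 442):

    import gc
    import weakref

    class A:
        def __del__(self):
            print('__del__ runs second')

    a = A()
    r = weakref.ref(a, lambda ref: print('weakref callback runs first'))
    a.cycle = a    # reference cycle: only the cyclic GC can collect it
    del a
    gc.collect()   # the callback fires before __del__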
So what I'm wondering is, would anything terrible happen if we started
passing still-live weakrefs into weakref callbacks, and then clearing
them afterwards? (i.e. making step 1 of the PEP 442 cleanup order be
"run callbacks and then clear weakrefs", instead of the current "clear
weakrefs and then run callbacks"). I skimmed through the PEP 442
discussion, and AFAICT the rationale for keeping the old weakref
behavior was just that no-one could be bothered to mess with it [3].
[The motivation for my question is partly curiosity, and partly that
in the discussion about how to handle GC for async objects, it
occurred to me that it might be very nice if arbitrary classes that
needed access to the event loop during cleanup could do something like
    def __init__(self, ...):
        loop = asyncio.get_event_loop()
        loop.gc_register(self)

    # automatically called by the loop when I am GC'ed;
    # async equivalent of __del__
    async def aclose(self):
        ...
Right now something *sort* of like this is possible but it requires a
much more cumbersome API, where every class would have to implement
logic to fetch a cleanup callback from the loop, store it, and then
call it from its __del__ method -- like how PEP 525 does it. Delaying
weakref clearing would make this simpler API possible.]
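For concreteness, that cumbersome pattern looks roughly like this;
get_cleanup_callback() is a made-up name standing in for the PEP
525-style hook machinery:

    import asyncio

    class Resource:
        def __init__(self):
            loop = asyncio.get_event_loop()
            # fetch a cleanup callback from the loop and store it;
            # get_cleanup_callback() is hypothetical
            self._cleanup = loop.get_cleanup_callback(self)

        def __del__(self):
            # hand the object back to the loop for async cleanup
            self._cleanup()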
-n
[1] https://github.com/python/cpython/blob/master/Modules/gc_weakref.txt
[2] https://www.python.org/dev/peps/pep-0442/#id7
[3] https://mail.python.org/pipermail/python-dev/2013-May/126592.html
--
Nathaniel J. Smith -- https://vorpus.org
Hi all,
Well, we finally got that ucs2/ucs4 stuff all sorted out (yay), but I
just learned that there's another CPython build flag that also changes
the ABI: --with-fpectl
Specifically, it seems that if you build CPython with --with-fpectl,
and then use that CPython to build an extension module, and that
extension module uses PyFPE_{START,END}_PROTECT (like e.g. Cython
modules do), and you then try to import that extension module on a
CPython that *wasn't* built with --with-fpectl, then it will crash.
This bug report has more gory details:
https://github.com/numpy/numpy/issues/8415
The reverse is OK -- extensions built in a no-fpectl CPython can be
imported by both no-fpectl and yes-fpectl CPythons.
So one consequence is easy -- we need to make a note in the manylinux1
definition saying that you have to use a no-fpectl CPython build to
make manylinux1 wheels, because that's the only way to guarantee
compatibility. I just submitted a PR for this:
https://github.com/python/peps/pull/166
(Fortunately the manylinux1 docker images are already set up this way,
so in practice I think everyone is already doing this.)
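A quick way to check which kind of build you have is to try importing
the fpectl module, which is only built when CPython is configured
--with-fpectl:

    try:
        import fpectl   # only present on --with-fpectl builds
        print('yes-fpectl build')
    except ImportError:
        print('no-fpectl build')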
But... in general this is kind of an unfortunate issue, and it's not
restricted to Linux. Should we do something? Some options:
1. Add another ABI flag -- e.g. cp35dmf vs. cp35dm? Though AFAICT the
offending macros are actually part of the allegedly stable ABI (!!),
so I guess this isn't really a solution. (I'm not 100% confident about
how to tell whether something is part of the stable ABI, but: Python.h
unconditionally includes pyfpe.h, and pyfpe.h doesn't have any
Py_LIMITED_API checks.)
2. Drop support for fpectl entirely in 3.7 on the grounds that it's not
worth the trouble? (The docs have said "don't use this" at the top
forever[1], and after clicking through every hit on github code search
for language = Python and string "turnon_sigfpe" [2], I found exactly
4 non-documentation usages [3], all of which are already broken in
no-fpectl builds.) We obviously can't do this in a point release
though, because there are lots of extant extension modules referencing
these symbols.
3. Or maybe make it so that even no-fpectl builds still export the
necessary symbols so that yes-fpectl extensions don't crash on import?
(This has the advantage that it can be done in a point release...)
Thoughts?
-n
[1] https://docs.python.org/2/library/fpectl.html
[2] https://github.com/search?l=Python&p=1&q=turnon_sigfpe&type=Code&utf8=%E2%9…
[3]
https://github.com/podhrmic/JSBSim/blob/36de9ac63c959cef5d7b2c56fb49c1a57fd…
https://github.com/tmbdev/iuprlab/blob/239918b5ec0f8deecbc7c2ec1283a837d11a…
https://github.com/wcs211/BEM-3D-Python/blob/874aaeffc3dac5f698f875478c3acc…
https://github.com/neobonzi/SoundPlagiarism/blob/7cff7f0145217bffb3a3cebd59…
--
Nathaniel J. Smith -- https://vorpus.org
Just a reminder: I'll be tagging 3.5.3 rc1 and 3.4.6 rc1 tomorrow, Jan 1
2017, sometime between 24 and 36 hours from now. Please work quickly if
there's anything you need to get into either of those releases. I'm
hoping that, for once, there are literally no code changes between rc1
and final.
Best wishes for a happy new year,
//arry/
From the documentation:
https://docs.python.org/3/c-api/stable.html
In the C API documentation, API elements that are not part of the
limited API are marked as "Not part of the limited API."
But they aren't.
I prepared a sample patch that adds the notes to Unicode Objects and
Codecs C API (http://bugs.python.org/issue29086). I'm going to add them
to all C API.
What are your thoughts about formatting the note? Should it be before
the description; after the description but before the "deprecated"
directive (as in the patch); or after the first paragraph of the
description? Should it be on a separate line, appended at the end of
the previous paragraph, or should it start the first paragraph (if it
comes before the description)? Maybe introduce a special directive
for it?