On 2018-05-19 11:15, mark wrote:
> PEP 576 aims to fulfill the same goals as PEP 575
(this is a copy of my comments on GitHub before this PEP was official)
Most importantly, changing bound methods of extension types from
builtin_function_or_method to bound_method will yield a performance
loss. It might be possible to mitigate this somewhat by adding specific
optimizations for calling bound_method. However, that would add extra
complexity and it will probably still be slower than the existing code.
And I would also like to know whether it will be possible for custom
built-in function subclasses to implement __get__ to change a function
into a method (like Python functions) and whether/how the LOAD_METHOD
opcode will work in that case.
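For reference, this is how plain Python functions already turn into bound methods via the descriptor protocol; built-in functions currently have no such __get__, which is exactly the gap being discussed (a minimal sketch, assuming current CPython behaviour):

```python
class Spam:
    def method(self):
        return 42

# A plain Python function is a descriptor: attribute access on an
# instance invokes __get__, which returns a bound method object.
func = Spam.__dict__['method']
bound = func.__get__(Spam(), Spam)
print(type(bound).__name__)  # method
print(bound())               # 42

# Built-in functions have no __get__, so they do not bind:
print(hasattr(len, '__get__'))  # False
```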
When I ask for "introspection support", I mean more than just the call
signature. For example, inspect.getfile should also be supported;
currently, it simply raises an exception for built-in functions.
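The current behaviour can be seen directly (a small sketch; the exact exception message varies by version):

```python
import inspect

def py_func():
    pass

# inspect.getfile works for Python functions...
print(inspect.getfile(py_func))

# ...but raises TypeError for built-ins like len.
try:
    inspect.getfile(len)
except TypeError as exc:
    print('TypeError:', exc)
```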
I think it's important to specify the semantics of inspect.isfunction.
Given that you don't mention it, I assume that inspect.isfunction will
continue to return True only for Python functions. But that way, these
new function classes won't behave like Python functions.
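A quick illustration of the distinction as it stands today (assuming current CPython, where the PEP is not implemented):

```python
import inspect

def py_func():
    pass

print(inspect.isfunction(py_func))  # True
print(inspect.isfunction(len))      # False: len is a built-in
print(inspect.isbuiltin(len))       # True
```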
> fully backwards compatible.
I wonder why you think it is "fully backwards compatible". Just like PEP
575, you are changing the classes of certain objects. I think it's
fairer to say that both PEP 575 and PEP 576 might cause minor backwards
compatibility issues. I certainly don't think that PEP 576 is
significantly more backwards compatible than PEP 575.
PS: in your PEP, you write "bound_method" but I guess you mean "method".
PEP 575 proposes to rename "method" to "bound_method".
On 2018-05-16 17:31, Petr Viktorin wrote:
> The larger a change is, the harder it is to understand
I already disagree here...
I'm afraid that you are still confusing the largeness of the *change*
with the complexity of the *result* after the change was implemented.
A change that *removes* complexity should be considered a good thing,
even if it's a large change.
That being said, if you want me to make smaller changes, I could do it.
But I would do it for *you* personally because I'm afraid that other
people might rightly complain that I'm making things too complicated.
So I would certainly like some feedback from others on this point.
> Less disruptive changes tend to have a better backwards compatibility story.
Maybe in very general terms, yes. But I believe that the "disruptive"
changes that I'm making will not contribute to backwards
incompatibility. Adding new ml_flags flags shouldn't break anything and
adding a base class shouldn't either (I doubt that there is code relying
on the fact that type(len).__base__ is object).
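To make the claim concrete, this is the class layout such code would have to rely on today (a sketch of current CPython behaviour, which the proposed base class would change):

```python
# builtin_function_or_method currently inherits directly from object;
# inserting a base_function base class would change __base__.
print(type(len).__name__)            # builtin_function_or_method
print(type(len).__base__ is object)  # True
```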
In my opinion, the one change that is most likely to cause backwards
compatibility problems is changing the type of bound methods of
extension types. And that change is even in the less disruptive PEP 576.
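The type difference in question is easy to observe today (a sketch, assuming current CPython; both PEPs would make the first case report 'method' or similar):

```python
# Bound methods of extension types are builtin_function_or_method...
print(type([].append).__name__)  # builtin_function_or_method

class C:
    def m(self):
        pass

# ...while bound methods of Python classes are the 'method' type.
print(type(C().m).__name__)      # method
```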
> Mark Shannon has an upcoming PEP with an alternative to some of the
I'm looking forward to a serious discussion about that. However, from a
first reading, I'm not very optimistic about its performance implications.
> Currently, the "outside" of a function (how it looks when introspected)
> is tied to the "inside" (what happens internally when it's called).
> Can we better enable pydoc/IPython developers to tackle introspection
> problems without wading deep in the internals and call optimizations?
I proposed complete decoupling in https://bugs.python.org/issue30071 and
that was rejected. Anyway, decoupling of introspection is not the
essence of this PEP. This PEP is really about allowing custom built-in
function subclasses. That's the hard part where CPython internals come
in. So I suggest that we leave the discussion about introspection and
focus on the function classes.
> But, it still has to inherit from base_function to "look like a
> function". Can we remove that limitation in favor of duck typing?
Duck typing is a Python thing; I don't know what "duck typing" would
mean at the C level. We could replace the existing isinstance(...,
base_function) check by a different fast check. For example, we
(together with the Cython devs) have been pondering a new type
field, say tp_cfunctionoffset, pointing to a certain C field in the
object structure. That would work, but it would not be so fundamentally
different from the current PEP.
*PS*: On Friday, I'm leaving for two weeks of holidays. So if I don't
reply to comments on PEP 575 or alternative proposals, don't take it as
a lack of interest.
Several of my PRs as well as local tests have started failing recently.
On my local Fedora 27 machine, four sendfile-related tests in
test_asyncio's BaseLoopSockSendfileTests suite are failing reproducibly.
For example Travis CI job
https://travis-ci.org/python/cpython/jobs/380852981 fails in:
* test_ignore (test.test_multiprocessing_forkserver.TestIgnoreEINTR)
Could somebody have a look, please?
On Fri, May 18, 2018 at 1:13 PM, Steve Dower <steve.dower(a)python.org> wrote:
> According to the VSTS dev team, an easy “rerun this build” button and
> filtering by changed paths are coming soon, which should clean things up.
If you're talking to them, please ask them to make sure that the
"rerun this build" button doesn't erase the old log. (That's what it
does on Travis. Appveyor is better.) The problem is that when you have
a flaky/intermittent failure, your todo list is always (a) rerun the
build so at least it's not blocking whatever this unrelated change is,
(b) file some sort of bug, or comment on some existing bug, and link
to the log to help track down the intermittent failure. If you click
the "rebuild" button on Travis, then it solves (a), while deleting the
information you need for (b) – and for rare intermittent bugs you
might not have much information to go on besides that build log.
Nathaniel J. Smith -- https://vorpus.org
On 5/18/2018 4:13 PM, Steve Dower wrote:
> Close/reopen PR is the best way to trigger a rebuild right now.
It may be the way to retrigger VSTS, but if one wants to merge and
either Travis or AppVeyor has passed, tossing that success away is a
foolish thing to do. Either may fail on a rebuild.
Terry Jan Reedy
On Fri, May 18, 2018 at 4:15 PM Steve Dower <steve.dower(a)python.org> wrote:
> The asyncio instability is apparently really hard to fix. There were 2-3
> people looking into it yesterday on one of the other systems, but
> apparently we haven’t solved it yet (my guess is lingering state from a
> previous test). The multissl script was my fault for not realising that we
> don’t use it on 3.6 builds, but that should be fixed already. Close/reopen
> PR is the best way to trigger a rebuild right now.
I asked Andrew Svetlov to help with asyncio CI triage. Hopefully we'll
resolve most of them early next week.
These both look like VSTS infrastructure falling over on PRs:
I don't see anywhere that gives information about the failures. (*)
These CI failures on different platforms are both for the same
documentation-only change, on different branches. The same change passed
on the other branches and platforms.
(*) I refuse to "Download logs as a zip file". I'm in a web browser; if
the information I might need is potentially buried somewhere in a zip file
of logs, that is a waste of my time. I'm not going to do it. The web UI
*needs* to find and display the relevant failure info from any logs.
Stephan Houben noticed that Python apparently allows identifiers to be
keywords, if you use Unicode "mathematical bold" letters. His
explanation is that the identifier is normalised, but not until after
keywords are checked for. So this works:
class Spam:
    locals()['if'] = 1

Spam.𝐢𝐟  # U+1D422 U+1D41F
# returns 1
Of course Spam.if fails with SyntaxError.
Should this work? Is this a bug, a feature, or an accident of
implementation we can ignore?
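The mechanism behind this can be reproduced with unicodedata: PEP 3131 says identifiers are NFKC-normalised, and NFKC maps the mathematical bold letters to plain ASCII, but the keyword check happens on the raw spelling first (a sketch of the normalisation step only):

```python
import unicodedata

# U+1D422 MATHEMATICAL BOLD SMALL I, U+1D41F MATHEMATICAL BOLD SMALL F
bold_if = '\U0001D422\U0001D41F'

# NFKC normalisation turns the bold spelling into the keyword 'if'.
print(unicodedata.normalize('NFKC', bold_if))  # if
```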