On 2018-07-07 10:55, Serhiy Storchaka wrote:
> The first part of
> handling arguments can be made outside of the C function, by the calling code.
Sure, it could be done but I don't see the advantage. I don't think you
will gain performance because you are just moving code from one place to
another. And how do you plan to deal with *args and **kwds in your
proposal? You'll need to make sure that this doesn't become slower.
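As a rough pure-Python analogy (not CPython's actual machinery), the caller-side step Serhiy describes could look like this: keywords and defaults are resolved before the call, so the callee only ever sees flat positional arguments. All names here are illustrative.

```python
import inspect

def add(x, y, z=0):
    # Callee written against a positional-only fast path.
    return x + y + z

def call_prebinding(func, *args, **kwargs):
    # Hypothetical caller-side step: resolve keywords and defaults up
    # front, then invoke the callee with positional arguments only.
    bound = inspect.signature(func).bind(*args, **kwargs)
    bound.apply_defaults()
    return func(*bound.args)

print(call_prebinding(add, 1, z=3, y=2))  # prints 6
```

The open question in the thread is exactly the one raised above: whether doing this binding at the call site is any cheaper than doing it inside the callee, especially for *args and **kwds.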
We seem to have a plethora of PEPs where we really ought to have one (or two).
Traditionally, when writing a new piece of software, one gathered
requirements before implementing the code. Let us return to that tradition.
IMO, mailing lists are a terrible way to do software design, but a good
way to gather requirements, as they make it less likely that someone will be overlooked.
So, let us gather the requirements for a new calling API.
Here are my starting suggestions:
1. The new API should be fully backwards compatible and shouldn't break the ABI.
2. The new API should be used internally, so that 3rd party extensions
are not second-class citizens in terms of call performance.
3. The new API should not prevent 3rd party extensions from having full
introspection capabilities, supporting keyword arguments, or any other
feature supported by Python functions.
4. The implementation should not exceed D lines of code delta and T
lines of code in total size. I would suggest +200 and 1000 for D and T
respectively (or is that too restrictive?).
5. It should speed up CPython for the standard benchmark suite.
6. It should be understandable.
What am I missing? Comments from the maintainers of Cython and other
similar tools would be appreciated.
After review from Barry & Guido, I've just merged an update to PEP 1
and the PEP index generator that separates out provisionally accepted
PEPs to their own state in the PEP flow:
To date, that status has been reported as "Accepted" both in the
original PEPs and in the main PEP index; now it is reported as
"Provisional" in both places.
P.S. As part of this, I switched the flow diagram in PEP 1 from a PNG
to an SVG, which seems to have confused python.org's image rendering:
I'm already looking into it, but am open to tips from folks more
familiar with the website's rendering machinery.
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia
On 2018-07-07 14:54, Mark Shannon wrote:
> There is a minimal implementation and has been for a while.
> There is a link at the bottom of the PEP.
Yes, I saw that but the implementation does not correspond to the PEP.
In particular, this sentence from the PEP has not been implemented:
When binding a method_descriptor instance to an instance of its owning
class, a bound_method will be created instead of a
builtin_function_or_method.
It's not clear to me whether you still want to implement that or whether
it should be dropped from the PEP.
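The current behavior that this sentence would change can be observed from Python level: today, binding a method_descriptor yields a builtin_function_or_method rather than a bound method object.

```python
class A:
    def method(self):
        return 42

# For Python classes, the descriptor protocol produces a 'method':
assert type(A.__dict__["method"]).__name__ == "function"
assert type(A().method).__name__ == "method"

# For builtins today, binding a method_descriptor yields a
# builtin_function_or_method instead of a bound method:
assert type(str.upper).__name__ == "method_descriptor"
assert type("abc".upper).__name__ == "builtin_function_or_method"
```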
> PEP 576 adds a new calling convention which can be used by *any* object.
> Seems quite extensible to me.
Yes and no. Yes, it can do anything. But because it can do anything,
callers cannot optimize certain special cases. For example, in PEP 576
you need an extra flag Py_TPFLAGS_FUNCTION_DESCRIPTOR because your
protocol doesn't specify anything about __get__. Imagine that you want
to support more optimizations like that in the future, how do you plan
to do that? Of course, you can always add more stuff to PyTypeObject,
but a separate structure like what I propose in PEP 580 might make more
sense.
On 06.07.2018 1:40, Guido van Rossum wrote:
> Thanks you for writing up a proposal. There have been many proposals
> made, including 'EXPR as NAME', similar to yours. It even has a small
> section in the PEP:
> https://www.python.org/dev/peps/pep-0572/#alternative-spellings. It's
> really hard to choose between alternatives, but all things considered
> I have decided in favor of `NAME := EXPR` instead. Your efforts are
> appreciated but you would just be wasting your time if you wrote a
> PEP. If you're interested in helping out, would you be interested in
> working on the implementation of PEP 572?
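For reference, here is how the accepted spelling reads in practice (an illustrative example, not one taken from the PEP):

```python
data = [1, 2, 3, 4]

# Accepted spelling (PEP 572): NAME := EXPR binds a name
# inside an expression.
if (n := len(data)) > 3:
    result = f"too long ({n} elements)"

# The 'EXPR as NAME' alternative would have read:
#   if len(data) as n > 3: ...
assert result == "too long (4 elements)"
```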
Maybe we should call for subj? Not a day, most probably, but rather however
much time is needed.
AFAICS, all the arguments have already been told and retold. So we
should probably give Guido some peace of mind until he officially
accepts the PEP or whatever he decides.
On 2018-07-05 21:57, Guido van Rossum wrote:
> Would it be possible to get outside experts to help?
I don't understand what you mean: to help with what? Designing the PEP?
Discussing the PEP? Accepting the PEP? Lobbying Python core devs?
The Cython developers (in particular Stefan Behnel) certainly support my
work. I have talked with them in person at a workshop, they have posted a
few emails to python-dev, and they have given me some personal comments
about PEP 580.
As for NumPy, one obvious place where some ideas of PEP 579 could be
applied is ufuncs. However, typically ufuncs are not *called* in tight
loops, they implement tight loops internally. So I don't think that
there is a pressing need for anything like PEP 580 in NumPy.
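To illustrate that point without NumPy: a ufunc-style callable is entered once and loops internally, so per-call overhead is paid once rather than once per element. This is a sketch of the shape of the problem, not of NumPy's actual implementation.

```python
def add1(x):
    # Scalar function: calling this per element pays Python call
    # overhead len(data) times.
    return x + 1

def add1_ufunc_style(values):
    # Ufunc-style: a single call; the loop runs inside the callable,
    # so the cost of entering it is amortized over all elements.
    return [v + 1 for v in values]

data = list(range(5))
assert [add1(v) for v in data] == add1_ufunc_style(data) == [1, 2, 3, 4, 5]
```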
On 2018-07-06 23:12, Guido van Rossum wrote:
> It's your PEP. And you seem to be struggling with something. But I can't
> tell quite what it is you're struggling with.
To be perfectly honest (no hard feelings though!): what I'm struggling
with is getting feedback (either positive or negative) from core devs
about the actual PEP 580.
> At the same time I assume you want your PEP accepted.
As I also said during the PEP 575 discussion, my real goal is to solve a
concrete problem, not to push my personal PEP. I still think that PEP
580 is the best solution but I welcome other suggestions.
> And how do they feel about PEP 576? I'd like to see some actual debate
> of the pros and cons of the details of PEP 576 vs. PEP 580.
I started this thread to do precisely that.
My opinion: PEP 580 has zero performance cost, while PEP 576 does make
performance for bound methods worse (there is no reference
implementation of the new PEP 576 yet, so that's hard to quantify for
now). PEP 580 is also more future-proof: it defines a new protocol which
can easily be extended in the future. PEP 576 just builds on PyMethodDef
which cannot be extended because of ABI compatibility (putting
__text_signature__ and __doc__ in the same C string is a good symptom of
that). This extensibility is important because I want PEP 580 to be the
first in a series of PEPs working out this new protocol. See PEP 579 for
the bigger picture.
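The symptom referred to above is visible from Python: CPython carves both attributes of a builtin out of the single ml_doc C string in PyMethodDef, splitting on a "--" line (the part before it becomes __text_signature__, the part after it becomes __doc__).

```python
# Both attributes of a builtin come from one C string:
print(len.__text_signature__)        # e.g. '(obj, /)'
print(len.__doc__.splitlines()[0])   # first line of the docstring

assert "obj" in len.__text_signature__
assert isinstance(len.__doc__, str)
```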
One thing that might count against PEP 580 is that it defines a whole
new protocol, which could be seen as too complicated. However, it must
be this complicated because it is meant to generalize the current
behavior and optimizations of built-in functions and methods. There are
lots of little tricks currently in CPython that must be "ported" to the
new protocol.
> OK, so is it your claim that the NumPy developers don't care about which
> one of these PEPs is accepted or even whether one is accepted at all?
I don't know, I haven't contacted any NumPy devs yet, so that was just
my personal feeling. These PEPs are about optimizing callables and NumPy
isn't really about callables. I think that the audience for PEP 580 is
mostly compilers (Cython for sure but possibly also Pythran, numba,
cppyy, ...). Also certain C classes like functools.lru_cache could
benefit from it.
> Yet earlier in
> *this* thread you seemed to claim that PEP 580 requires changes to METH_FASTCALL.
I don't know what you mean by that. But maybe it's also confusing
because "FASTCALL" can mean different things: it can refer to a
PyMethodDef (used by builtin_function_or_method and method_descriptor)
with the METH_FASTCALL flag set. It can also refer to a more general API
like _PyCFunction_FastCallKeywords, which supports METH_FASTCALL but
also other calling conventions like METH_VARARGS.
I don't think that METH_FASTCALL should be changed (and PEP 580 isn't
really about that at all). For the latter, I'm suggesting some API
changes but nothing fundamental: mainly replacing the existing private
functions (_PyCFunction_FastCallKeywords, _PyCFunction_FastCallDict,
_PyMethodDef_RawFastCallDict, among others) with one public function
PyCCall_FASTCALL.
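A rough Python model of that unification (all names hypothetical, mimicking rather than reproducing the C API): several convention-specific helpers collapse into one public dispatcher.

```python
# Hypothetical flags mirroring the C calling conventions.
METH_VARARGS = 1
METH_FASTCALL = 2

def cc_call(flags, func, args, kwargs):
    # Single public entry point, analogous in spirit to the proposed
    # PyCCall_FASTCALL, replacing several per-convention helpers.
    if flags == METH_FASTCALL:
        # Fastcall-style: positional args passed as a flat sequence.
        return func(*args, **kwargs)
    if flags == METH_VARARGS:
        # Varargs-style: args packed into a single tuple object.
        return func(tuple(args), kwargs)
    raise ValueError("unknown calling convention")

def impl_varargs(args, kwargs):
    return sum(args) + kwargs.get("bonus", 0)

def impl_fastcall(*args, bonus=0):
    return sum(args) + bonus

assert cc_call(METH_VARARGS, impl_varargs, (1, 2), {"bonus": 3}) == 6
assert cc_call(METH_FASTCALL, impl_fastcall, (1, 2), {"bonus": 3}) == 6
```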
Hopefully this clears some things up,
On Fri, Jul 6, 2018 at 12:48 PM, Alexander Belopolsky wrote:
> Python really has a strong C legacy and this is the area where I agree that
> C designers made a mistake by picking a symmetric symbol (=) for an
> asymmetric operation. On top of that, they picked an asymmetric digraph (!=)
> for a symmetric operation as well and Python (unfortunately) followed the
> crowd and ditched a much better alternative (<>). My only hope is that
> Python 4.0 will allow ← to be used in place of either = or :=. :-)
Interesting. Looking over Python's binary operators, we have:
|, ^, &, +, *: symmetric (on ints)
-, /, //, **: asymmetric
<, >: mirrored operations
<=, >=: mirrored operations but not reflected
<<, >>: non-mirrored asymmetric
and, or: technically asymmetric but often treated as symmetric
in, not in: asymmetric
is, is not: symmetric
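The (a)symmetries in this list can be checked mechanically:

```python
a, b = 3, 5

# Symmetric on ints:
assert a + b == b + a and a * b == b * a
assert (a | b) == (b | a) and (a & b) == (b & a) and (a ^ b) == (b ^ a)

# Asymmetric:
assert a - b != b - a and a / b != b / a and a ** b != b ** a

# Mirrored comparison pairs:
assert (a < b) == (b > a) and (a <= b) == (b >= a)
```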
Which ones ought to have symmetric symbols, in an ideal world? Should
<= and >= be proper mirrors of each other? Are << and >> confusing? Is
it a problem that the ** operator is most decidedly asymmetric?
Personally, I'm very happy that the operators use the same symbols
that they do in other languages - U+002B PLUS SIGN means addition, for
instance - and everything else is secondary. But maybe this is one of
those "hidden elegances" that you're generally not *consciously* aware
of, but which makes things "feel right", like how Disney's "Moana" has
freedom to the right of the screen and duty to the left. Are there
languages where symmetric operations are always represented with
symmetric symbols and vice versa?
Apparently I have made it into the "club of three who don't care much about
opinions of others" for the crime of a single +0.5 for PEP-572, without
participating in the discussion at all (neither did Jason).
This is a new high for Twitter gossip. Well done. Perhaps in the next vote
the politbureau can indicate the intended outcome beforehand, so we know how
to vote.