I've been trying to figure out how to access the archives programmatically.
I'm sure this is easy once you know, but googling various things hasn't
worked. What I want to do is graph the number of messages about PEP 572
over time. (Or has someone already done that?)
I installed GNU Mailman, and downloaded the gzip'ed archives for a number
of months and unzipped them, and I suspect that there's some way to get
them all into a single database, but it hasn't jumped out at me. If I
count the "Message-ID" lines, the "Subject:" lines, and the "\nFrom " lines
in one of those text files, I get slightly different numbers for each.
Alternatively, maybe they're *already* in a database, and I just need API
access to do the querying? Can someone help me out?
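My best guess so far is that the unzipped pipermail files are close to mbox
format, so the stdlib mailbox module might be able to parse them directly
(which would also explain the mismatched line counts: replies repeat
"Subject:", and quoted bodies can contain "From " lines). Something like this
sketch, assuming a downloaded month named like 2018-July.txt, untested against
the real archives:

```python
import mailbox
from collections import Counter
from email.utils import parsedate_to_datetime

def count_pep_messages(path, needle="PEP 572"):
    """Count messages per day whose Subject line mentions `needle`."""
    per_day = Counter()
    for msg in mailbox.mbox(path):
        if needle in (msg.get("Subject") or ""):
            when = parsedate_to_datetime(msg["Date"])
            per_day[when.date()] += 1
    return per_day

# e.g. per_day = count_pep_messages("2018-July.txt")
```

Running each month's file through this and merging the Counters would give the
per-day series to graph.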
On 2018-07-31 12:56, Victor Stinner wrote:
> We try to make CPython build as simple as possible. I'm quite sure
> that Cython relies on the stdlib.
It does rely on modules like "re" and "functools".
> Would depending on Cython open a
> chicken-and-egg issue?
Yes, that's a problem, but it's not unsolvable. For example, we could use
the PEP 399 pure Python modules for running Cython. Or we could keep
certain "core" C modules (which are used by Cython) implemented directly in C.
Note that Cython is not all-or-nothing: it is easy to mix pure Python
modules, Cython modules and pure C modules. You can also combine pure C
code and Cython code in the same module.
Anyway, I know that this is probably not going to happen, but I just
wanted to bring it up in case people would find it a great idea. But
maybe not many CPython core developers actually know and use Cython?
> It would be nice to be able to use something to "generate" C
> extensions, maybe even from pure Python code.
Cython has a "pure Python mode" which does exactly that. There are
several ways to include typing information, to ensure that a module
remains Python-compatible but can be compiled by Cython in an optimized way.
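For illustration (a minimal sketch of my own, not taken from any particular
project): a module like the following runs unchanged on plain CPython, but
Cython can also compile it, using the annotations to generate typed C code:

```python
# Pure Python mode: this is valid Python as-is; when the same file is
# compiled by Cython, the annotations below become C type declarations.
def harmonic(n: int) -> float:
    """Return the n-th harmonic number 1 + 1/2 + ... + 1/n."""
    total: float = 0.0
    for k in range(1, n + 1):
        total += 1.0 / k
    return total
```

Cython also offers decorators and declarations (@cython.cfunc,
cython.declare, ...) for typing information that plain annotations cannot
express.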
As some people here know, I've been working off and on for a while to
improve CPython's support for Cygwin. I'm motivated in part by a need
to have software working on Python 3.x on Cygwin for the foreseeable
future, preferably with minimal graft. (As an incidental side effect,
Python's test suite--especially of system-level functionality--serves
as an interesting test suite for Cygwin itself too.)
This is partly what motivated PEP 539, although that PEP had the
advantage of benefiting other POSIX-compatible platforms as well (and
in fact was fixing an aspect of CPython that made it unfriendly to
supporting other platforms).
As far as I can tell, the first commit to Python to add any kind of
support for Cygwin was made by Guido (committing a contributed patch)
back in 1999. Since then, bits and pieces have been added for
Cygwin's benefit over time, with varying degrees of impact in terms of
#ifdefs and the like (for the most part Cygwin does not require *much*
in the way of special support, but it does have some differences from
a "normal" POSIX-compliant platform, such as the possibility for
case-insensitive filesystems and executables that end in .exe). I
don't know whether it's ever been "officially supported", but perhaps
someone with a longer memory of the project can comment on that. I'm not
sure whether it was discussed at all in the context of PEP 11.
I have personally put in a fair amount of effort already in either
fixing issues on Cygwin (many of these issues also impact MinGW), or
more often than not fixing issues in the CPython test suite on
Cygwin--these are mostly tests that are broken due to invalid
assumptions about the platform (for example, that there is always a
"root" user with uid=0; this is not the case on Cygwin). In other
cases some tests need to be skipped or worked around due to
platform-specific bugs, and Cygwin is hardly the only case of this in
the test suite.
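To give the flavor of such a fix (a hypothetical example, not an actual patch
from my branch), a test that assumes a uid-0 user can guard itself like this:

```python
import unittest

try:
    import pwd
except ImportError:
    pwd = None  # e.g. on Windows

def root_user_exists():
    # On Cygwin there need not be a passwd entry with uid 0,
    # so look it up instead of assuming it exists.
    if pwd is None:
        return False
    try:
        pwd.getpwuid(0)
    except KeyError:
        return False
    return True

@unittest.skipUnless(root_user_exists(), "no uid-0 user on this platform")
class TestRootAssumptions(unittest.TestCase):
    def test_uid_zero_lookup(self):
        self.assertEqual(pwd.getpwuid(0).pw_uid, 0)
```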
I also have an experimental AppVeyor configuration for running the
tests on Cygwin, as well as an experimental buildbot (not
available on the internet, but working). These currently rely on a
custom branch that includes fixes needed for the test suite to run to
completion without crashing or hanging (e.g.
https://bugs.python.org/issue31885). It would be nice to add this as
an official buildbot, but I'm not sure if it makes sense to do that
until it's "green", or at least not crashing. I have several other
patches to the tests toward this goal, and am currently down to ~22
failing tests.
Before I do any more work on this, however, it would be best to once
and for all clarify the support for Cygwin in CPython, as it has never
been "officially supported" nor unsupported--this way we can avoid
having this discussion every time a patch related to Cygwin comes up.
I could provide some arguments for why I believe Cygwin should be
supported, but before this gets too long I'd just like to float the
idea of having the discussion in the first place. It's also not
exactly clear to me how to meet the standards in PEP 11 for supporting
a platform--in particular it's not clear when a buildbot is considered
"stable", or how to achieve that without getting necessary fixes
merged into the main branch in the first place.
On 2018-07-31 09:36, INADA Naoki wrote:
> I think PEP 580 is understandable only for people who tried to implement
> method objects.
Is this really a problem? Do we expect that all Python developers can
understand all PEPs, especially on a technical subject like this?
To give a different example, I would say that PEP 567 is also quite
technical and not understandable by people who don't care about context
variables.
If PEP 580 is accepted, we can make it very clear in the documentation
that this is only meant for implementing fast function/method classes
and that ordinary "extension writers" can safely skip that part. For
example, you write
> They should learn PyCCallDef and CCALL_* flags in addition
> to PyMethodDef and METH_*.
but that's not true: they can easily NOT learn those flags, just like
they do NOT need to learn about context variables if they don't need them.
>> I would like to stress that PEP 580 was designed for maximum
>> performance, both today and for future extensions (such as calling with
>> native C types).
> I don't know what the word *stress* means here. (Sorry, I'm not good
> enough at English for such a hard discussion.)
> But I want to see a PoC of the real benefit of PEP 580, as I said above.
"to stress" = to draw attention to, to make it clear that
So, PEP 580 is meant to keep all existing optimizations for
functions/methods. It can also be extended in the future (for example,
to support direct C calling) by just adding extra flags and structure
fields to PyCCallDef.
> Hm, my point was to provide an easy and simple way to support FASTCALL
> in callable objects like functools.partial or functools.lru_cache.
That can be done easily with only PEP 580.
On 2018-07-31 15:34, Victor Stinner wrote:
> But I never used Cython nor cffi, so I'm not sure which one is the
> most appropriate depending on the use case.
Cython is a build-time tool, while cffi is a run-time tool.
But Cython does a lot more than just FFI. It is a Python->C compiler
which can be used for FFI but also for many other things.
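To make the build-time/run-time distinction concrete, here is run-time binding
using the stdlib's ctypes (cffi's model is similar in spirit, though its API
differs): no compiler is involved at call time, whereas Cython generates C
code that must be compiled before the program runs. A sketch, assuming a
Unix-like libc:

```python
import ctypes
import ctypes.util

# Run-time FFI: locate libc and bind labs() while the program is
# running -- no compilation step involved.
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)
libc.labs.restype = ctypes.c_long
libc.labs.argtypes = [ctypes.c_long]
print(libc.labs(-42))  # 42
```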
> A major "rewrite" of such large code base is
> very difficult since people want to push new things in parallel. Or is
> it maybe possible to do it incrementally?
Yes, that's not a problem: you can easily mix pure Python code, Cython
code and C code.
I think that this kind of mixing is an important part of Cython's
philosophy: for stuff where you don't care about performance, use
Python. For most stuff where you do care, use Cython. For very
specialized code which cannot easily be translated to Cython, use C.
On 2018-07-31 12:10, INADA Naoki wrote:
> Surely, they should understand they must use CCALL_* flags instead of
> METH_* flags when implementing fast-callable object.
Yes indeed. But implementing a fast-callable object is quite
specialized, not something that ordinary extension writers would care
about. And if they don't care about performance, tp_call remains supported.
More generally: with PEP 580, everything from the documented C API
remains supported. So people can write extensions exactly as before.
They only need to care about PEP 580 if they want to use the new
features that PEP 580 adds (or if they used undocumented internals).
On 2018-07-31 12:10, INADA Naoki wrote:
> After spent several days to read PEP 580 and your implementation, I think
> I can implement it. I think it's not easy, but it's not impossible too.
The signature of "extended_call_ptr" in PEP 576 is almost the same as
the signature of a CCALL_FUNCARG|CCALL_FASTCALL|CCALL_KEYWORDS function
in PEP 580 (the only difference is a "self" argument which can be
ignored if you don't need it).
So, if you can implement it using PEP 576, it's not a big step to
implement it using PEP 580.
On 2018-07-31 11:12, INADA Naoki wrote:
> For me, this is the most important benefit of PEP 580. I can't split
> it from PEP 580.
I want PEP 580 to stand by itself. And you say that it is already
complicated enough, so we should not mix native C calling into it.
PEP 580 is written to allow future extensions like that, but it should
be reviewed without those future extensions.
On 2018-07-31 09:36, INADA Naoki wrote:
> I want to see PoC of direct C calling.
To be honest, there is no implementation plan for this yet. I know that
several people want this feature, so it makes sense to think about it.
For me personally, the main open problem is how to deal with arguments
which may be passed either as a Python object or as a native C type. For
example, in a function call like f(1,2,3), it may happen that the first
argument is really a Python object (so it should be passed as a Python
int) but that the other two arguments are C integers.
> And I think PoC can be implemented without waiting PEP 580.
For one particular class (say CyFunction), yes. But this feature would
be particularly useful for calling between different kinds of C code,
for example between Numba and CPython built-ins, or between Pythran and
That is why I think it should be implemented as an extension of PEP 580.
Anyway, this is a different subject that we should not mix in the
discussion about PEP 580 (that is also why I am replying to this
specific point separately).
First of all, I'm sorry that I forgot to change my mail subject.
(I thought about reserving one more slot for Cython for
further Cython-to-Cython call optimization, but I rejected
the idea because I'm not sure it would really help Cython.)
On Mon, Jul 30, 2018 at 11:55 PM Jeroen Demeyer <J.Demeyer(a)ugent.be> wrote:
> On 2018-07-30 15:35, INADA Naoki wrote:
> > As repeatedly said, PEP 580 is very complicated protocol
> > when just implementing callable object.
> Can you be more concrete what you find complicated? Maybe I can improve
> the PEP to explain it more. Also, I'm open to suggestions to make it
> less complicated.
When thinking from an extension writer's point of view, almost all of PEP 580
is complicated compared to PEP 576. Remember that they don't need a custom
method/function type. So PEP 576/580 are needed only when implementing a
callable object, like itemgetter or lru_cache in the stdlib.
* We continue to use PyMethodDef and METH_* when writing
tp_methods. They should learn PyCCallDef and CCALL_* flags in addition
to PyMethodDef and METH_*.
* In PEP 576, you just put a function pointer in a type slot. On the other
hand, when implementing a callable object with PEP 580, you (1) put a
PyCCallDef somewhere, (2) put a CCallRoot in the instance, and (3) put the
offset of (2) into tp_ccall.
* The difference between cc_parent and cc_self is unclear too.
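As a mock-up of those three steps (hypothetical field names based on my
reading of the PEP 580 draft; the real definitions are C structs in the
reference implementation, and ctypes is used here only to illustrate the
struct-plus-offset layout):

```python
import ctypes

# Hypothetical mirrors of the PEP 580 draft structures, for
# illustration only.
class PyCCallDef(ctypes.Structure):
    _fields_ = [("cc_flags", ctypes.c_uint32),
                ("cc_func", ctypes.c_void_p),
                ("cc_parent", ctypes.c_void_p)]

class CCallRoot(ctypes.Structure):
    _fields_ = [("cr_ccall", ctypes.POINTER(PyCCallDef)),  # (1) points at a PyCCallDef
                ("cr_self", ctypes.c_void_p)]

class MyCallable(ctypes.Structure):
    _fields_ = [("ob_head", ctypes.c_void_p * 2),  # stand-in for PyObject_HEAD
                ("ccall_root", CCallRoot)]         # (2) root embedded in the instance

# (3) the type records the offset of the embedded root, as tp_ccall would.
tp_ccall = MyCallable.ccall_root.offset
```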
I think PEP 580 is understandable only for people who tried to implement
method objects. It's a complete rewrite of PyCFunction and method_descriptor.
But extension authors can write extensions without knowing the implementation
of them.
> > It is optimized for implementing custom method object, although
> > almost only Cython want the custom method type.
> For the record, Numba also seems interested in the PEP.
OK, so the Numba developers are interested in:
* Supporting FASTCALL for the Dispatcher type: PEP 576 is simpler
for that, as I described above.
* Direct C function calling (skipping the PyObject calling abstraction).
While it's not part of PEP 580, it's a strong motivation for PEP 580.
I want to see PoC of direct C calling.
And I think PoC can be implemented without waiting PEP 580.
* Cython can have a specialization for CyFunction, like it has for CFunction.
(Note that Cython doesn't utilize LOAD_METHOD / CALL_METHOD for
CFunction either, so lacking support for LOAD_METHOD / CALL_METHOD
is not a big problem for now.)
* Cython can implement its own C signature and embed it in CyFunction.
After that, we (including Numba, Cython, and PyPy developers) can discuss
how a portable C signature can be embedded in PyCCallDef.
> > I'm not sure about adding such a complicated protocol almost only for Cython.
> > If CyFunction can be implemented behind PEP 576, it may be better.
> I recall my post
> explaining the main difference between PEP 576 and PEP 580.
I wrote my mail after reading that post, of course.
But it was unclear without reading the PEP and the implementation carefully.
For example, "hook which part" seemed like meta-discussion to me before
reading your implementation.
I think the only way to understand PEP 580 is to read the implementation
and imagine how Cython and Numba would use it.
> I would like to stress that PEP 580 was designed for maximum
> performance, both today and for future extensions (such as calling with
> native C types).
I don't know what the word *stress* means here. (Sorry, I'm not good
enough at English for such a hard discussion.)
But I want to see a PoC of the real benefit of PEP 580, as I said above.
> > * PEP 576 and 580 are not strictly mutually exclusive; PEP 576 may be
> > accepted in addition to PEP 580
> I don't think that this is a good idea: you will mostly end up with the
> disadvantages of both approaches.
Hm, my point was to provide an easy and simple way to support FASTCALL
in callable objects like functools.partial or functools.lru_cache.
But it should be discussed after PEP 580.
INADA Naoki <songofacandy(a)gmail.com>