Call For Participants For The 2016 Python Language Summit
It's that time once again: time to start planning for the 2016 Python Language Summit! This year the summit will be at the Oregon Convention Center in Portland, Oregon, USA, on May 28th. Sadly, again this year Michael Foord won't be in attendance. Barry Warsaw and I are running the summit for the second time.
The purpose of the event is to disseminate information and spark conversation among Python core developers. It's our once-a-year chance to get together and hash out where we're going and what we're doing, face-to-face.
We're making two minor changes this year. First: we're going to experiment with lightning talks! We may have a bunch at the end, or we may throw some in between longer presentations--not sure yet, we'll see how it goes. In the grand tradition of lightning talks, they'll be scheduled exclusively on the day of the summit. We'll provide a whiteboard or other drawable surface in case you don't show up with slides, and wild gesticulation isn't enough.
Second: we're using a Google Form to collect signups. This one form lets you request an invitation to the summit, and also optionally propose a talk. Please note: filling out the form does not guarantee you an invitation. Space is limited; if you're a core developer, your request for invitation *will* be honored, but we may need to restrict attendance for others. (Sorry!) Barry and I will email the invitations separately.
Signups are open as of now, and will remain open for six weeks, closing April 12th. But it'll only take you a minute to fill out the form, so you might as well do it right now! Signing up sooner will make our lives easier, too.
You'll find a link to the signup form on the summit's official web page, here:
https://us.pycon.org/2016/events/langsummit/
One final note. Again this year we're inviting Jake Edge from Linux Weekly News to attend the summit and provide press coverage. In case you missed it, Jake did a phenomenal job of covering last year's summit, giving the reader a very thorough overview of what happened.
https://lwn.net/Articles/639773/
Some attendees were worried last year about sharing private or proprietary information in front of a reporter. Jake, Barry, and I want to assure you that it's just not a problem. Jake's not there to embarrass anybody or get anybody in trouble. He said he'd be happy to work with any attendees about any discussions you want considered "off the record".
We hope to see you at the summit!
[BL]arry
I'll be there. Unless it requires us both getting a PyCon talk accepted. In which case I will defer to more wizened people than we.
On Tue, Mar 1, 2016 at 6:01 PM, Larry Hastings <larry@hastings.org> wrote:
It's that time once again: time to start planning for the 2016 Python Language Summit! This year the summit will be at the Oregon Convention Center in Portland, Oregon, USA, on May 28th.
Thanks for chairing this again!
Sadly, again this year Michael Foord won't be in attendance.
:(
Second: we're using a Google Form to collect signups. This one form lets you request an invitation to the summit, and also optionally propose a talk.
In case folks are taking requests, I'd love to hear about:
- status report on core workflow improvements
- how typing has been received and what's next (e.g. more integration into the compiler, multiple dispatch)
- Python in the embedded/-ish space (e.g. MicroPython, BBC MicroBit, Raspberry Pi, Android, iOS, ARM)
- status of alternate implementations and tool chains:
- pyjion
- pyston
- pypy
- jython
- ironpython
- cython
- numba
- others?
FWIW, I've offered to present the following (as a last resort <wink>):
- about C OrderedDict
- about my Multi-core Python project
- the successor to PEP 406 ("Improved Encapsulation of Import State")
-eric
I've put in for a 20 minute slot request for Pyjion to give the team a mini version of our actual Python talk to find out if we even have a chance of getting the C API changes we want merged in. I have no issue talking about the GitHub transition or about my push for the CoC applying everywhere, but I'm not sure if (B|L)arry want to give me so many talk slots (I suspect the CoC bit can be a lightning talk).
-Brett
On 2 March 2016 at 11:01, Larry Hastings <larry@hastings.org> wrote:
It's that time once again: time to start planning for the 2016 Python Language Summit!
Huzzah, thanks for organising this again!
I've forwarded the email to a few folks to suggest they submit presentation proposals, but I also have a question for everyone else: would folks be interested in a summary of the SSL/TLS handling developments over the past couple of years and open issues (aka "things that are still hard that we would prefer were simpler") we could potentially help with in core dev?
I don't think there's anything in particular pending that can't be handled just via the mailing lists and issue tracker, so this would be more a question of whether or not folks that haven't been following it closely would like to learn more about the *why* of it all. (Or, equivalently, "What do we know about network security management now that we didn't know back when the ssl module was added to Python 2.6?")
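One concrete example of the kind of development in question: since Python 3.4, ssl.create_default_context() verifies certificates and hostnames and picks sane protocol and cipher defaults, where older code typically called ssl.wrap_socket() and got no verification unless it explicitly asked for it. A minimal sketch (it assumes network access; the host name is just an example):

    import socket
    import ssl

    hostname = "www.python.org"                # any public HTTPS host will do
    context = ssl.create_default_context()     # cert + hostname verification on by default

    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            # At this point we have an authenticated, encrypted channel.
            print(tls.version())
            print(tls.getpeercert()["subject"])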
Cheers, Nick.
-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On 2016-03-03 08:45, Nick Coghlan wrote:
I've forwarded the email to a few folks to suggest they submit presentation proposals, but I also have a question for everyone else: would folks be interested in a summary of the SSL/TLS handling developments over the past couple of years and open issues (aka "things that are still hard that we would prefer were simpler") we could potentially help with in core dev?
Thanks! TLS/SSL is already covered. :) I have invited Cory Benfield (python-requests, urllib3, hyper). Cory and I are co-presenting a talk about the future of TLS/SSL in Python core and the Python ecosystem. Let's hope 20 minutes is enough.
I have also proposed a short recap of the past year's Python security, PSRT, and Coverity Scan activity. I'd also like to address how security fixes are communicated. From the bug tracker it is not immediately visible which Python releases contain a fix, and the changelog doesn't highlight security fixes either. This allowed one nasty bug to fly under the radar and caused a downstream $VENDOR not to backport a fix. I'd like to have security issues marked in the changelog, e.g. with an "[S]" or "[SECURITY]" prefix/suffix.
Christian
2016-03-02 2:01 GMT+01:00 Larry Hastings <larry@hastings.org>:
The purpose of the event is to disseminate information and spark conversation among Python core developers. It's our once-a-year chance to get together and hash out where we're going and what we're doing, face-to-face.
Sadly, I don't plan to attend Pycon US 2016 (one of the reasons is that my talk on FAT Python was not accepted, but it's not the only reason).
But it would be nice if we could open a discussion on the Python C API. I understand that it's a major blocker for PyPy. It's probably also a major issue for IronPython and Jython. Since Pyston & Pyjion are based on CPython, it may impact them less, but it's probably still an obstacle to reaching the *best* performance.
It would be nice to discuss how to move to a new C API which doesn't expose implementation details, and whether libraries would move to it or not. Implementation "details": the GIL, reference counting, C structures like PyObject, etc.
Sorry, I have no idea how to "abstract" Python objects in such C API. I have no idea how to design such API.
But I know well that the current C API is too wide; it's almost a dumping ground where we put everything. There is no clear separation between functions strictly written to be used only internally, and functions which are OK to use outside the CPython code base.
I know that we have a "_Py" prefix for some symbols, but it's more the exception than the rule. We inherited a giant C API from the old days of CPython.
The latest major enhancement was the addition of a stable ABI, but the usage of this API is unclear to me. I'm quite sure that we don't build the stdlib's own C extensions using the stable ABI. I also heard that the stable ABI was broken more than once... So I don't think we can say that it's a success...
Sadly, we may conclude that it's not possible to replace the C API, or that yet another API will not be widely used. Especially if the new API is only supported by the latest version of Python! A requirement is probably to be compatible with Python 2.7 and 3.4.
If someone has a more concrete vision of what should be done for the "Python C API", please organize a session at the Language Summit and/or open a thread on python-ideas or python-dev.
Victor
On 3 March 2016 at 21:26, Victor Stinner <victor.stinner@gmail.com> wrote:
It would be nice to discuss how to move to a new C API which doesn't expose implementation details, and whether libraries would move to it or not. Implementation "details": the GIL, reference counting, C structures like PyObject, etc.
Adding cffi (including its dependencies) to the standard library was approved-in-principle a couple of years ago, and I believe the one technical issue with a lack of support for ahead-of-time compilation of the extension module has since been addressed, so as far as I know that just needs a champion to actually work through the details of getting it added via the PEP process.
I'm also not aware of any explicit documentation of the underlying FFI from a C API/ABI perspective, which is what would be needed for tools like SWIG and Cython to support it as an alternative to the full CPython API.
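The ahead-of-time support referred to above is cffi's out-of-line "API mode": a small build script declares the C interface, hands cffi real C source, and cffi runs the C compiler once, ahead of time, producing an ordinary extension module. A minimal sketch (the module name "_example" and the function pi_approx are made up for the example; it assumes cffi >= 1.0 and a working C compiler):

    # build_example.py
    from cffi import FFI

    ffibuilder = FFI()

    # Declarations that will be visible from Python:
    ffibuilder.cdef("double pi_approx(int n);")

    # Real C source, compiled ahead of time into the "_example" module:
    ffibuilder.set_source("_example", r"""
        double pi_approx(int n)
        {
            double pi = 0.0;
            int i;
            for (i = 0; i < n; i++)
                pi += (i % 2 ? -4.0 : 4.0) / (2 * i + 1);
            return pi;
        }
    """)

    if __name__ == "__main__":
        ffibuilder.compile(verbose=True)   # invokes the C compiler at build time

After running the build script once, user code simply does "from _example import ffi, lib" and calls lib.pi_approx(1000000) like any other module.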
Cheers, Nick.
-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
2016-03-03 13:40 GMT+01:00 Nick Coghlan <ncoghlan@gmail.com>:
Adding cffi (including its dependencies) to the standard library was approved-in-principle a couple of years ago, and I believe the one technical issue with a lack of support for ahead-of-time compilation of the extension module has since been addressed, (...)
Hmm, I also vaguely recall a discussion about cffi's pycparser dependency.
Victor
On Thu, 3 Mar 2016 at 06:26 Victor Stinner <victor.stinner@gmail.com> wrote:
Hmm, I also vaguely recall a discussion about cffi's pycparser dependency.
Yep, the issue was pycparser which depended on PLY. At the time Alex Gaynor said he and David Beazley were planning to fix a bunch of things in PLY and that it would be best to hold off until PLY was cleaned up enough to go into the stdlib. That obviously didn't happen. :)
On 03/03/2016 13:40, Nick Coghlan wrote:
It would be nice to discuss how to move to a new C API which doesn't expose implementation details, and whether libraries would move to it or not. Implementation "details": the GIL, reference counting, C structures like PyObject, etc.
Adding cffi (including its dependencies) to the standard library was approved-in-principle a couple of years ago, and I believe the one technical issue with a lack of support for ahead-of-time compilation of the extension module has since been addressed, so as far as I know that just needs a champion to actually work through the details of getting it added via the PEP process.
I'm also not aware of any explicit documentation of the underlying FFI from a C API/ABI perspective, which is what would be needed for tools like SWIG and Cython to support it as an alternative to the full CPython API.
I don't understand what cffi has to do with the CPython API. You use cffi for binding with third-party libraries. C code wanting to interface with CPython will continue to have to use the CPython API.
As for integrating cffi into the stdlib, it seems to be doing fine as a third-party library.
Regards
Antoine.
On Thu, 3 Mar 2016 at 06:31 Antoine Pitrou <antoine@python.org> wrote:
I don't understand what cffi has to do with the CPython API. You use cffi for binding with third-party libraries. C code wanting to interface with CPython will continue to have to use the CPython API.
You're right, it doesn't directly address the needs of integrators like Blender who embed CPython as a scripting language. And this doesn't directly address the needs of Numba where you want low-level access to (I assume) the code object.
But I do think the spirit of Victor's idea is worth considering. Our C API does expose *a lot* of low-level detail that isn't necessarily needed and can hamper our ability to tweak how Python itself operates. We also have not been good about deprecating parts of the C API and thus getting rid of old ways of doing things that no longer map well on to how Python operates now (the import APIs are an example of this; I mean how many ways do we really need to expose for importing Python code?).
So it might behoove us to stop and look at what exactly is needed by embedders like Blender, library wrappers like NumPy or the various DB drivers, and accelerators like Numba. That way we can maybe start discussing things like all external code must go through a function/macro, PyObject is to be considered opaque to third-parties, and all public APIs have an error return value. And we discuss how to handle deprecations. And we see if there's a way to do memory management where INCREF/DECREF is no longer the direct responsibility of C API users. I mean if you want a motivation/lens to look at this through, then what would we need to do to our C API to make it so that anyone following a new API wouldn't be broken if we dropped the GIL?
As for integrating cffi into the stdlib, it seems to be doing fine as a third-party library.
It is doing fine as an external library, but if something as radical as heavily trimming back and/or rewriting the C API occurs then having CFFI in the stdlib to evolve with the internal C changes makes sense. I think that's where the thought of pulling CFFI in the stdlib stems from.
On Thu, Mar 3, 2016 at 10:39 AM, Brett Cannon <brett@python.org> wrote:
But I do think the spirit of Victor's idea is worth considering.
+1
...what would we need to do to our C API to make it so that anyone following a new API wouldn't be broken if we dropped the GIL?
If I recall correctly, this was one key topic that Larry discussed at the language summit last year.
As for integrating cffi into the stdlib, it seems to be doing fine as a third-party library.
It is doing fine as an external library, but if something as radical as heavily trimming back and/or rewriting the C API occurs then having CFFI in the stdlib to evolve with the internal C changes makes sense. I think that's where the thought of pulling CFFI in the stdlib stems from.
At least part of the motivation was to deprecate/remove ctypes and replace it in the stdlib with CFFI.
-eric
On 03/03/2016 18:58, Eric Snow wrote:
It is doing fine as an external library, but if something as radical as heavily trimming back and/or rewriting the C API occurs then having CFFI in the stdlib to evolve with the internal C changes makes sense. I think that's where the thought of pulling CFFI in the stdlib stems from.
At least part of the motivation was to deprecate/remove ctypes and replace it in the stdlib with CFFI.
Why would that be desirable again? ctypes works perfectly fine and cffi isn't better for its core use case (runtime binding of C libraries).
Regards
Antoine.
On Thu, 3 Mar 2016 at 10:04 Antoine Pitrou <antoine@python.org> wrote:
On 03/03/2016 18:58, Eric Snow wrote:
It is doing fine as an external library, but if something as radical as heavily trimming back and/or rewriting the C API occurs then having CFFI in the stdlib to evolve with the internal C changes makes sense. I think that's where the thought of pulling CFFI in the stdlib stems from.
At least part of the motivation was to deprecate/remove ctypes and replace it in the stdlib with CFFI.
Why would that be desirable again? ctypes works perfectly fine and cffi isn't better for its core use case (runtime binding of C libraries).
Ignoring the potential to crash the interpreter (I personally don't care but some do), my concern is the maintenance issue we have with ctypes (or at least that's my hang-up with it). I think we still have not figured out what code we have patched, and so no one has been willing to update to a newer version of libffi. I think Zachary looked into it and got some distance but never far enough to feel comfortable with trying to update things.
But I thought CFFI's ABI in-line solution matched what ctypes did?
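For comparison, here is the "same thing twice": binding one libm function first with ctypes and then with cffi's in-line ABI mode. A rough sketch, assuming a Unix-like system where find_library() can locate libm and the cffi package is installed:

    import ctypes
    import ctypes.util

    libm_path = ctypes.util.find_library("m")   # e.g. "libm.so.6" on Linux

    # ctypes: load the library at runtime, describe the signature by hand.
    libm_ct = ctypes.CDLL(libm_path)
    libm_ct.sqrt.restype = ctypes.c_double
    libm_ct.sqrt.argtypes = [ctypes.c_double]
    print(libm_ct.sqrt(2.0))

    # cffi, in-line ABI mode: load at runtime, paste the C declaration as text.
    import cffi
    ffi = cffi.FFI()
    ffi.cdef("double sqrt(double x);")
    libm_ffi = ffi.dlopen(libm_path)
    print(libm_ffi.sqrt(2.0))

Neither variant needs a C compiler, and both share the same failure mode: the declarations are trusted rather than checked, so a mismatch crashes instead of raising.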
On 03/03/2016 19:38, Brett Cannon wrote:
Ignoring the potential to crash the interpreter (I personally don't care but some do), my concern is the maintenance issue we have with ctypes (or at least that's my hang-up with it). I think we still have not figured out what code we have patched, and so no one has been willing to update to a newer version of libffi. I think Zachary looked into it and got some distance but never far enough to feel comfortable with trying to update things.
But I thought CFFI's ABI in-line solution matched what ctypes did?
I think it does more or less, which is precisely why I would find it gratuitous to deprecate ctypes.
As for the maintenance problem, ok, but we might end up with the same problems with cffi (both rely on libffi after all).
Regards
Antoine.
On Thu, 3 Mar 2016 at 10:40 Antoine Pitrou <antoine@python.org> wrote:
I think it does more or less, which is precisely why I would find it gratuitous to deprecate ctypes.
As for the maintenance problem, ok, but we might end up with the same problems with cffi (both rely on libffi after all).
Personally, if I got my way we would deprecate ctypes in the stdlib and give it to the community to maintain (but in situations like this I rarely get my way :). We would then keep CFFI externally and just make sure that any new C API we developed makes sense for use by CFFI.
And another idea I had for some new-fangled C API: no macros. That gives us a better ABI and it also makes AST analysis easier with tools like clang-analyzer.
On 03/03/2016 09:58 AM, Eric Snow wrote:
On Thu, Mar 3, 2016 at 10:39 AM, Brett Cannon <brett@python.org> wrote:
...what would we need to do to our C API to make it so that anyone following a new API wouldn't be broken if we dropped the GIL?
If I recall correctly, this was one key topic that Larry discussed at the language summit last year.
Kinda, yeah. Certainly it's a topic I've thought a lot about.
Consider this. With almost no exceptions*, none of the popular new languages have a C API. Instead they'll have a foreign function interface allowing you to call C from inside the language. So you don't write extensions in C and call into the language, you write your extensions natively in the language and call out.
One advantage of this technique is that it allows most implementation details of the language to remain hidden. CPython can't drop reference counting and move solely to tracing garbage collection, because the C API lets external callers deal with Python objects, which means Python's internal object lifetime management approach must be visible, which means it's implicitly part of the API. In short, if we change from reference counting to tracing GC, we break every C extension in existence, kablooey, oblivion.
But! If there were no C extensions--if all Python programs talked to native libraries through FFIs like ctypes and cffi--then this would be a private implementation detail and we could iterate however we liked on object lifetime management.
I've asked Armin Rigo about PyPy here. Pardon me if my memory is faulty, but what I think he said was this: they started with GC, then went to generational GC, then went to incremental generational GC. If they'd had a C API, going to generational probably wouldn't have broken all their extensions, but going to incremental absolutely would have. Since PyPy doesn't have a C API, naturally they can change it all they like.
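How much object lifetime management leaks into observable behaviour shows up even in pure Python, without touching the C API; a tiny illustration (the class here is just an example):

    class Resource:
        def __init__(self, name):
            self.name = name

        def __del__(self):
            print("releasing", self.name)

    def use():
        r = Resource("handle-1")
        # ... work with r ...
        # CPython: r's refcount drops to zero when use() returns, so
        # "releasing handle-1" prints immediately.
        # PyPy (tracing GC): the finalizer runs whenever the collector
        # next notices the garbage, possibly much later.

    use()
    print("after use()")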
If we could wave a magic wand and get all extension authors to switch to writing their extensions in Python and using cffi, we should absolutely do it. That'd be great for cross-implementation compatibility; your extension would (hopefully) run unchanged in CPython and PyPy today, and I heard a rumor that Jython and IronPython want to support cffi too, so hey! someday it might run unchanged in those too. This would also make it possible for CPython to declare that the C API was dead and free us up to make some radical but welcome changes to CPython's innards. Unfortunately, we don't have such a magic wand, and I don't think there's any workable path to convince extension authors to switch en masse. And if we're stuck with the C API, we're stuck with a lot of the implementation details that are baked into it.
I'm hoping to present on this subject at this year's summit. I hope all the interested core devs can make it!
/arry
* The only exception I know of is Lua--are there more?
Larry Hastings <larry <at> hastings.org> writes:
If we could wave a magic wand and get all extension authors to switch to writing their extensions in Python and using cffi, we should absolutely do it.
CFFI is slow. This would effectively kill one of the strongholds of CPython. IMO CPython's C-API is the best out there and a large part of Python's success.
We're talking about a slowdown of at least an order of magnitude here:
https://mail.python.org/pipermail/python-dev/2013-December/130772.html
I think people who don't need the C-API can use PyPy.
Stefan Krah
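The slowdown being cited is per-call overhead in the binding layer. A rough, machine-dependent way to see it side by side (only a sketch; it assumes a Unix-like system with the cffi package installed and find_library() able to locate libm):

    import textwrap
    import timeit

    setup = textwrap.dedent("""
        import math, ctypes, ctypes.util, cffi
        libm_path = ctypes.util.find_library("m")
        libm_ct = ctypes.CDLL(libm_path)
        libm_ct.sqrt.restype = ctypes.c_double
        libm_ct.sqrt.argtypes = [ctypes.c_double]
        ffi = cffi.FFI()
        ffi.cdef("double sqrt(double x);")
        libm_ffi = ffi.dlopen(libm_path)
    """)

    for stmt in ("math.sqrt(2.0)",        # extension module built on the C API
                 "libm_ct.sqrt(2.0)",     # ctypes
                 "libm_ffi.sqrt(2.0)"):   # cffi, in-line ABI mode
        print(stmt, timeit.timeit(stmt, setup=setup, number=1000000))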
Stefan Krah <stefan <at> bytereef.org> writes:
We're talking about a slowdown of at least an order of magnitude here:
https://mail.python.org/pipermail/python-dev/2013-December/130772.html
I think people who don't need the C-API can use PyPy.
Or, of course, use CPython with Numba, which handles an ever-increasing number of traditional bottlenecks. For example, it is possible to write a "native speed" transpose function for NumPy arrays in plain Python.
Stefan Krah
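A transpose kernel of the kind described is only a few lines with Numba: the @njit decorator compiles the plain Python loops to machine code on first call. A minimal sketch (the function name is made up; it assumes numpy and numba are installed and hard-codes float64 for simplicity):

    import numpy as np
    from numba import njit

    @njit
    def transpose_copy(a):
        # Plain Python loops, compiled by Numba to run at roughly C speed.
        n, m = a.shape
        out = np.empty((m, n), np.float64)
        for i in range(n):
            for j in range(m):
                out[j, i] = a[i, j]
        return out

    a = np.arange(12.0).reshape(3, 4)
    print(transpose_copy(a))        # the 4x3 transposed copy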
I guess I have two responses to that.
1. I don't know what it is about cffi that makes it slow. Perhaps it could be improved? If it got a lot of traction, maybe it'd get more eyes looking at it?
2. How important is this speed difference? I suppose the answer, as always, is "it depends". It depends on how often you call the C library, and how long you spend in the routine when you get there. Certainly a benchmark for library X is a worst-case scenario; in real-world code, for most libraries, perhaps the performance of the glue code isn't crucial.
I always feel a little funny when people talk about performance in Python. Not that I believe performant Python isn't possible or desirable--just that, if you're writing your code in Python, you've already implicitly conceded that performance is not your top priority. The design of Python the language, and of CPython the interpreter, is necessarily a series of tradeoffs, and it's not like those tradeoffs are always made in favor of performance.
Plus, this change itself would be such a tradeoff. We'd (likely) be giving up performance of glue code for C libraries, and in return for this we could finally perform the brain surgery on CPython that we're all dying to do. It's reasonable to suggest that such radical changes to CPython might "pay" for the loss of performance in the glue code.
Of course this is all academic. I absolutely don't think we can get rid of the C API, or even modify it in any meaningful way that would let us abstract away implementation details like reference counting. As I said in my original email, this magic wand simply doesn't exist.
/arry
Larry Hastings <larry <at> hastings.org> writes:
2. How important is this speed difference? I suppose the answer, as always, is "it depends". It depends on how often you call the C library, and how long you spend in the routine when you get there. Certainly a benchmark for library X is a worst-case scenario; in real-world code, for most libraries, perhaps the performance of the glue code isn't crucial.
Several of the early adopters of library X were the sqlalchemy people, who absolutely had real-world issues.
Of course this is all academic. I absolutely don't think we can get rid of the C API, or even modify it in any meaningful way that would let us abstract away implementation details like reference counting. As I said in my original email, this magic wand simply doesn't exist.
Sorry for misquoting, indeed you said that. I was a little concerned that CFFI was mentioned by several people as a solution and wanted to highlight the drawbacks.
Stefan Krah
On 3/5/2016 4:31 AM, Larry Hastings wrote:
- How important is this speed difference?
I believe Pygame originally used SWIG or something similar to wrap the underlying C SDL library. When a ctypes version was tried, it was much slower, so slow that they stayed with the original wrapping. I don't know what they are doing now.
I suppose the answer, as always, is "it depends". It depends on how often you call the C library, and how long you spend in the routine when you get there. Certainly a benchmark for library X is a worst-case scenario; in real-world code, for most libraries, perhaps the performance of the glue code isn't crucial.
I always feel a little funny when people talk about performance in Python. Not that I believe performant Python isn't possible or desirable--just that, if you're writing your code in Python, you've already implicitly conceded that performance is not your top priority.
One reason for wrapping 3rd-party C code is that reasonable performance *is* a priority in some situations.
The design of Python the language, and of CPython the interpreter, is necessarily a series of tradeoffs, and it's not like those tradeoffs are always made in favor of performance.
Part of the design of CPython, and what makes its relative slowness more tolerable, is the possibility of converting bottleneck Python code to C. We even do that within the stdlib. Currently, those conversions are sufficient for me, and I have no need to do any conversions myself. But I like knowing that I have the option to trade personal effort for better performance should I need it. If I had to go that route, I would first try Cython. So I would not much care what happened to the C API as long as Cython was not (permanently) disabled.
tjr
On 05/03/2016 10:31, Larry Hastings wrote:
I always feel a little funny when people talk about performance in Python. Not that I believe performant Python isn't possible or desirable--just that, if you're writing your code in Python, you've already implicitly conceded that performance is not your top priority. The design of Python the language, and of CPython the interpreter, is necessarily a series of tradeoffs, and it's not like those tradeoffs are always made in favor of performance.
Agreed. However, if the kind of performance problem you have is the kind where you have a couple well-known critical paths, it is possible to speed that up significantly using either raw C, or Cython, or even Numba in some cases as Stefan mentions. Not to mention of course any third-party library that might already have done the work for you (in the field of scientific computing, there are many of them).
For the other kind of Python performance problem, where the slowness comes from a long convoluted critical path crossing a lot of high-level interfaces, PyPy is currently king and I expect it to remain so for a long time.
Plus, this change itself would be such a tradeoff. We'd (likely) be giving up performance of glue code for C libraries, and in return for this we could finally perform the brain surgery on CPython that we're all dying to do. It's reasonable to suggest that such radical changes to CPython might "pay" for the loss of performance in the glue code.
This is all overlooking the fact that the C API isn't merely used for low-level binding to third-party C libraries (something which Cython allows you to do without writing C code, btw, and AFAIU Cython kind of has a PyPy backend?). The C API allows you to write extension types, or access interpreter structures for other purposes. Those uses aren't catered for by cffi, by design.
Regards
Antoine.
On 5 March 2016 at 04:25, Larry Hastings <larry@hastings.org> wrote:
* The only exception I know of is Lua--are there more?
TCL and Racket (was mzscheme). I think the key thing is that languages designed for embedding provide a C API. Python supports embedding, so if we did move away from the C API, we'd need to be very careful not to make it harder to embed Python (or deprecate embedding altogether, but I don't think that's realistic - quite a few projects embed Python).
Paul
On 5 March 2016 at 19:49, Paul Moore <p.f.moore@gmail.com> wrote:
TCL and Racket (was mzscheme). I think the key thing is that languages designed for embedding provide a C API. Python supports embedding, so if we did move away from the C API, we'd need to be very careful not to make it harder to embed Python (or deprecate embedding altogether, but I don't think that's realistic - quite a few projects embed Python).
I actually want to make embedding CPython even easier (ideally ending up in a situation where we can offer a shared embedding API with MicroPython), but that's a time-consuming task that requires a pretty deep knowledge of CPython's startup and shutdown sequences.
I'm pretty happy with the general design and proposed implementation strategy in PEP 432 now, but it's sufficiently far removed from anything I'm doing for work that it isn't a project I can pick up and work on for a few hours here and there. (Which also creates problems for coaching anyone else in tackling it - this project touches parts of the interpreter that *I* have to relearn in order to work on it effectively, and it's mainly the relearning that's the time-consuming part rather than the actual work.)
That said, there's already some pretty interesting work in embedding based cross-runtime bridges (metabiosis for CPython-in-PyPy, jitpy for PyPy-in-CPython, Lunatic Python for both Lua-in-CPython and CPython-in-Lua, Julia's native bidirectional Python bridge), so I suspect development energy around "Python without the CPython C API" could be directed towards some of those combinations, rather than trying to significantly refactor CPython itself.
Regards, Nick.
-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
2016-03-03 18:39 GMT+01:00 Brett Cannon <brett@python.org>:
But I do think the spirit of Victor's idea is worth considering.
Oh, another note about such a theoretical API. For CPython, it would be nice to experiment with implementing such a new API *on top* of the existing Python C API. And it should be a third-party project, to get more contributors, faster feedback, etc. As I wrote, supporting Python 2.7 and 3.4 with the same API is important to ensure that the API is widely used.
Maybe I should start to collect ideas somewhere...
Victor
As I mentioned in the original announcement, we'll close Language Summit invite signups next Tuesday, April 12th. Signing up in advance makes planning much easier on us Language Summit organizers, and signing up through our web form makes it easiest on everybody. If you plan to attend, please fill out the form in the next few days! I promise it's short, clear, and easy. It'll take you only a few minutes, honest.
And remember: if you're a core developer, your invitation is guaranteed!
[BL]arry
On 03/01/2016 05:01 PM, Larry Hastings wrote:
Signups are open as of now, and will remain open for six weeks, closing April 12th. But it'll only take you a minute to fill out the form, so you might as well do it right now! Signing up sooner will make our lives easier, too.
You'll find a link to the signup form on the summit's official web page, here:
https://us.pycon.org/2016/events/langsummit/
Please sign up today if you want to attend. We want to start planning the schedule and other details and it really helps to know who's coming and what they want to talk about as early as possible.
https://us.pycon.org/2016/events/langsummit/
Thanks,
[BL]arry
participants (11)
- Antoine Pitrou
- Brett Cannon
- Christian Heimes
- Eric Snow
- Jack Diederich
- Larry Hastings
- Nick Coghlan
- Paul Moore
- Stefan Krah
- Terry Reedy
- Victor Stinner