[Numpy-discussion] big-bangs versus incremental improvements (was: Re: SciPy 2014 BoF NumPy Participation)

Nathaniel Smith njs at pobox.com
Thu Jun 5 18:48:07 EDT 2014


On Thu, Jun 5, 2014 at 3:24 PM, David Cournapeau <cournape at gmail.com> wrote:
> On Thu, Jun 5, 2014 at 2:51 PM, Charles R Harris <charlesr.harris at gmail.com>
> wrote:
>> On Thu, Jun 5, 2014 at 6:40 AM, David Cournapeau <cournape at gmail.com>
>> wrote:
>>> IMO, what is needed the most is refactoring the internals to extract the
>>> Python C API low level from the rest of the code, as I think that's the main
>>> bottleneck to getting more contributors (or getting new core features more quickly).
>>>
>>
>> What do you mean by "extract the Python C API"?
>
> Poor choice of words: I meant extracting the lower-level part of
> array/ufunc/etc... from its wrapping in the Python C API (with the idea
> that the latter could be done in Cython, modulo improvements in Cython to
> manage the binary/code size explosion).
>
> IOW, split numpy into core and core-py (I think dynd benefits a lot from
> that, on top of its feature set).
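
For concreteness, here's a minimal sketch of the kind of core/core-py
split being described (hypothetical file and function names, not the
actual NumPy internals): all Python C API usage lives in one thin
wrapper file, and the machinery underneath compiles without Python.h
at all.

/* ---- core.c: pure C, no Python.h anywhere ---- */
#include <stddef.h>

/* Sum a contiguous double buffer; knows nothing about PyObject. */
double core_sum_double(const double *data, size_t n)
{
    double acc = 0.0;
    for (size_t i = 0; i < n; i++)
        acc += data[i];
    return acc;
}

/* ---- core_py.c: the only file that includes Python.h ---- */
#include <Python.h>

double core_sum_double(const double *data, size_t n);  /* from core.h */

/* Unpack Python-level arguments, call into the core, wrap the result. */
static PyObject *py_sum_double(PyObject *self, PyObject *args)
{
    Py_buffer view;
    if (!PyArg_ParseTuple(args, "y*", &view))
        return NULL;
    double result = core_sum_double((const double *)view.buf,
                                    (size_t)view.len / sizeof(double));
    PyBuffer_Release(&view);
    return PyFloat_FromDouble(result);
}

The point of the split is that the second file is the only place where
argument parsing, refcounting, and error handling touch Python, and
that wrapper layer is the part that could plausibly be written in
Cython instead of hand-rolled C.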

Can you give some examples of these benefits? I'm kinda wary of
refactoring-for-the-sake-of-it -- IME it's usually easier, more
valuable, and more fun to refactor in the process of making some
concrete improvement.

Also, it's very much pie-in-the-sky at the moment, but if the PyPy or
Numba or Pyston compilers gained the ability to grok Cython code
directly, then having everything in Cython instead of C could
potentially allow a single numpy code base to be shared between
CPython and jitted Python, with the former working as it does now and
the latter doing JIT loop fusion etc.
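
To make "loop fusion" concrete: today an expression like d = (a + b) * c
runs one full pass over memory per operation, materializing a temporary
array in between, whereas a JIT that sees the whole expression can emit a
single fused pass. Roughly (illustrative C, not anything that exists in
numpy today):

#include <stddef.h>

/* Unfused: one loop per ufunc call, plus a temporary buffer. */
void eval_unfused(const double *a, const double *b, const double *c,
                  double *tmp, double *d, size_t n)
{
    for (size_t i = 0; i < n; i++)
        tmp[i] = a[i] + b[i];           /* first pass: a + b */
    for (size_t i = 0; i < n; i++)
        d[i] = tmp[i] * c[i];           /* second pass: tmp * c */
}

/* Fused: what a JIT could generate for the same expression --
 * one pass, no temporary, better cache behaviour. */
void eval_fused(const double *a, const double *b, const double *c,
                double *d, size_t n)
{
    for (size_t i = 0; i < n; i++)
        d[i] = (a[i] + b[i]) * c[i];
}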

-n

-- 
Nathaniel J. Smith
Postdoctoral researcher - Informatics - University of Edinburgh
http://vorpus.org


