[Numpy-discussion] Custom dtypes without C -- or, a standard ndarray-like type

Travis Oliphant travis at continuum.io
Tue Sep 23 09:34:28 EDT 2014


On Sun, Sep 21, 2014 at 6:50 PM, Stephan Hoyer <shoyer at gmail.com> wrote:

> pandas has some hacks to support custom types of data that numpy
> can't handle well enough, or at all. Examples include datetime and
> Categorical [1], and others like GeoArray [2] that haven't made it into
> pandas yet.
>
> Most of these look like numpy arrays but with custom dtypes and
> type-specific methods/properties. But clearly nobody is particularly
> excited about writing the C necessary to implement custom dtypes [3].
> Nor do we need the ndarray ABI.
>
> In many cases, writing C may not actually even be necessary for
> performance reasons, e.g., categorical can be fast enough just by wrapping
> an integer ndarray for the internal storage and using vectorized
> operations. And even if it is necessary, I think we'd all rather write
> Cython than C.
>
> It's great for pandas to write its own ndarray-like wrappers (*not*
> subclasses) that work with pandas, but it's a shame that there isn't a
> standard interface like the ndarray to make these arrays usable for the
> rest of the scientific Python ecosystem. For example, pandas has loads of
> fixes for np.datetime64, but nobody seems to be up for porting them to
> numpy (I doubt it would be easy).
>
> I know these sort of concerns are not new, but I wish I had a sense of
> what the solution looks like. Is anyone actively working on these issues?
> Does the fix belong in numpy, pandas, blaze or a new project? I'd love to
> get a sense of where things stand and how I could help -- without writing
> any C :).
>
>
Hey Stephan,

There are no easy answers to your questions.  The reason is that
NumPy's dtype system is not extensible enough, with its fixed set of
"builtin" data-types and its bolted-on "user-defined" data-types.  The
implementation was adapted from the *descriptor* notion that was in
Numeric (written almost 20 years ago).  While a significant improvement
over Numeric, the dtype system in NumPy still has several limitations:

    1) It was not designed so that new fundamental data-types could be
added without breaking the ABI (most of the ABI breakage between 1.3
and 1.7 due to the addition of np.datetime64 has been pushed into a
small corner, but it is still there).

    2) The user-defined data-type system that does exist is not well
tested and likely incomplete: it was the best I could come up with at
the time NumPy first came out, with a bit of input from people like
Fernando Perez and Francesc Alted.

    3) It is far easier than in Numeric to add new data-types (that was
a big part of the effort of NumPy), but it is still not as easy as one
would like: fundamental data-types require recompiling NumPy, and
"user-defined" data-types require writing C code.

I believe this system has served us well, but it needs to be replaced
eventually.  I think it can be replaced fairly seamlessly in a largely
backward compatible way (though requiring re-compilation of dependencies).
   Fixing the dtype system is a fundamental effort behind several projects
we are working on at Continuum:  datashape, dynd, and numba.    These
projects are addressing fundamental limitations in a way that can lead to a
significantly improved framework for scientific and tabular computing in
Python.

In the meantime, NumPy can continue to improve in small and orthogonal
ways (like the new __numpy_ufunc__ mechanism, which allows ufuncs to
work more seamlessly with different kinds of array-like objects).  This
kind of effort, together with the improved buffer protocol in Python,
means that multiple array-like objects can co-exist and use each
other's data.  Right now, I think that is the best way to address the
data-type limitations of NumPy.
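
To make this concrete, here is a minimal sketch of the kind of
cooperation the hook enables, written against the __numpy_ufunc__
signature proposed at the time (so it only runs on NumPy builds that
implement the hook).  UnitArray is a hypothetical wrapper invented for
the example, not an existing class:

    import numpy as np

    class UnitArray(object):
        """Hypothetical array-like wrapper: an ndarray plus a unit label."""
        def __init__(self, data, unit):
            self.data = np.asarray(data)
            self.unit = unit

        def __numpy_ufunc__(self, ufunc, method, i, inputs, **kwargs):
            # `i` is the index of self within `inputs` (unused here).
            # Unwrap any UnitArray inputs, apply the ufunc to the raw
            # ndarrays, and re-wrap the result with this array's unit.
            arrays = [x.data if isinstance(x, UnitArray) else x
                      for x in inputs]
            result = getattr(ufunc, method)(*arrays, **kwargs)
            return UnitArray(result, self.unit)

    a = UnitArray([1.0, 2.0, 3.0], "m")
    b = np.add(a, a)  # dispatches to a.__numpy_ufunc__; b is a UnitArray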

Another small project is possible today: one could use Numba or Cython
to generate user-defined data-types for existing NumPy.  That would be
an interesting project and would certainly help map out the limitations
of the user-defined data-type framework without making people write C
code.  You could use a meta-class and some code-generation techniques
so that, by defining a particular class, you end up with a user-defined
data-type for NumPy; a rough sketch of the idea follows.
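
As a toy illustration of the class-specification side only: the names
below are invented for the example, and a plain structured dtype stands
in for the real registration step, which would need generated Cython or
C calling NumPy's PyArray_RegisterDataType.

    import numpy as np

    class DTypeSpec(type):
        # Hypothetical metaclass: collect a field specification from the
        # class body.  A real implementation would hand the spec to a
        # Cython/Numba code generator that builds and registers a true
        # user-defined dtype; here a structured dtype stands in for that.
        def __new__(mcls, name, bases, namespace):
            cls = super().__new__(mcls, name, bases, namespace)
            cls.dtype = np.dtype(namespace.get('fields', []))
            return cls

    class Point(metaclass=DTypeSpec):
        fields = [('x', 'f8'), ('y', 'f8')]

    arr = np.zeros(3, dtype=Point.dtype)  # an array of Point records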

Even while we have been addressing the fundamental limitations of NumPy
with our new tools at Continuum, replacing NumPy is a big undertaking
because of its large user-base.   While I personally think that NumPy could
be replaced for new users as early as next year with a combination of dynd
and numba, the large installed base of NumPy means that many people
(including the company I work with, Continuum) will be supporting NumPy
1.X, Pandas, and the rest of the NumPy stack for many years to come.

So, even if you see me working on and advocating new technology, that
should never be construed as ignoring or abandoning the current technology
base.   I remain deeply interested in the success of the scientific
computing community --- even though I am not currently contributing a lot
of code directly myself.    As dynd and numba mature, I think it will be
clear to more people how to proceed.

For example, just recently the thought emerged that, because dynd
addresses some of the major needs that Pandas has, it may be possible
very soon for dynd to replace NumPy as the foundational container for
Pandas data-frames.  Because Pandas's use of the NumPy API is limited,
this is an easier undertaking than having dynd replace NumPy itself.
And given that the new data-types of dynd (missing data, categorical
types, variable-length strings, etc.) are some of the key areas that
Pandas has work-arounds for, it may be a straightforward project.

For those not aware: dynd is Cython code that wraps the C++ library
libdynd.  Currently libdynd is not complete, so working on dynd may
require some improvements / fixes to libdynd.  However, the dynd layer
should be accessible to many people, and the libdynd layer is fairly
straightforward C++.  I strongly believe that the combination of
libdynd and dynd is a much easier foundation to work on and maintain
than the NumPy code base.  I say this after having personally spent
over a decade on the Numeric code base and then the NumPy code base.
The NumPy "C" code base has been improved since I left it by the
excellent work of several patient developers --- but it is not easy to
transmit the knowledge needed to understand the code base well enough
to maintain it without creating backward-compatibility issues.

So, while I continue to support the NumPy code base and its extensions
(personally, through NumFOCUS, and through Continuum) and believe it
will be relevant for many years, I also believe the future lies in
renewing the NumPy code base with a combination of dynd and numba, with
more emphasis on high-level APIs like pandas and blaze.  The good news
is that this means: 1) a lot more code in Python or Cython, and 2)
compatibility with the PyPy world, as part of a long-term effort to
heal the rift that exists between scientific use of Python and "web"
use of Python.

In the end, all of this is good news for Python and scientific
computing.  More and better tools will continue to be written, with
better interop between them.  There are many places to jump in and
help: dynd, libdynd, datashape, blaze, numba, scipy, scikits, numpy,
pandas, or even a new project of your own that enhances some aspect of
any of these, or that does something like use Cython or Numba to create
NumPy user-defined data-types from a Python class specification.

I agree it can be hard to know where things will eventually end up, and
therefore where to spend your effort.  All I can tell you is what I
have decided and where I am pushing and promoting.

Best,

-Travis