I propose to accept NEP-18, "A dispatch mechanism for NumPy's high level
array functions". Since the last round of discussion, we added a new section
on "Callable objects generated at runtime" clarifying that handling such
objects is out of scope for the initial proposal in the NEP.
If there are no substantive objections within 7 days from this email, then
the NEP will be accepted; see NEP 0 for more details.
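For readers who have not followed the discussion, here is a minimal sketch of the kind of override the NEP's protocol enables (the class name and behaviour are illustrative, not taken from the NEP; this runs as-is in NumPy versions where the protocol is enabled by default):

```python
import numpy as np

class DiagonalArray:
    """Illustrative duck array: an n-by-n matrix with `value` on the
    diagonal, overriding NumPy's high-level functions via
    __array_function__."""

    def __init__(self, n, value):
        self._n = n
        self._value = value

    def __array_function__(self, func, types, args, kwargs):
        # Dispatch np.sum to a cheap closed-form answer; defer
        # everything else back to NumPy.
        if func is np.sum:
            return self._n * self._value
        return NotImplemented

d = DiagonalArray(5, 2.0)
print(np.sum(d))  # → 10.0, via DiagonalArray.__array_function__
```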
+1 for keeping the same CoC as Scipy; making a new thing just seems like a
bigger surface area to maintain. Personally, I already assumed Scipy's
"honour[ing] diversity in..." did not imply any protection of
behaviours that violate the CoC *itself*, but if you wanted to be
really explicit you could add "to the extent that these do not
conflict with this code of conduct." to that line.
I'm creating a structured numpy array in C, wrapping it with a MaskedArray
and passing this to a Python script:
PyObject *arr = PyArray_NewFromDescr(&PyArray_Type, descr, 1,
                                     &longCount, &pointSize, data,
                                     NPY_ARRAY_WRITEABLE, nullptr);
// Now wrap the created array in a MaskedArray.
PyObject *module = PyImport_ImportModule("numpy.ma");
PyObject *dict = PyModule_GetDict(module);               // borrowed reference
PyObject *o = PyDict_GetItemString(dict, "MaskedArray"); // borrowed reference
PyObject *args = PyTuple_New(1);
PyTuple_SetItem(args, 0, arr);  // steals the reference to arr
m_directArray = PyObject_CallObject(o, args);
Py_DECREF(args);
Py_DECREF(module);
This works fine, and the data is seen in Python as it was in C. What I'd
like is for changes made to the array in Python to be reflected in the
memory that was passed in. This seems to work in some cases. Given
[('X', '<f8'), ('Y', '<f8'), ('Z', '<f8'), ('OffsetTime', '<u4')]
I can do something like:
X = 10
in Python and the appropriate data is set in the C-allocated memory. However,
X *= 10
behaves differently, in that it seems to make a copy of the array IF the
mask values aren't all set to 0 in the source.
Is there something I can do to get operations on the MaskedArray to operate
on the data in place as would occur with a raw ndarray?
I spent some time going through the polynomial directory and found several
locations with comments suggesting potential improvements down the road,
TODOs, and a line of commented-out code. I wanted to share these here to
gather a consensus on which, if any, of these tasks should be done. If some
of these should not be done, do we want to remove the comments implying
changes should be made, or kick the can down the road?
In `numpy/polynomial/_polybase.py`, there is the following class method:
# TODO: we're stuck with disabling math formatting until we handle
# exponents in this function
Further down, in `_repr_latex_`, there is this comprehension, where the
leading comment is no longer true and the old conditional has been commented
out:
# filter out uninteresting coefficients
filtered_coeffs = [
for i, c in enumerate(self.coef)
# if not (c == 0) # handle NaN
If this behavior is correct, shouldn't we remove the leading comment and
the commented-out line, or at least leave an explanation as to why it is
commented out and we wish it to remain?
Further down there is also a comment in the `__div__` function implying
there may have been the intention of future modification:
def __div__(self, other):
# set to __floordiv__, /, for now.
There is also a comment, “made more efficient by using powers of two in the
usual way” in the various `pow` functions specified below:
chebyshev.py, line 862
hermite_e.py, line 679
hermite.py, line 630
laguerre.py, line 627
legendre.py, line 661
polynomial.py, line 471
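For reference, the "powers of two" remark presumably means exponentiation by repeated squaring. A sketch of what that could look like for the power-series basis (the function name is mine, and `np.convolve` stands in for `polymul` here):

```python
import numpy as np

def polypow_by_squaring(c, n):
    """Raise the polynomial with coefficient array `c` to the
    non-negative integer power `n` by binary (repeated-squaring)
    exponentiation, instead of n - 1 successive multiplications."""
    result = np.array([1.0])
    base = np.asarray(c, dtype=float)
    n = int(n)
    while n > 0:
        if n & 1:
            # This bit is set, so fold the current square into the result.
            result = np.convolve(result, base)
        base = np.convolve(base, base)  # square for the next bit
        n >>= 1
    return result
```

This replaces the linear chain of multiplications with O(log n) squarings plus up to O(log n) extra multiplies; whether the constant factors pay off at the degrees people actually use is exactly the question to settle before touching the `pow` functions.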
I am happy to make any or all of the enhancements, modify the
documentation, or review PRs. I appreciate advice or direction on any of
these.
I just joined the numpy mailing list to suggest an enhancement of the docs about writing binding code.
I hope this is the right place to discuss this.
So, this page is one of the first hits on Google when you search for "python bindings numpy". It is an important page for orientation. What I find missing is a longer article about pybind11.
pybind11 is currently the best tool on the market to wrap C++ code to Python. This is my professional opinion. When you look at the facts, it is hard to disagree. Pybind11 is based on the approach of Boost.Python, but is a compact project that doesn't require Boost and is developed independently. If you use it in a Python package, you can add it as a requirement and pip will happily install it. It doesn't require you to learn a new language, the bindings are generated using C++ meta-programming techniques under the hood. pybind11 has outstanding documentation and is extremely popular on github (3800+ stars). Refcounting is done automatically, which is why I would even use it to wrap C code. Naturally, it has excellent support for numpy arrays. Pybind11 is FOSS (BSD-style license).
I have used Cython, Boost.Python, SWIG, and pybind11 in small to large projects, and pybind11 is by far the most pleasant and the most powerful. You can do really sophisticated things in pybind11, which I cannot imagine doing with other binding tools, and most importantly, it never chokes over your C++ code. Cython and SWIG both have trouble with certain C++ idioms, which is not surprising because C++ is notoriously difficult to parse and these tools were primarily developed to wrap C (which is much easier to parse). For C++, it is much better to not add a custom parser to the toolchain and just let the C++ compiler generate the low-level binding code. This is what pybind11 does.
So for all these reasons and more, it should be mentioned and even highlighted here:
I am happy to write a section about it. Disclaimer: I am not at all affiliated with the pybind11 developers, just a thankful user.