Hi all,
from the discussion, I was thinking maybe something like this:
class B(np.ndarray):
    def __numpy_getitem__(self, index, indexing_method="plain"):
        # do magic.
        return super().__numpy_getitem__(
            index, indexing_method=indexing_method)
as new API. There are some issues, though. An old subclass may define
`__getitem__`. Now the behaviour that would seem nice to me is:
1. No new attribute (no `__numpy_getitem__`) and also no
`__getitem__`/`__setitem__`: should just work.
2. No new attribute, but the old attributes defined: should at
least give a warning (or an error) when the new attributes are
used, since the behaviour might be buggy.
3. `__numpy_getitem__` defined: all indexing will be channelled
through it (except perhaps some edge cases on Python 2). Ideally,
this would also avoid the trick of calling `__getitem__` from
within `__setitem__`. If you define both (which might make sense
for some edge cases), you should channel things through it
yourself.
Now the issue I have is that for 1. and 2. to work correctly, I need to
know which methods are overloaded by the subclass. Checking is a bit
tedious and the method I hacked first for getitem and setitem does not
work for a normal method.
Can anyone think of a nicer way to do this trick that does not require
quite as much hackery? Or is there an easy way to do the overloading
check?
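For the overloading check, one option that avoids most of the hackery is to compare the attribute the subclass actually resolves against ndarray's own implementation; since Python 3 no longer has unbound methods, an identity check on the class attribute works for special methods and normal methods alike. A minimal sketch (the helper name `overrides` is made up here):

```python
import numpy as np

def overrides(subclass, name):
    # True if `subclass` (or an intermediate base) redefines `name`
    # rather than inheriting ndarray's implementation.
    return getattr(subclass, name, None) is not getattr(np.ndarray, name, None)

class Plain(np.ndarray):
    pass

class Legacy(np.ndarray):
    def __getitem__(self, index):
        return super().__getitem__(index)

print(overrides(Plain, '__getitem__'))   # False
print(overrides(Legacy, '__getitem__'))  # True
```

This relies on class-level attribute access returning the underlying function or slot wrapper itself, so identity comparison is exact; it is only a sketch and does not cover exotic cases such as metaclass-provided attributes.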
- Sebastian

Hi All,
I would like to release NumPy 1.11.2rc1 this weekend. It will contain a few
small fixes and enhancements for Windows and for the latest SciPy release. If
there are any pending PRs that you think should go in or be backported for
this release, please speak up.
Chuck

Hi all,
What is the official spelling of NumPy/Numpy/numpy?
The documentation is not consistent and it mixes both NumPy and Numpy. For example, the reference manual uses both spellings in the introduction paragraph (http://docs.scipy.org/doc/numpy/reference/):
"This reference manual details functions, modules, and objects included in Numpy, describing what they are and what they do. For learning how to use NumPy, see also NumPy User Guide."
However, in all docs taken together "NumPy" is most frequently used (74%):
% find . -name "*.rst" -exec grep Numpy -ow {} \; | wc -l
161
% find . -name "*.rst" -exec grep NumPy -ow {} \; | wc -l
471
I also reported it as an issue: https://github.com/numpy/numpy/issues/7986
Yours,
Bartosz

> Date: Wed, 31 Aug 2016 13:28:21 +0200
> From: Michael Bieri <mibieri(a)gmail.com>
>
> I'm not quite sure which approach is state-of-the-art as of 2016. How would
> you do it if you had to make a C/C++ library available in Python right now?
>
> In my case, I have a C library with some scientific functions on matrices
> and vectors. You will typically call a few functions to configure the
> computation, then hand over some pointers to existing buffers containing
> vector data, then start the computation, and finally read back the data.
> The library also can use MPI to parallelize.
>
Depending on how minimal and universal you want to keep things, I use
the ctypes approach quite often, i.e. treat your NumPy inputs and
outputs as arrays of doubles etc., using the ndpointer(...) syntax. I
find it works well if you have a small number of well-defined
functions (not too many options) which are numerically very heavy.
With this approach I usually wrap each function in Python to check the
inputs for contiguity, pass in the sizes etc., and allocate the NumPy
array for the result.
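As a concrete illustration of that pattern, here is a minimal sketch: argument types are declared with `numpy.ctypeslib.ndpointer`, and a thin Python wrapper enforces dtype/contiguity and allocates the output. libc's `memcpy` stands in for the scientific routine, and loading the C runtime via `ctypes.CDLL(None)` assumes a POSIX system:

```python
import ctypes
import numpy as np
from numpy.ctypeslib import ndpointer

# memcpy from the C runtime stands in for the real C routine.
libc = ctypes.CDLL(None)
libc.memcpy.restype = ctypes.c_void_p
libc.memcpy.argtypes = [
    ndpointer(dtype=np.float64, flags="C_CONTIGUOUS"),  # dest
    ndpointer(dtype=np.float64, flags="C_CONTIGUOUS"),  # src
    ctypes.c_size_t,                                    # nbytes
]

def copy_array(src):
    # Thin wrapper: coerce to a contiguous float64 array, allocate
    # the result, and pass the size explicitly to the C function.
    src = np.ascontiguousarray(src, dtype=np.float64)
    out = np.empty_like(src)
    libc.memcpy(out, src, src.nbytes)
    return out
```

With a real library you would use `numpy.ctypeslib.load_library` instead of `CDLL(None)`; the ndpointer argtypes then reject arrays of the wrong dtype or layout at call time, which is most of the safety this approach provides.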
Peter

Hi,
Has anyone ever used f2py with the Cray ftn compiler driver? The compiler driver can drive Cray, Gnu, Intel Fortran compilers, including necessary libraries, via loaded modules.
Assuming that this has never been done, or that the existing code to do this is unavailable:
To use Cray ftn with f2py, do I need to change the source code under
numpy/distutils/fcompiler, as suggested by this blog post?
https://gehrcke.de/2014/02/building-numpy-and-scipy-with-intel-compilers-an…
I would need to create a new file, cray.py, under this directory,
containing classes for each of the Cray, Gnu and Intel compilers as
invoked by the ftn driver.
What other files would I need to change? How would I package tests? How
would I contribute the resulting code to NumPy?
All the best, Paul
--
Paul Leopardi https://sites.google.com/site/paulleopardi/

Hi All,
At the moment there are two error types raised when invalid axis arguments
are encountered: IndexError and ValueError. I prefer ValueError for
arguments; IndexError seems more appropriate when the bad axis value is
used as an index. In any case, having mixed error types is inconvenient,
but also inconvenient to change. Should we worry about that? If so, what
should the error be? Note that some of the mixup arises because the axis
values are not checked before use, in which case IndexError is raised.
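A small illustration of the situation: depending on the code path, an out-of-range axis may surface as either error type, so defensive callers end up catching both (the array shape below is arbitrary):

```python
import numpy as np

a = np.zeros((2, 3))
try:
    a.sum(axis=5)  # axis out of range for a 2-D array
    caught = None
except (IndexError, ValueError) as exc:
    # Catch both, since which one is raised depends on the operation.
    caught = type(exc).__name__
print(caught)
```

Catching `(IndexError, ValueError)` is the portable pattern as long as the error types remain mixed.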
Chuck

https://github.com/numpy/numpy/pull/7984
Hi everybody,
I created my first pull request for numpy and as mentioned in the numpy
development workflow documentation I hereby post a link to it and a short
description to the mailing list.
Please take a look.
I didn't find a good way to create a contour plot of data of the form:
[(x1, y1, f(x1, y1)), (x2, y2, f(x2, y2)), ..., (xn, yn, f(xn, yn))].
In order to do a contour plot, one has to bring the data into the meshgrid
format.
One possibility would be complicated sorting and reshaping of the data, but
this is not easily possible, especially if values are missing (not all
combinations of (x, y) are contained in the data).
Another way, which is used in all tutorials about contour plotting, is to
create the meshgrid beforehand and then apply the function to the meshgrid
matrices:
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-3, 3, n)
y = np.linspace(-3, 3, n)
X, Y = np.meshgrid(x, y)
Z = f(X, Y)
plt.contour(X, Y, Z)
But if one only has the data, not the function, this is not an option
either.
My function essentially creates a dictionary {(x1, y1): f(x1, y1), (x2,
y2): f(x2, y2), ..., (xn, yn): f(xn, yn)} with the coordinate tuples as
keys and function values as values. Then it creates a meshgrid from all
unique x and y coordinates (X and Y). The dictionary is then used to create
the matrix Z, filling in np.nan for all missing values. This makes it
possible to do the following, with x and y being the coordinates and z
the corresponding function values:
plt.contour(*meshgridify(x, y, f=z))
Maybe there is a simpler solution, but I didn't find one.
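A rough sketch of the idea (re-implemented here from the description above, not the code from the PR itself), with np.nan filled in for missing (x, y) combinations:

```python
import numpy as np

def meshgridify(x, y, f):
    # Map each (x, y) coordinate pair to its function value.
    lookup = {(xi, yi): fi for xi, yi, fi in zip(x, y, f)}
    # Build the grid from all unique coordinates.
    X, Y = np.meshgrid(np.unique(x), np.unique(y))
    # Fill Z from the dictionary, np.nan where a combination is missing.
    Z = np.full(X.shape, np.nan)
    for idx in np.ndindex(X.shape):
        Z[idx] = lookup.get((X[idx], Y[idx]), np.nan)
    return X, Y, Z
```

The returned triple can then be splatted straight into a contour call, e.g. `plt.contour(*meshgridify(x, y, f=z))`.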

Hello,
I have a mesh description (more exactly: just a bunch of nodes) with
values associated with the nodes in a file, e.g. for a 3x3 mesh:
0 0 10
0 0.3 11
0 0.6 12
0.3 0 20
0.3 0.3 21
0.3 0.6 22
0.6 0 30
0.6 0.3 31
0.6 0.6 32
What is the best way to read it in and get data structures like the ones I
get from np.meshgrid?
Of course, I know about np.loadtxt, but I'm having trouble getting the
resulting arrays (x, y, values) into the right form while retaining the
association with the values.
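One way to do it, assuming whitespace-separated columns x, y, value as shown above (the file contents are inlined via StringIO here purely for illustration):

```python
import numpy as np
from io import StringIO

# Stand-in for the actual file on disk.
data = StringIO(
    "0 0 10\n0 0.3 11\n0 0.6 12\n"
    "0.3 0 20\n0.3 0.3 21\n0.3 0.6 22\n"
    "0.6 0 30\n0.6 0.3 31\n0.6 0.6 32\n"
)
x, y, v = np.loadtxt(data, unpack=True)

# Unique coordinates define the grid axes; return_inverse gives each
# node's row/column position on that grid.
xu, ix = np.unique(x, return_inverse=True)
yu, iy = np.unique(y, return_inverse=True)
V = np.full((xu.size, yu.size), np.nan)
V[ix, iy] = v

# Coordinate matrices matching V, as np.meshgrid would produce them.
X, Y = np.meshgrid(xu, yu, indexing="ij")
```

Because missing nodes simply stay np.nan, this also works when the file does not contain every (x, y) combination.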
Thanks,
Florian