Is there a C-API function for numpy which implements Python's
multidimensional indexing? Say, I have a 2d-array
PyArrayObject * M;
and an index i.
How do I extract the i-th row M[i,:] or the i-th column M[:,i]?
I am looking for a function which again returns a PyArrayObject * and
which is a view into M (no copied data; the result should be another
PyArrayObject whose data pointer and strides point into the correct
memory portion of M).
I searched the API documentation, Google and mailing lists for quite a
long time but didn't find anything. Can you help me?
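For reference, this is the Python-level behaviour such a C function would have to reproduce: basic slicing returns views whose data pointer and strides refer back into M's buffer, with no copy. A small sketch (not an answer to the C-API question itself, just the semantics to match):

```python
import numpy as np

M = np.arange(12, dtype=np.int64).reshape(3, 4)

row = M[1, :]   # i-th row: offset = i * M.strides[0], strides = (M.strides[1],)
col = M[:, 1]   # i-th column: offset = i * M.strides[1], strides = (M.strides[0],)

# Both are views into M's memory, not copies.
assert np.may_share_memory(row, M) and np.may_share_memory(col, M)
assert row.strides == (M.strides[1],)
assert col.strides == (M.strides[0],)

# Writing through the view mutates M, confirming no data was copied.
row[0] = 99
assert M[1, 0] == 99
```

A C implementation would have to produce an array with the same offset and strides, with its base pointing at M.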
As far as I can tell, the expected behaviour of np.array(...) would be
that of np.array(list(...)), or something even nicer.
I would therefore like to request generator/iterator support for np.array(...)
to the same extent that list(...) supports it.
A more detailed reasoning behind this follows now.
In general it seems possible to identify iterators/generators as needed for
this purpose:

- someone has actually implemented this feature already (see )
- there are ``types.GeneratorType`` and ``collections.abc.Iterator``
  for ``isinstance(...)`` checks
- numpy can already distinguish them from all the other types which
  translate well into a numpy array
Given this, I think the general argument goes roughly as follows.

PROS (affecting maybe 10% of numpy users or more):

- more intuitive overall behaviour: array(...) = array(list(...)), roughly
- python3 compatibility (see e.g. #5951 <https://github.com/numpy/numpy/issues/5951>)
- compatibility with the analogous ``__builtin__`` functions (see e.g. #5756
  <https://github.com/numpy/numpy/issues/5756>)
- all of the above make numpy easier to use in an interactive style
  (e.g. ipython --pylab), where coding time matters even if computation time does not

CONS (affecting less than 0.1% of numpy users, I would guess):

- might break existing code
All of which, at least for me at this stage, speaks in favour of merging
the already existing implementation,
or something similar, into numpy master.
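For concreteness, here is the requested equivalence alongside the workaround that exists today: ``np.fromiter`` already consumes a generator directly, but only for 1-D numeric data with an explicit dtype.

```python
import numpy as np

gen = (i * i for i in range(5))

# The requested behaviour: np.array(gen) acting like np.array(list(gen)).
expected = np.array(list(i * i for i in range(5)))  # array([ 0,  1,  4,  9, 16])

# The existing workaround: np.fromiter consumes the generator directly,
# without building an intermediate list, but requires a dtype and 1-D data.
result = np.fromiter(gen, dtype=np.int64)

assert np.array_equal(result, expected)
```

The request is essentially that the first, intuitive spelling work as well as the second.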
I am pleased to announce the Numpy 1.11.0b2 release. The first beta
was a damp squib due to missing files in the released source files; this
release fixes that. The new source files may be downloaded from
sourceforge; no binaries will be released until the mingw toolchain
problems are sorted out.
Please test and report any problems.
I'm pleased to announce that Numpy 1.11.0b1
<http://sourceforge.net/projects/numpy/files/NumPy/1.11.0b1/> is now
available on sourceforge. This is a source release as the mingw32 toolchain
is broken. Please test it out and report any errors that you discover.
Hopefully we can do better with 1.11.0 than we did with 1.10.0 ;)
There are now 130 open numpy pull requests, and it seems almost impossible
to keep that number down. My personal decision is that I am going to ignore
any new enhancements for the next couple of months and only merge bug
fixes, tests, housekeeping (style, docs, deprecations), and old PRs. I
would also ask that other maintainers start taking care of
older PRs, either cleaning them up and merging them, or closing them.
I recently upgraded NumPy from 1.9.1 to 1.10.4 on Python 2.7.8 by using
pip. As always I specified the paths to Blas, Lapack and Atlas in the
respective environment variables. I used the same compiler I used to
compile both Python and the libraries (GCC 4.6.1). The problem is that
it always tries to get Blas symbols in the wrong library:
>>> import numpy
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
packages/numpy/__init__.py", line 180, in <module>
from . import add_newdocs
packages/numpy/add_newdocs.py", line 13, in <module>
from numpy.lib import add_newdoc
packages/numpy/lib/__init__.py", line 8, in <module>
from .type_check import *
packages/numpy/lib/type_check.py", line 11, in <module>
import numpy.core.numeric as _nx
packages/numpy/core/__init__.py", line 14, in <module>
from . import multiarray
packages/numpy/core/multiarray.so: undefined symbol: cblas_sgemm
I also tried to install from source instead of pip but no luck either.
The only way to get it to work is to downgrade to 1.9.1.
Any idea why?
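One way to narrow this down from Python is to check which shared library on the system actually exports ``cblas_sgemm``; a diagnostic sketch (the library names below are guesses, adjust them to the actual ATLAS/BLAS install):

```python
import ctypes
import ctypes.util

def libs_with_sgemm(names=("cblas", "tatlas", "satlas", "openblas", "blas")):
    """Return (name, path) pairs for libraries that export cblas_sgemm."""
    hits = []
    for name in names:
        path = ctypes.util.find_library(name)
        if path is None:
            continue  # library not found on this system
        try:
            lib = ctypes.CDLL(path)
        except OSError:
            continue  # found on disk but not loadable
        if hasattr(lib, "cblas_sgemm"):
            hits.append((name, path))
    return hits

print(libs_with_sgemm())
```

If none of the libraries that numpy was linked against show up here, the build picked a BLAS that does not provide the CBLAS interface, which would explain the undefined symbol.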
I have created an IQR function to add to the other dispersion metrics
such as standard deviation. I have described the purpose and nature of
the proposal in PR#7137, so I am pasting the text here as well:
This function is used in one place in numpy already (to compute the
Freedman-Diaconis histogram bin estimator) in addition to being
requested on Stack Overflow a couple of times:
It is also used in matplotlib for box and violin plots:
It is a very simple, common and robust dispersion estimator. There
does not appear to be an implementation for it anywhere in numpy or
This function is a convenience combination of `np.percentile` and
`np.subtract`. As such, it allows the difference between any two
percentiles to be computed, not necessarily (25, 75), which is the
default. All of the recent enhancements to percentile are used.
The documentation and testing is borrowed heavily from `np.percentile`.
Wikipedia Reference: https://en.wikipedia.org/wiki/Interquartile_range
The tests will not pass until the bug-fix for `np.percentile` kwarg
`interpolation='midpoint'` (#7129) is incorporated and this PR is
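Since the proposal describes the function as a convenience combination of `np.percentile` and `np.subtract`, a minimal sketch of that combination (the signature here is illustrative, not the actual PR code):

```python
import numpy as np

def iqr(a, rng=(25, 75), axis=None):
    """Interquartile range: the difference between two percentiles,
    by default the 25th and 75th."""
    lo, hi = np.percentile(a, rng, axis=axis)
    return np.subtract(hi, lo)

# For the values 1..100 with the default linear interpolation, the
# 25th/75th percentiles are 25.75 and 75.25, so the IQR is 49.5.
print(iqr(np.arange(1, 101)))  # 49.5
```

Passing a different `rng`, e.g. `(10, 90)`, gives the corresponding interdecile range instead.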
In my PR about warnings suppression, I currently also have a commit
which bumps the warning stacklevel to two (or three), i.e. uses
``warnings.warn(..., stacklevel=2)`` (almost) everywhere. This means
that, for example (taking only the empty-slice mean case), numpy
would not print:
RuntimeWarning: Mean of empty slice.
warnings.warn("Mean of empty slice.", RuntimeWarning)
but would instead point at the actual code line that called `np.mean()`
(the repetition of the warning command itself is always a bit funny).
The advantage is nicer printing for the user.
The disadvantage would probably mostly be that existing warning filters
that use the `module` keyword argument will fail.
Any objections or thoughts about making this change to better report
the offending code line? Frankly, I am not sure whether there is
a Python standard about this, but I would expect that for a library
such as numpy the change makes sense. If downstream code uses
warning filters with modules, however, we might want to reconsider.
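A toy stand-in (not numpy's actual code) illustrating what the bumped stacklevel does: with `stacklevel=2`, the warning is attributed to the caller's line rather than to the `warnings.warn(...)` line inside the library function.

```python
import warnings

def mean_sketch(a):
    # Toy stand-in for a library function such as np.mean.
    if len(a) == 0:
        # stacklevel=2 attributes the warning to the caller of mean_sketch,
        # so the user sees their own offending line, not this one.
        warnings.warn("Mean of empty slice.", RuntimeWarning, stacklevel=2)
        return float("nan")
    return sum(a) / len(a)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    mean_sketch([])  # the warning now points at this line

assert len(caught) == 1 and caught[0].category is RuntimeWarning
```

This also shows the downside mentioned above: once the warning is attributed to the caller's frame, filters keyed on the library's `module` pattern no longer match it.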
I'm trying to update the documentation for building Numpy from source, and
I've hit a brick wall in trying to build the library using OpenBLAS because
I can't seem to link the libopenblas.dll file. I tried following the
suggestion of placing the DLL in numpy/core as suggested here
<https://github.com/numpy/numpy/wiki/Mingw-static-toolchain#notes> but it
still doesn't pick it up. What am I doing wrong?
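For what it's worth, the usual way to point numpy's build at OpenBLAS is a ``site.cfg`` next to ``setup.py`` rather than copying the DLL into the package; a sketch (the paths are placeholders for wherever OpenBLAS is installed):

```ini
[openblas]
libraries = openblas
library_dirs = C:\opt\openblas\lib
include_dirs = C:\opt\openblas\include
```

Even with the build linked correctly, on Windows libopenblas.dll still has to be findable at runtime, e.g. via PATH.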