Is there a C-API function for numpy which implements Python's
multidimensional indexing? Say, I have a 2d-array
PyArrayObject * M;
and an index
how do I extract the i-th row M[i,:] or the i-th column M[:,i]?
I am looking for a function which gives again a PyArrayObject * and
which is a view to M (no copied data; the result should be another
PyArrayObject whose data and strides points to the correct memory
portion of M).
I searched the API documentation, Google and mailing lists for quite a
long time but didn't find anything. Can you help me?
As far as I can tell, the expected functionality of np.array(...) would be
np.array(list(...)) or something even nicer.
Therefore, I would like to request generator/iterator support for
np.array(...), to the same extent that list(...) supports it.
A more detailed reasoning behind this follows now.
In general it seems possible to identify iterators/generators as needed for
this purpose:
- someone actually implemented this feature already (see )
- there are ``types.GeneratorType`` and ``collections.abc.Iterator``
  for ``isinstance(...)`` checks
- numpy can already distinguish them from all other types, which are
  translated into a numpy array correctly
Given this, I think the general argument goes roughly like the following:

PROS (affecting maybe 10% of numpy users or more):
- more intuitive overall behaviour, array(...) = array(list(...)) roughly
- python3 compatibility (see e.g. #5951 <https://github.com/numpy/numpy/issues/5951>)
- compatibility with the analogous ``__builtin__`` functions (see e.g. #5756
  <https://github.com/numpy/numpy/issues/5756>)
- all of the above make numpy easier to use in an interactive style
  (e.g. ipython --pylab), where coding time matters more than computation time

CONS (affecting less than 0.1% of numpy users, I would guess):
- might break existing code
which in total, at least for me at this stage, speaks in favour of merging
the already existing implementation,
or something similar, into numpy master.
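To illustrate the status quo the request is about, here is a short sketch of the current workarounds (note that ``np.fromiter`` only supports 1-d output and requires an explicit dtype):

```python
import numpy as np

# Today, generators are not consumed by np.array(...); the usual
# workarounds are materializing with list(...) or using np.fromiter:
a = np.array(list(i * i for i in range(5)))              # via list(...)
b = np.fromiter((i * i for i in range(5)), dtype=np.intp)  # 1-d only

print(a.tolist())  # [0, 1, 4, 9, 16]
print(b.tolist())  # [0, 1, 4, 9, 16]
```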
quite often I work with block matrices. Matlab offers the convenient notation
[ a b; c d ]
to stack matrices. The numpy equivalent is kinda clumsy:
I wrote the little function `stack` that does exactly that:
stack([[a, b], [c, d]])
In my case `stack` replaced `hstack` and `vstack` almost completely.
If you're interested in including it in numpy, I created a pull request.
I'm looking forward to getting some feedback!
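For readers of the archive, a minimal sketch of what such a `stack` helper could look like (this is my reconstruction, not the code from the pull request):

```python
import numpy as np

def stack(arrays):
    # sketch of the proposed helper: each inner list is a row of blocks
    # joined with hstack, and the resulting rows are joined with vstack,
    # mirroring Matlab's  [ a b; c d ]
    return np.vstack([np.hstack(row) for row in arrays])

a = np.ones((2, 2)); b = np.zeros((2, 2))
c = np.zeros((2, 2)); d = np.ones((2, 2))
print(stack([[a, b], [c, d]]).shape)  # (4, 4)
```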
I've put up a pull request implementing a new function, np.moveaxis, as an
alternative to np.transpose and np.rollaxis:
This functionality has been discussed (even the exact function name)
several times over the years, but it never made it into a pull request. The
most pressing issue is that the behavior of np.rollaxis is not intuitive to
most users.
In this pull request, I also allow the source and destination axes to be
sequences as well as scalars. This does not add much complexity to the
code, solves some additional use cases and makes np.moveaxis a proper
generalization of the other axes manipulation routines (see the pull
requests for details).
Best of all, it already works on ndarray duck types (like masked array and
dask.array), because they have already implemented transpose.
I think np.moveaxis would be a useful addition to NumPy -- I've found
myself writing helper functions with a subset of its functionality several
times over the past few years. What do you think?
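A quick illustration of the proposed semantics, assuming the function behaves as described above (shapes only; the axis numbers are illustrative):

```python
import numpy as np

# np.moveaxis moves the listed axes to their new positions and leaves
# the remaining axes in their original order.
x = np.zeros((3, 4, 5))
print(np.moveaxis(x, 0, -1).shape)             # (4, 5, 3)
print(np.moveaxis(x, [0, 1], [-1, -2]).shape)  # (5, 4, 3)
# equivalent to np.rollaxis(x, 2), but easier to reason about:
print(np.moveaxis(x, 2, 0).shape)              # (5, 3, 4)
```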
I made numpy master (numpy-1.11.0.dev0) Windows binary wheels
available for testing.
Install it with pip:
> pip install -i https://pypi.anaconda.org/carlkl/simple numpy
These builds are compiled against OpenBLAS trunk for BLAS/LAPACK support,
using the mingwpy compiler toolchain.
OpenBLAS is deployed within the numpy wheels. To be performant on all common
CPU architectures, OpenBLAS is configured with its 'dynamic architecture'
mode and automatic CPU detection.
This version of numpy fakes long double as double just like the MSVC builds.
Some test statistics:
win32 (32 bit)
numpy-1.11.0.dev0, python-2.6: errors=8, failures=1
numpy-1.11.0.dev0, python-2.7: errors=8, failures=1
numpy-1.11.0.dev0, python-3.3: errors=9
numpy-1.11.0.dev0, python-3.4: errors=9
numpy-1.11.0.dev0, python-2.6: errors=9, failures=6
numpy-1.11.0.dev0, python-2.7: errors=9, failures=6
numpy-1.11.0.dev0, python-3.3: errors=10, failures=6
numpy-1.11.0.dev0, python-3.4: errors=10, failures=6
F2py is a great tool, but my impression is that it is being left behind
by the evolution of Fortran from F90 onward. This is unfortunate; it
would be nice to be able to easily wrap new Fortran libraries.
I'm curious: has anyone been looking into what it would take to enable
f2py to handle modern Fortran in general? And into prospects for
getting such an effort funded?
Suppose that I have a vector with the numerical solution of a
differential equation -- more concretely, I am working with evolutionary
game theory models, and the solutions are frequencies of types in a
population that follows the replicator dynamics; but this is probably
beside the point here.
Sometimes these solutions are cyclical, yet I sample at points which do
not correspond with the period of the cycle, so that np.allclose()
cannot be directly applied.
Is there any way to check for cycles in this situation?
Thanks for any advice,
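Not from the thread, but one possible approach as a sketch: estimate the dominant period from the FFT power spectrum (this assumes uniform sampling and a reasonably clean single cycle), then compare the signal against a copy shifted by that period:

```python
import numpy as np

def estimate_period(x):
    # Sketch: take the dominant cycle length from the peak of the FFT
    # power spectrum (in samples; assumes uniform sampling).
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                      # remove the DC component
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size)
    k = power[1:].argmax() + 1            # skip the zero-frequency bin
    return 1.0 / freqs[k]

x = np.sin(2 * np.pi * np.arange(1000) / 50)
p = int(round(estimate_period(x)))        # 50
print(np.allclose(x[:-p], x[p:], atol=1e-8))  # True for a clean cycle
```

Once the period is known, np.allclose on the shifted signal plays the role the original poster wanted.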
TL;DR: There's a pending pull request deprecating some behaviour I find
unexpected. Does anyone object?
Some time ago I noticed that numpy yields unexpected results in some very
specific cases. An array can be used to index multiple elements along a
single dimension:
>>> a = np.arange(8).reshape((2,2,2))
>>> a[ np.array([[0, 1], [1, 0]]) ]
Nonetheless, if a nested list is used instead, it is (unexpectedly)
transformed into a tuple, resulting in indexing across multiple dimensions:
>>> a[ [[0, 1], [1, 0]] ]
I.e., it is interpreted as:
>>> a[ [0, 1], [1, 0] ]
Or, what is the same:
>>> a[ ([0, 1], [1, 0]) ]
I've been informed that there's a pending pull request that deprecates this
behaviour, which could in the future be reverted to what is expected (at
least what I expect) from the documentation (except for an obscure note in ).
The discussion leading to this mail can be found here.
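Whatever happens with the deprecation, the intent can be spelled out explicitly today; either reading has an unambiguous form (the index values here are illustrative):

```python
import numpy as np

a = np.arange(8).reshape((2, 2, 2))
ind = [[0, 1], [1, 0]]

# explicit forms that mean the same thing in every NumPy version:
by_array = a[np.asarray(ind)]  # fancy index on the first dimension only
by_tuple = a[tuple(ind)]       # index across the first two dimensions

print(by_array.shape)  # (2, 2, 2, 2)
print(by_tuple.shape)  # (2, 2)
```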
"And it's much the same thing with knowledge, for whenever you learn
something new, the whole world becomes that much richer."
-- The Princess of Pure Reason, as told by Norton Juster in The Phantom
Tollbooth
Hi Fedora users,
Python distutils on Fedora 23 is configured for hardening, hence
`redhat-rpm-config` is a dependency if you want to build numpy, scipy, etc.
The symptom is a "broken toolchain" error. A bug is open for this and it
might get fixed on the Python end, but I don't expect anything soon.
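Until then, the workaround implied above is to install the missing package before building (assuming a stock Fedora 23 system; only `redhat-rpm-config` is named in the post):

```shell
# install the hardening config that distutils expects (Fedora 23)
sudo dnf install redhat-rpm-config

# after that, source builds should get past the "broken toolchain" error
pip install numpy
```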