Is there a C-API function for numpy which implements Python's
multidimensional indexing? Say I have a 2-D array
PyArrayObject * M;
and an index i,
how do I extract the i-th row M[i,:] or the i-th column M[:,i]?
I am looking for a function which gives again a PyArrayObject * and
which is a view to M (no copied data; the result should be another
PyArrayObject whose data and strides point to the correct memory
portion of M).
I searched the API documentation, Google and mailing lists for quite a
long time but didn't find anything. Can you help me?
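The view semantics being asked for are exactly what NumPy's Python-level basic slicing produces; a quick Python sketch of the behaviour the C-level result should match (illustrative only, not the C-API answer):

```python
import numpy as np

M = np.arange(12, dtype=np.float64).reshape(3, 4)

row = M[1, :]    # i-th row: a view, no copied data
col = M[:, 2]    # i-th column: also a view

assert np.shares_memory(M, row) and np.shares_memory(M, col)

# Writing through the view is visible in M, confirming shared data:
row[0] = 99.0
assert M[1, 0] == 99.0

# The views differ from M only in shape and strides:
print(M.strides, row.strides, col.strides)
```

The same relationship should hold at the C level: the result's data pointer offsets into M's buffer and its strides are taken from M.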
I'm trying to do something that at first glance I think should be simple
but I can't quite figure out how to do it. The problem is as follows:
I have a 3D grid Values[Nx, Ny, Nz]
I want to slice Values at a 2D surface in the Z dimension specified by
Z_index[Nx, Ny] and return a 2D slice[Nx, Ny].
It is not as simple as Values[:,:,Z_index].
I tried this:
>>> values.shape
(4, 5, 6)
>>> slice = values[:,:,coords]
>>> slice.shape
(4, 5, 4, 5)
>>> slice = np.take(values, coords, axis=2)
>>> slice.shape
(4, 5, 4, 5)
Obviously I could create an empty 2D slice and fill it point by point
with np.ndenumerate, selecting values[i, j, Z_index[i, j]], but that
just seems too inefficient and not very pythonic.
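For what it's worth, this per-point selection can be vectorized with broadcast index arrays (or np.take_along_axis on NumPy >= 1.15); a sketch using the (4, 5, 6) shapes from above:

```python
import numpy as np

Nx, Ny, Nz = 4, 5, 6
rng = np.random.default_rng(0)
values = np.arange(Nx * Ny * Nz).reshape(Nx, Ny, Nz)
Z_index = rng.integers(0, Nz, size=(Nx, Ny))

# Broadcast index arrays: ii has shape (Nx, 1), jj has shape (1, Ny),
# so together with Z_index (Nx, Ny) they broadcast to an (Nx, Ny) result
# where slice2d[i, j] == values[i, j, Z_index[i, j]].
ii, jj = np.ix_(np.arange(Nx), np.arange(Ny))
slice2d = values[ii, jj, Z_index]

# Equivalent one-liner on NumPy >= 1.15:
slice2d_alt = np.take_along_axis(values, Z_index[..., None], axis=2)[..., 0]

assert slice2d.shape == (Nx, Ny)
assert np.array_equal(slice2d, slice2d_alt)
```

Unlike `values[:,:,coords]`, the broadcast form pairs each (i, j) with exactly one z, so no (4, 5, 4, 5) outer product appears.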
On Wed, 01 May 2013, Sebastian Berg wrote:
> > btw -- is there something like panda's vbench for numpy? i.e. where
> > it would be possible to track/visualize such performance
> > improvements/hits?
> Sorry if it seemed harsh, but I only skimmed the mails and it seemed a bit
> like an obvious piece was missing... There are no benchmark tests I
> am aware of. You can try:
> a = np.random.random((1000, 1000))
> and then time a.sum(1) and a.sum(0). On 1.7 the fast axis (1) is only
> slightly faster than the sum over the slow axis. On earlier numpy
> versions you will probably see something like half the speed for the
> slow axis (only got ancient or 1.7 numpy right now, so reluctant to give
> exact timings).
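The suggested comparison can be scripted along these lines (the repetition count is an arbitrary choice; absolute timings are machine-dependent):

```python
import timeit
import numpy as np

a = np.random.random((1000, 1000))

# For a C-ordered array, axis=1 sums along the contiguous (fast) axis
# and axis=0 along the strided (slow) axis.
t_fast = timeit.timeit(lambda: a.sum(1), number=50)
t_slow = timeit.timeit(lambda: a.sum(0), number=50)

print(f"sum(1): {t_fast:.4f}s   sum(0): {t_slow:.4f}s")
```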
FWIW -- just as a crude first attempt, look at why the float16 case is
so special?
I have pushed this really coarse setup (based on some elderly copy of
pandas' vbench) to
if you care to tune it up/extend it, and then I could fire it up again on
that box (which doesn't do anything else ATM AFAIK). Since the majority
of the time is spent actually building it (did it with ccache though), it
would be neat if you came up with more benchmarks to run that you think
could be interesting/important.
Yaroslav O. Halchenko, Ph.D.
Senior Research Associate, Psychological and Brain Sciences Dept.
Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419
Hello list. I've run into two SVD errors over the last few days. Both
errors are identical in numpy/scipy.
I've submitted a ticket for the 1st problem (numpy ticket #990). Summary
is: some builds of the lapack_lite module linking against system LAPACK
(not the bundled dlapack_lite.o, etc) give a "LinAlgError: SVD did not
converge" exception on my matrix. This error does occur using Mac's
Accelerate framework LAPACK, and a coworker's Ubuntu LAPACK version. It
does not seem to happen using ATLAS LAPACK (nor using Octave/Matlab on
Just today I've come across a negative singular value cropping up in an
SVD of a different matrix. This error does occur on my ATLAS LAPACK based
numpy, as well as on the Ubuntu setup. And once again, it does not happen
I'm using numpy 1.3.0.dev6336 -- don't know what the Ubuntu box is running.
Here are some npy files for the two different cases:
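Without those files at hand, a small stand-in harness that checks both reported failure modes (non-convergence and negative singular values) might look like this; the random matrix is only a placeholder for the problem matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 30))   # placeholder for the problem matrix

try:
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
except np.linalg.LinAlgError as err:
    # Failure mode 1: "SVD did not converge"
    print("SVD failed:", err)
else:
    # Failure mode 2: singular values must be non-negative
    assert np.all(s >= 0), "negative singular value returned"
    # Sanity check: the factorization reconstructs A
    assert np.allclose(U @ np.diag(s) @ Vt, A)
    print("SVD OK, smallest singular value:", s.min())
```

Swapping the random matrix for `np.load(...)` of the posted files would turn this into a direct reproducer.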
Most of the PRs that were mentioned as desirable for the 1.8 release have
been merged, or look to be merged in the next week or two. The current list
doesn't look too severe to me, but I suspect that it is incomplete. The major
outstanding issue I see is datetime and I'd like to get a small group
together to work that out. As a start I think such a group should include
Christopher Barker and Wes McKinney. Suggestions for other folks, or even
volunteers, to be part of such a group are welcome.
A lot of stuff has piled up over the last year for inclusion in the 1.8
release, and I'm sure some bugs and regressions have crept in along with
the new code. On that account a lengthy wringing-out period is probably
going to be needed, so the sooner we can get the 1.8 branch tagged the
better. I'd like to shoot for getting that done in 2-3 weeks. If there look
to be difficult issues remaining, perhaps they can be worked out at
scipy2013 if there is enough interest.
The current numpy master has deprecated the use of non-integers for
indexing (not fancy indexing yet). However, I think this should be moved further
down in the numpy machinery which means that the conversion utils
provided by numpy would generally raise warnings for non-integers.
This means that for most numpy functions such as reshape, etc. the use
of non-integers for arguments that are naturally integers is deprecated,
which may also affect third party code in principle.
Testing this against current SciPy apparently causes 56 failures (with
deprecation warnings being raised). Are other projects similarly affected
by this change?
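A minimal way to see what the deprecation targets (the exact behaviour depends on the NumPy version: a DeprecationWarning on the 1.7-era branch, a hard TypeError on later releases):

```python
import warnings
import numpy as np

a = np.arange(6)

# A non-integer dimension in reshape is exactly the kind of argument
# this change deprecates.  Treat the warning as an error so that both
# the warning-era and the error-era behaviour land in the same branch.
with warnings.catch_warnings():
    warnings.simplefilter("error", DeprecationWarning)
    try:
        a.reshape((3.0, 2))          # non-integer shape entry
        outcome = "allowed"
    except (DeprecationWarning, TypeError):
        outcome = "rejected"

print(outcome)
```

Third-party code passing floats where shapes or indices are expected would trip the same path.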
The branch implementing this can be found in the PR
I noticed that genfromtxt() does not skip comments if the keyword names is
not True. If names is True, then genfromtxt() takes the first line as
the names. I am proposing a fix to genfromtxt() that skips all of the
comments in a file, and potentially using the last comment line for names.
This will allow reading files with and without comments and/or names.
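For illustration, the documented case the fix builds on — a comment-prefixed first line supplying the names — looks like this (sample data made up for the example):

```python
import io
import numpy as np

# names=True reads the field names from the first line; that line may be
# preceded by the comment delimiter, which genfromtxt strips.
text = "# x y z\n1 2 3\n4 5 6\n"

data = np.genfromtxt(io.StringIO(text), names=True)
print(data.dtype.names)   # field names taken from the comment line
print(data['x'])          # first column as a named field
```

The proposed change extends this so that any number of leading comment lines is skipped, with the last one optionally providing the names.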
The difference is here:
p.s. insert some disclaimer about my first pull request