Is there a C-API function for numpy that implements Python's
multidimensional indexing? Say I have a 2d-array
PyArrayObject * M;
and an index i:
how do I extract the i-th row M[i,:] or the i-th column M[:,i]?
I am looking for a function that again gives a PyArrayObject * and
that is a view into M (no copied data; the result should be another
PyArrayObject whose data and strides point into the correct memory
portion of M).
I searched the API documentation, Google and the mailing lists for quite a
long time but didn't find anything. Can you help me?
Hello list. I've run into two SVD errors over the last few days. Both
errors are identical in numpy/scipy.
I've submitted a ticket for the first problem (numpy ticket #990). Summary
is: some builds of the lapack_lite module linking against the system LAPACK
(not the bundled dlapack_lite.o, etc.) give a "LinAlgError: SVD did not
converge" exception on my matrix. This error does occur using Mac's
Accelerate framework LAPACK, and a coworker's Ubuntu LAPACK version. It
does not seem to happen using ATLAS LAPACK (nor using Octave/Matlab on
the same matrix).
Just today I've come across a negative singular value cropping up in an
SVD of a different matrix. This error does occur on my ATLAS-LAPACK-based
numpy, as well as on the Ubuntu setup. And once again, it does not happen
in Octave/Matlab.
I'm using numpy 1.3.0.dev6336 -- don't know what the Ubuntu box is running.
Here are some npy files for the two different cases:
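For anyone wanting to probe their own LAPACK build, a minimal sanity check for both reported failure modes (non-convergence raises LinAlgError; a broken build can also return out-of-spec singular values) might look like this, using a reproducible random matrix rather than the posted npy files:

```python
import numpy as np

# Reproducible test matrix (stand-in for the attached npy files).
rng = np.random.RandomState(0)
A = rng.rand(50, 30)

# Raises numpy.linalg.LinAlgError("SVD did not converge") on a bad build.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

assert np.all(s >= 0)            # singular values must be non-negative
assert np.all(np.diff(s) <= 0)   # and returned in decreasing order
assert np.allclose(A, U @ (s[:, None] * Vt))   # reconstruction check
```

If the decomposition itself succeeds but the sign or ordering assertions fail, that points at the underlying LAPACK rather than at numpy's wrapper.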
My question is about reading Fortran binary files (oh no, this question
again). Until now, I was using the struct module's unpack, like this:

from struct import unpack

def read_fortran_record(f, arrays, fourBeginning=True, fourEnd=True):
    """Reading a Fortran binary file in little-endian"""
    if fourBeginning: f.seek(4, 1)   # skip the leading 4-byte record marker
    for array in arrays:
        for elt in xrange(array.size):
            # assuming 4-byte little-endian reals
            array.flat[elt] = unpack('<f', f.read(4))[0]
    if fourEnd: f.seek(4, 1)         # skip the trailing 4-byte record marker
After googling, I read that fopen and npfile were deprecated and that we
should use numpy.fromfile and ndarray.tofile, but despite the documentation,
the cookbook, the mailing list and Google, I haven't succeeded in making a
simple example work. Considering the simple Fortran code below, what is the
Python script to read the four arrays? And what if my PC is little-endian
but the file is big-endian?
I think it would be a good idea to put the Fortran array-writing code and
the Python array-reading script in the cookbook, and maybe a page to help
people coming from Fortran get started with Python.
program makeArray
  integer, parameter :: nx = 10, ny = 10   ! sizes assumed; original values elided
  real :: ux(nx,ny), uy(nx,ny), p(nx,ny)
  integer :: i,j
  do i = 1,nx
     do j = 1,ny
        ux(i,j) = real(i*j)
        uy(i,j) = real(i)/real(j)
        p (i,j) = real(i) + real(j)
     end do
  end do
end program makeArray
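One answer, sketched under the assumption of gfortran-style sequential unformatted records (each record is a 4-byte length marker, the raw payload, then the same 4-byte marker) with 4-byte reals: read the markers and payload with numpy.fromfile and reshape with order='F' for Fortran's column-major layout. The file and array names here are made up for the example.

```python
import numpy as np
import os, tempfile

nx, ny = 10, 10

# Build a fake single-record Fortran file to read back (little-endian,
# 4-byte reals, 4-byte record markers -- the usual gfortran convention).
payload = np.arange(nx * ny, dtype='<f4')
marker = np.array([payload.nbytes], dtype='<i4')
fname = tempfile.NamedTemporaryFile(suffix='.bin', delete=False).name
with open(fname, 'wb') as f:
    f.write(marker.tobytes() + payload.tobytes() + marker.tobytes())

with open(fname, 'rb') as f:
    reclen = int(np.fromfile(f, dtype='<i4', count=1)[0])  # leading marker
    ux = np.fromfile(f, dtype='<f4', count=nx * ny)
    ux = ux.reshape((nx, ny), order='F')    # column-major, as Fortran wrote it
    np.fromfile(f, dtype='<i4', count=1)    # consume the trailing marker
os.remove(fname)

assert reclen == nx * ny * 4
assert np.array_equal(ux.ravel(order='F'), payload)
```

For a big-endian file on a little-endian PC, swap the dtype prefixes to '>i4' and '>f4'; the byte order is part of the dtype, so nothing else changes.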
It seems it's possible, using e.g.

In : dtype([('foo', str)])
Out: dtype([('foo', '|S0')])

to get yourself a zero-length string. However, dtype('|S0') results in
a TypeError: data type not understood.
I understand the stupidity of creating a 0-length string field but
it's conceivable that it's accidental.
For example, it could lead to a situation where you've created that
field, are missing all the data you had meant to put in it, serialize
with np.save, and upon np.load aren't able to get _any_ of your data
back because the dtype descriptor is considered bogus (can you guess
why I thought of this scenario?).
It seems that either dtype(str) should do something more sensible than a
zero-length string, or it should be possible to create one with
dtype('|S0'). Which should it be?
I am facing an issue upgrading numpy from 1.5.1 to 1.6.1.
In numpy 1.6, the casting behaviour for ufuncs has changed and become
stricter. Can someone advise how to implement the simple example below,
which worked in 1.5.1 but fails in 1.6.1?
>>> import numpy as np
>>> def add(a, b):
...     return a + b
...
>>> uadd = np.frompyfunc(add, 2, 1)
>>> uadd
<ufunc 'add (vectorized)'>
>>> uadd.accumulate([1, 2, 3])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: could not find a matching type for add (vectorized).accumulate,
requested type has type code 'l'
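A workaround that is commonly suggested for this (sketched here, not an official migration note): ufuncs built by frompyfunc only have an object-dtype loop, so under 1.6's stricter casting the reduction has to be told to use the object dtype explicitly:

```python
import numpy as np

def add(a, b):
    return a + b

uadd = np.frompyfunc(add, 2, 1)

# frompyfunc-created ufuncs operate on object arrays, so accumulate
# needs an explicit object dtype instead of inferring 'l' (long):
result = uadd.accumulate(np.arange(10), dtype=object)

assert list(result) == list(np.cumsum(np.arange(10)))
```

The result comes back as an object array; call .astype(...) afterwards if a numeric dtype is needed.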
(when replying, please Cc me)
There is a module, micronumpy, that has appeared in the PyPy source tree:
The great contributions of Justin Peel and Ilya Osadchiy to the micronumpy
module are reviving, step by step, the functionality of numpy.
It would be great if some numpy gurus could jump in to assist, contribute
some code, and perhaps offer guidance where things go deep into numpy internals.
For those who don't yet know much about PyPy:
PyPy is a fast implementation of Python 2.7.
As a rule of thumb, PyPy is currently about 4x faster than CPython.
Some real-world benchmarks show a 20x speed-up or more.
The performance successes of PyPy are very remarkable: http://speed.pypy.org/
Using nditer, is it possible to manually handle dimensions with different
sizes? For example, let's say I had an array A[5, 100] and I wanted to
sample every 10th element along the second axis, so I would end up with an
array B[5, 10]. Is it possible to do this with nditer, handling the
iteration over the second axis manually, of course (probably in Cython)?
I want something like this (modified from an nditer example):
cdef np.ndarray[double] x
cdef np.ndarray[double] y
cdef int size
cdef double value
cdef int j

axeslist = list(range(arr.ndim))
axeslist[1] = -1               # handle the second axis manually
out = zeros((arr.shape[0], 10))
it = np.nditer([arr, out], flags=['reduce_ok', 'external_loop', 'buffered'],
               op_flags=[['readonly'], ['readwrite', 'no_broadcast']],
               op_axes=[None, axeslist])
it.operands[1][...] = 0
for xarr, yarr in it:
    x = xarr
    y = yarr
    size = x.shape[0]
    j = 0
    for i in range(size):
        pass  # some magic here involving indexing into x[i] and y[j]
Does this make sense? Is it possible to do?
Is this the intended behavior?
>>> from numpy import matlib
>>> m = matlib.reshape([1,2],(2,1))
>>> type(m)
<type 'numpy.ndarray'>
For any 2d shape, I expected a matrix.
(And probably an exception if the shape is not 2d.)
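A likely explanation (an inference from matlib's source, not a documented guarantee): numpy.matlib only overrides a handful of constructors (empty, zeros, ones, eye, rand, ...), so matlib.reshape falls through to plain numpy.reshape and returns an ndarray. Wrapping the result explicitly is one workaround:

```python
import numpy as np

# matlib.reshape is just numpy.reshape re-exported, hence the ndarray result.
m = np.reshape([1, 2], (2, 1))
assert type(m) is np.ndarray and not isinstance(m, np.matrix)

# Workaround: convert explicitly to get matrix semantics back.
mm = np.asmatrix(m)
assert isinstance(mm, np.matrix)
assert mm.shape == (2, 1)
```

asmatrix shares the underlying data rather than copying it, so the wrap is cheap.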