Is there a C-API function for numpy which implements Python's
multidimensional indexing? Say, I have a 2d array

PyArrayObject *M;

and an index i. How do I extract the i-th row M[i,:] or the i-th
column M[:,i]?
I am looking for a function which again gives a PyArrayObject * that
is a view into M (no copied data; the result should be another
PyArrayObject whose data pointer and strides point to the correct
memory portion of M).
I searched the API documentation, Google and mailing lists for quite a
long time but didn't find anything. Can you help me?
Hello list. I've run into two SVD errors over the last few days. Both
errors are identical in numpy and scipy.
I've submitted a ticket for the first problem (numpy ticket #990). Summary:
some builds of the lapack_lite module linked against a system LAPACK
(not the bundled dlapack_lite.o, etc.) give a "LinAlgError: SVD did not
converge" exception on my matrix. This error does occur using Mac's
Accelerate framework LAPACK and a coworker's Ubuntu LAPACK version. It
does not seem to happen using ATLAS LAPACK (nor using Octave/Matlab).
Just today I've come across a negative singular value cropping up in an
SVD of a different matrix. This error does occur on my ATLAS-LAPACK-based
numpy, as well as on the Ubuntu setup. And once again, it does not happen
in Octave/Matlab.
I'm using numpy 1.3.0.dev6336 -- don't know what the Ubuntu box is running.
Here are some npy files for the two different cases:
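(The .npy files themselves are not reproduced here.) A minimal sanity check for both reported symptoms, run on a random matrix rather than the problematic ones, might look like:

```python
import numpy as np

# LAPACK's SVD should return non-negative singular values in
# descending order; a convergence failure raises LinAlgError.
rng = np.random.RandomState(0)
A = rng.randn(50, 30)

s = np.linalg.svd(A, compute_uv=False)   # raises LinAlgError on non-convergence
assert (s >= 0).all()                    # symptom 2: no negative singular values
assert (np.diff(s) <= 0).all()           # sorted largest-first

# Reconstruction check via the full decomposition
U, s2, Vt = np.linalg.svd(A)
assert np.allclose(A, (U[:, :30] * s2) @ Vt)
```

On the matrices from the tickets, the first assertion is exactly what fails on the affected LAPACK builds.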
My question is about reading a Fortran binary file (oh no, this question again).
Until now, I was using the struct module's unpack like this:

from struct import unpack

def read_record(f, arrays, fourBeginning=True, fourEnd=True):
    """Reading a Fortran binary file in little-endian"""
    if fourBeginning: f.seek(4, 1)  # skip the leading 4-byte record marker
    for array in arrays:
        for elt in range(array.size):
            array.flat[elt] = unpack('<f', f.read(4))[0]  # assuming 4-byte reals
    if fourEnd: f.seek(4, 1)        # skip the trailing 4-byte record marker
After googling, I read that fopen and npfile were deprecated and that we should
use numpy.fromfile and ndarray.tofile, but despite the documentation, the
cookbook, the mailing list and Google, I don't succeed in making a simple
example work. Considering the simple Fortran code below, what is the Python
script to read the four arrays? What about if my PC is little-endian and the
file is big-endian?
I think it would be a good idea to put the Fortran array-writing code and
the Python array-reading script in the cookbook, and maybe add a page to help
people coming from Fortran get started with Python.
program makeArray
   integer, parameter :: nx = 2, ny = 3   ! placeholder sizes
   real :: ux(nx,ny), uy(nx,ny), p(nx,ny)
   integer :: i,j
   do i = 1,nx
      do j = 1,ny
         ux(i,j) = real(i*j)
         uy(i,j) = real(i)/real(j)
         p (i,j) = real(i) + real(j)
      end do
   end do
   open(10, file='fields.bin', form='unformatted')  ! placeholder name
   write(10) ux, uy, p
end program makeArray
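Assuming the Fortran program writes the arrays with a single unformatted write (so one 4-byte record marker at each end), a sketch using numpy.fromfile; the function name, record layout and 4-byte-real dtype are my assumptions:

```python
import numpy as np

def read_fortran_arrays(fname, nx, ny, n_arrays, dtype='<f4'):
    """Read n_arrays (nx, ny) real arrays stored in a single Fortran
    unformatted record, skipping the 4-byte record markers."""
    with open(fname, 'rb') as f:
        f.seek(4, 1)                        # leading record marker
        arrays = []
        for _ in range(n_arrays):
            flat = np.fromfile(f, dtype=dtype, count=nx * ny)
            # Fortran stores arrays column-major, hence order='F'
            arrays.append(flat.reshape((nx, ny), order='F'))
        f.seek(4, 1)                        # trailing record marker
    return arrays
```

Endianness is handled by the dtype string: for a big-endian file on a little-endian PC, pass dtype='>f4' and numpy byte-swaps transparently.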
The issue tracking discussion seems to have died. Since GitHub issues looks
to be a viable alternative at this point, I propose to turn it on for the
numpy repository and start directing people there.
[Manual PR notification]
---------- Forwarded message ----------
Date: Sat, Jun 9, 2012 at 10:13 PM
Subject: [numpy] ENH: Initial implementation of a 'neighbor' calculation (#303)
To: njsmith <njs(a)pobox.com>
Each element is assigned the result of a function based on its neighbors.
Neighbors are selected based on a weight array.
It uses the new pad routines to pad arrays if neighboring values are
required that would be off the edge of the input array.
It will be great to have the masked array work settled, because right now
you can only sort of exclude values from the neighborhood using a zero in
the weight array. Zero or np.IGNORE don't affect np.sum, but functions
like np.mean and np.std would give different answers. Because of this,
my early implementations of neighbor included an optional mask array
along with the weight array, but I decided it would be best to wait for
the new masked arrays.
This could in some ways be considered a generalization of a
convolution, and comparisons with existing numpy/scipy convolution
results are included in the tests. The advantage of neighbor is that
any function that accepts a 1-d array and returns a single result
can be used, whereas convolution is limited to summation. The
convolution functions require the weight array to be flipped to get
the same answer as neighbor.
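As an illustration only (this is not the PR's actual code), the described behaviour can be sketched with np.pad; the function name and the edge-padding choice are mine:

```python
import numpy as np

def neighbor(arr, weight, func=np.sum):
    """Sketch of the 'neighbor' idea: each output element is func()
    applied to the weighted neighborhood selected by the nonzero
    entries of `weight`. Edges are handled by np.pad(mode='edge')."""
    wy, wx = weight.shape
    py, px = wy // 2, wx // 2
    padded = np.pad(arr, ((py, py), (px, px)), mode='edge')
    out = np.empty(arr.shape, dtype=float)
    mask = weight != 0
    for i in range(arr.shape[0]):
        for j in range(arr.shape[1]):
            window = padded[i:i + wy, j:j + wx]
            out[i, j] = func((window * weight)[mask])  # 1-d array in, scalar out
    return out

# func is not limited to np.sum; np.mean, np.std, np.median etc. work too
smoothed = neighbor(np.arange(9.0).reshape(3, 3), np.ones((3, 3)), func=np.mean)
```

With func=np.sum and a flipped kernel this matches a 2-d convolution in the interior (the boundaries differ because of the edge padding), which is the generalization being described.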
You can merge this Pull Request by running:
git pull https://github.com/timcera/numpy neighbor
Or you can view, comment on it, or merge it online at:
-- Commit Summary --
* ENH: Initial implementation of a 'neighbor' calculation where each
-- File Changes --
M numpy/lib/__init__.py (2)
A numpy/lib/neighbor.py (305)
A numpy/lib/tests/test_neighbor.py (278)
-- Patch Links --
Reply to this email directly or view it on GitHub:
It seems it's possible, using e.g.

In : dtype([('foo', str)])
Out: dtype([('foo', '|S0')])

to get yourself a zero-length string. However, dtype('|S0') results in
a TypeError: data type not understood.
I understand the stupidity of creating a 0-length string field but
it's conceivable that it's accidental.
For example, it could lead to a situation where you've created that
field, are missing all the data you had meant to put in it, serialize
with np.save, and upon np.load aren't able to get _any_ of your data
back because the dtype descriptor is considered bogus (can you guess
why I thought of this scenario?).
It seems that either dtype(str) should do something more sensible than
a zero-length string, or it should be possible to create it with
dtype('|S0'). Which should it be?
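For concreteness, a short sketch of the asymmetry being described (exact behaviour varies with the numpy version; only the itemsize is checked here):

```python
import numpy as np

# dtype(str) maps to an unsized, i.e. zero-length, flexible string field
dt = np.dtype([('foo', str)])
assert dt['foo'].itemsize == 0

# ...yet on the numpy of this post, spelling the same thing out directly
# failed with: np.dtype('|S0') -> TypeError: data type not understood
```

So the indirect spelling produces a dtype that the direct spelling cannot reconstruct, which is exactly what bites np.load in the scenario above.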
On Sat, May 12, 2012 at 9:17 PM, Ralf Gommers
> On Sat, May 12, 2012 at 6:22 PM, Sandro Tosi <matrixhasu(a)gmail.com> wrote:
>> On Sat, May 5, 2012 at 8:15 PM, Ralf Gommers
>> <ralf.gommers(a)googlemail.com> wrote:
>> > Hi,
>> > I'm pleased to announce the availability of the first release candidate
>> > of NumPy 1.6.2. This is a maintenance release. Due to the delay of
>> > 1.7.0, this release contains far more fixes than a regular NumPy bugfix
>> > release. It also includes a number of documentation and build improvements.
>> > Sources and binary installers can be found at
>> > https://sourceforge.net/projects/numpy/files/NumPy/1.6.2rc1/
>> > Please test this release and report any issues on the numpy-discussion
>> > mailing list.
>> > BLD: add support for the new X11 directory structure on Ubuntu & co.
>> We've just discovered that this fix is not enough. The new
>> directories are actually due to the "multi-arch" feature of Debian
>> systems, which allows installing libraries for other (foreign)
>> architectures than the machine's own (the classic example: i386
>> libraries on an amd64 host).
>> The fix that was included looks up the additional directories only
>> for X11, while for example Debian's fftw3 is multi-arch-ified too
>> and thus will fail to be detected.
>> Could this fix be extended to cover all the other things that are
>> checked? For reference, the bug in Debian is
>> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=640940 ; there was
>> also a patch in previous versions that used gcc to get the
>> multi-arch paths, which you might use as a reference, or to
>> implement something Debian-specific.
>> It would be awesome if such support ended up in 1.6.2.
> Hardcoding some more paths to check in distutils/system_info.py should be
> OK, also for 1.6.2 (will require a new RC).
> The --print-multiarch thing looks very questionable. As far as I can tell,
> it's a Debian-specific gcc patch, only available in gcc 4.6 and up. Ubuntu
> releases before 11.10 also don't have it. Therefore I don't think use of
> --print-multiarch is appropriate for numpy for now, and it's certainly not
> a change I'd like to make to distutils right before a release.
> If anyone with access to a Debian/Ubuntu system could come up with a patch
> which adds the right paths to system_info.py, that would be great.
Hi, if anyone wants to have a look at the above issue this week,
that would be great.
If there's a patch by this weekend I can create a second RC, so we can
still have the final release before the end of this month (needed for
Debian freeze). Otherwise a second RC won't be needed.
Just a heads up that right now views of recarrays seem to be problematic,
this doesn't work anymore:
>>> import statsmodels.api as sm
>>> dta = sm.datasets.macrodata.load() # returns a record array with 14 fields
>>> dta.data[['infl', 'realgdp']].view((float,2))
I opened http://projects.scipy.org/numpy/ticket/2187 for this. Probably a
blocker for 1.7.0.
Question: is that really the recommended way to get an (N, 2) float
array from two columns of a larger record array? If so, why isn't there a
better way? If you want to write to that (N, 2) array you have to append
a .copy(), making it even uglier. Also, there really should then be tests
for views in test_records.py.
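For reference, the underlying idiom the snippet relies on, shown on a small self-contained example (two homogeneous float fields, where the view is legitimate):

```python
import numpy as np

# Two float64 fields laid out contiguously, so a (N, 2) float view is legal
rec = np.zeros(4, dtype=[('infl', float), ('realgdp', float)])
rec['infl'] = [1, 2, 3, 4]

flat = rec.view((float, 2))      # reinterpret in place: shape (4, 2), no copy
assert flat.shape == (4, 2)

flat[0, 1] = 99.0                # writes through to the record array
assert rec['realgdp'][0] == 99.0
```

The failing case in the ticket goes through the extra step of first selecting a field subset of a 14-field array (dta.data[['infl', 'realgdp']]) before taking the same view, which is the code path that broke.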