Hello,
I'm happy to announce the release of NumPy 1.7.2.
This is a bugfix only release supporting Python 2.4 - 2.7 and 3.1 - 3.3.
More than 42 issues were fixed; the most important ones are listed in
the release notes:
https://github.com/numpy/numpy/blob/v1.7.2/doc/release/1.7.2-notes.rst
Compared to the last release candidate, four additional minor issues have
been fixed and compatibility with Python 3.4b1 has been improved.
Source tarballs, installers and release notes can be found at
https://sourceforge.net/projects/numpy/files/NumPy/1.7.2
Cheers,
Julian Taylor

Hello folks!
I am a newbie, and I would like to know how to add a regression test.
Intuitively, I simply executed the corresponding test.py and got errors,
even before making any changes to the source code. How do I proceed?
Thanking you in anticipation
Janani
(jennystone)

Hi,
Since I updated NumPy from 1.7 to 1.8 with EPD, I get segmentation faults whenever I load back pickled float64 arrays. Here's a minimal example:
"""
import numpy
import cPickle
a = numpy.arange(5, dtype='float64')
with open('test.p', 'wb') as fh:
cPickle.dump(a, fh)
with open('test.p') as fh:
a2 = cPickle.load(fh)
print a2
"""
However the above works fine with int32 arrays, i.e. with a = numpy.arange(5).
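As a point of reference, the pickle documentation requires binary mode on both sides of the round-trip; a minimal sketch of the same repro with 'rb' on the read as well (plain pickle shown here for brevity; cPickle behaves the same on Python 2) would be:

```python
import pickle
import numpy as np

a = np.arange(5, dtype='float64')
with open('test.p', 'wb') as fh:   # binary mode for writing
    pickle.dump(a, fh)
with open('test.p', 'rb') as fh:   # binary mode for reading, too
    a2 = pickle.load(fh)
print(a2)
```

If the segfault persists with 'rb', the file mode can at least be ruled out as the cause.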
Does anyone else experience this problem?
Thanks,
--
Hugo Gagnon

Hello,
I need some help searching arrays (I'm new to NumPy).
Is it possible to search an array using another array, taking
order/sequence into account?
x = np.array([1,2,3,4,5,6], np.int32)
y = np.array([1,4,3,2,6,5], np.int32)
query= np.array([1,2,3],np.int32)
Expected results:
x versus query -> True
y versus query -> False
Tried with:
np.searchsorted(x,query) -------> array([0, 1, 2])
np.searchsorted(y,query) -------> array([0, 1, 4])
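If the goal is to test whether query occurs as a contiguous run inside the other array (note np.searchsorted assumes a sorted input, which y is not), one possible sketch uses sliding windows via np.lib.stride_tricks.sliding_window_view (available in NumPy >= 1.20):

```python
import numpy as np

def contains_in_order(arr, query):
    """Return True if `query` appears as a contiguous run in `arr`."""
    n, m = len(arr), len(query)
    if m > n:
        return False
    # Build all length-m windows of `arr` and compare each against `query`.
    windows = np.lib.stride_tricks.sliding_window_view(arr, m)
    return bool(np.any(np.all(windows == query, axis=1)))

x = np.array([1, 2, 3, 4, 5, 6], np.int32)
y = np.array([1, 4, 3, 2, 6, 5], np.int32)
query = np.array([1, 2, 3], np.int32)
print(contains_in_order(x, query))  # True
print(contains_in_order(y, query))  # False
```

The function name and the "contiguous run" interpretation of the question are assumptions; if only the relative order (a subsequence, not a run) matters, a different approach is needed.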
Thanks
--
View this message in context: http://numpy-discussion.10968.n7.nabble.com/Array-search-considering-order-…
Sent from the Numpy-discussion mailing list archive at Nabble.com.

Please have a look at version1 and version2 below. What are my other options
here? Do I need to go the Cython route? Thanks, Siegfried
==
My array is as follows (shown here with dummy values; and yes, arrays of
this kind do exist: 150 observations x 8 years x 366 days x 24 hours x
7 model levels):
data = np.random.random((150,8,366,24,7))
My function "np.apply_along_axis(my_moment,4,data)" takes 15 minutes.
I thought making use of masked arrays "my_moment_fast(data,axis=4)"
will speed up things but
1. It blows up the memory consumption to 6 GB at times
and
2. It also takes... I do not know as I killed it after 20 minutes (it
hangs at the median print statement).
The calculation of the median is the bottleneck here.
==
import numpy as np

def my_moment(data_in, nan_val=-999.0):
    tmp = data_in[data_in != nan_val]
    erg = np.array([np.min(tmp), np.mean(tmp), np.median(tmp),
                    np.max(tmp), np.std(tmp), np.size(tmp)])
    return erg

def my_moment_fast(data_in, nan_val=-999.0, axis=4):
    print 'min max', np.min(data_in), np.max(data_in)
    mx = np.ma.masked_where((data_in <= 0.0) & (data_in <= nan_val), data_in)
    print 'shape mx', np.shape(mx), np.min(mx), np.max(mx)
    print 'min'
    tmp_min = np.ma.min(mx, axis=axis)
    print 'max'
    tmp_max = np.ma.max(mx, axis=axis)
    print 'mean'
    tmp_mean = np.ma.mean(mx, axis=axis)
    print 'median'
    #tmp_median = np.ma.sort(mx, axis=axis)
    tmp_median = np.ma.median(mx, axis=axis)
    print 'std'
    tmp_std = np.ma.std(mx, axis=axis)
    print 'N'
    tmp_N = np.ones(np.shape(mx))
    tmp_N[mx.mask] = 0.0
    tmp_N = np.ma.sum(tmp_N, axis=axis)
    print 'shape min', np.shape(tmp_min), np.min(tmp_min), np.max(tmp_min)
    print 'shape max', np.shape(tmp_max), np.min(tmp_max), np.max(tmp_max)
    print 'shape mean', np.shape(tmp_mean), np.min(tmp_mean), np.max(tmp_mean)
    print 'shape median', np.shape(tmp_median), np.min(tmp_median), np.max(tmp_median)
    print 'shape std', np.shape(tmp_std), np.min(tmp_std), np.max(tmp_std)
    print 'shape N', np.shape(tmp_N), np.min(tmp_N), np.max(tmp_N), np.shape(mx.mask)
    return np.array([tmp_min, tmp_mean, tmp_median, tmp_max, tmp_std, tmp_N])

data = np.random.random((150, 8, 366, 24, 7))
data[134, 5, 300, :, 2] = -999.0
data[14, 3, 300, :, 0] = -999.0

version1 = my_moment_fast(data, axis=4)
exit()
version2 = np.apply_along_axis(my_moment, 4, data)
==
What am I doing wrong here? I haven't tested it against Fortran and
have no idea whether sorting to fetch the median would be faster.
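One possible alternative (a sketch, not benchmarked on the full array above): replace the -999.0 sentinel with NaN and use NumPy's nan-aware reductions (np.nanmedian requires NumPy >= 1.9). These compute each moment along the axis in one vectorized call, avoiding both apply_along_axis's per-slice Python overhead and the masked-array machinery:

```python
import numpy as np

def my_moment_nan(data_in, nan_val=-999.0, axis=4):
    # Replace the sentinel with NaN so the nan-aware reductions skip it.
    d = np.where(data_in == nan_val, np.nan, data_in)
    return np.array([
        np.nanmin(d, axis=axis),
        np.nanmean(d, axis=axis),
        np.nanmedian(d, axis=axis),       # vectorized median, no Python loop
        np.nanmax(d, axis=axis),
        np.nanstd(d, axis=axis),
        np.sum(~np.isnan(d), axis=axis),  # count of valid samples
    ])
```

The function name is hypothetical; np.where also makes a float copy of the input, so peak memory is roughly twice the input array, which may or may not beat the masked-array version in practice.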
Thanks,
Siegfried
==
--
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.

Hey,
while fixing a corner-case indexing regression in 1.8, I accidentally
noticed/fixed this behavior of returning a scalar when indexing a 0-d
array with fields (see also [1]):
arr = np.array((1,), dtype=[('a', 'f8')])
arr['a'] # Returns an array
arr[['a']] # Currently returns a *scalar*
I think no field access should try to convert 0-d arrays to scalars, and
it was probably just an oversight. However, if anyone thinks there is
even the slightest chance that this creates bugs in production code, we
certainly should not change it in a bug-fix only release.
Or am I missing something and the old behavior was actually intended?
- Sebastian
[1] https://github.com/numpy/numpy/issues/4109

I'm learning cuda and decided to use python with ctypes to call all the cuda
functions but I'm finding some memory issues. I've boiled it down to the
simplest scenario. I use ctypes to call a cuda function which allocates
memory on the device and then frees it. This works fine, but if I then try
to use np.dot on a totally other array declared in python, I get a
segmentation fault. Note this only happens if the NumPy array is
sufficiently large. If I change the CUDA mallocs to plain C mallocs, all
the problems go away, but that's not really helpful. Any ideas what's going
on here?
CUDA CODE (debug.cu):
#include <stdio.h>
#include <stdlib.h>

extern "C" {

void all_together(size_t N)
{
    void *d;
    int size = N * sizeof(float);
    int err;

    err = cudaMalloc(&d, size);
    if (err != 0) printf("cuda malloc error: %d\n", err);
    err = cudaFree(d);
    if (err != 0) printf("cuda free error: %d\n", err);
}

}
PYTHON CODE (master.py):
import numpy as np
import ctypes
from ctypes import *

dll = ctypes.CDLL('./cuda_lib.so', mode=ctypes.RTLD_GLOBAL)

def build_all_together_f(dll):
    func = dll.all_together
    func.argtypes = [c_size_t]
    return func

__pycu_all_together = build_all_together_f(dll)

if __name__ == '__main__':
    N = 5001  # if this is less, the error doesn't show up
    a = np.random.randn(N).astype('float32')
    da = __pycu_all_together(N)
    # toggle this line on/off to get error
    #np.dot(a, a)
    print 'end of python'
COMPILE: nvcc -Xcompiler -fPIC -shared -o cuda_lib.so debug.cu
RUN: python master.py
--