From david.froger at gmail.com Wed Aug 1 01:30:44 2012 From: david.froger at gmail.com (David Froger) Date: Wed, 1 Aug 2012 07:30:44 +0200 Subject: [Numpy-discussion] SWIG Numpy and C++ extensions In-Reply-To: References: Message-ID: <20120801073044.GC18071@david-desktop.localdomain> > >> 1- How do I use "apply" for class functions %apply (bla) myobject::foo ? > >%apply is specified on function/method argument names and types > > only, > >never on function names. So if for example you use: > >%apply (int* ARGOUT_ARRAY1, int DIM1) {(int* rangevec, int n)} > >it will apply to every function that has arguments "int* ARGOUT_ARRAY1, > >int DIM1" On Tue, 31 Jul 2012 15:28:02 -0700, "Doutriaux, Charles" wrote: > Thanks David, > > The clarification on apply is actually an important one! And as a consequence, if two functions have the same (type argname), it's hard to apply two different typemaps to them, which can be needed... This recent discussion about it is interesting: http://old.nabble.com/Argument-annotations--td34154195.html From david.froger at gmail.com Wed Aug 1 16:45:49 2012 From: david.froger at gmail.com (David Froger) Date: Wed, 1 Aug 2012 22:45:49 +0200 Subject: [Numpy-discussion] [EXTERNAL] Re: SWIG Numpy and C++ extensions In-Reply-To: References: <20120731220406.GA18071@david-desktop.localdomain> Message-ID: <20120801224549.GA25456@david-desktop.localdomain> On Tue, 31 Jul 2012 14:48:24 -0600, "Bill Spotz" wrote: > Use %inline %{ ... %} around your function. SWIG will add your function directly to the wrapper file as well as add a wrapper function for calling it from Python. > > On Jul 31, 2012, at 2:04 PM, David Froger wrote: > > >> 2- that's OK if your C++ deals with arrays, but what if I actually want to receive the Numpy object so that I can manipulate it directly (or if for example the array isn't contiguous in memory)? > >> > >> A "dummy" example of a foo function I'd like to wrap: > >> > >> void FOO::fooNumpy(PyArrayObject *nparray) { > >> > >> int j; > >> for(j=0;j<nparray->nd;j++) { > >> printf("Ok array dim %i has length: %i\n",j,nparray->dimensions[j]); > >> } > >> } > > ** Bill Spotz ** > ** Sandia National Laboratories Voice: (505)845-0170 ** > ** P.O. Box 5800 Fax: (505)284-0154 ** > ** Albuquerque, NM 87185-0370 Email: wfspotz at sandia.gov ** I think that the problem will be that Swig does not know the declaration of PyArrayObject, so a SWIGTYPE_p_PyArrayObject type will be generated for the Swig run-time type checker, and numpy arrays will not be accepted as input. On the other hand, SWIGTYPE_p_PyArrayObject is useful because if a C function foo returns a PyArrayObject * and a function bar takes a PyArrayObject * as an argument, one could do with the wrapper generated by Swig: array = foo() # array is unusable in Python bar(array) # but it is accepted by bar A solution could be to collect information about PyArrayObject with '%import "ndarraytypes.h"', but it may be hard to make Swig parse it without errors. Another solution (the test works) could be to define the "in" typemap: %module demo %{ #define SWIG_FILE_WITH_INIT %} %include "numpy.i" %init %{ import_array(); %} %typemap(in,fragment="NumPy_Macros") (PyArrayObject *nparray) { if (!
is_array($input)) { PyErr_Format(PyExc_TypeError, "An array is required."); SWIG_fail; } $1 = (PyArrayObject*) $input; } %inline %{ void foo(PyArrayObject *nparray) { int j; for(j = 0; j < nparray->nd ; j++) { printf("Ok array dim %i has length: %i\n",j,nparray->dimensions[j]); } } %} From jrocher at enthought.com Wed Aug 1 22:46:01 2012 From: jrocher at enthought.com (Jonathan Rocher) Date: Wed, 1 Aug 2012 21:46:01 -0500 Subject: [Numpy-discussion] [CONF] Call for papers AMS Jan 6-10 2013 Message-ID: Dear all, Sorry for the cross post. Abstract submission is open for the Third Symposium on Advances in Modeling and Analysis Using Python at the AMS Annual Meeting in Austin, TX, January 6-10, 2013. The call for papers and the link to submit an abstract are located here: http://annual.ametsoc.org/2013/index.cfm/programs-and-events/conferences-and-symposia/third-symposium-on-advances-in-modeling-and-analysis-using-python/ The abstract submission deadline is extended to August 28, 2012. We are soliciting papers related to all areas of the use of Python in the atmospheric and oceanic sciences. Please pass along this announcement to your friends and colleagues! Thanks! Regards, Jonathan for the organizing committee -- Jonathan Rocher, PhD Scientific software developer Enthought, Inc. jrocher at enthought.com 1-512-536-1057 http://www.enthought.com From nicole.stoffels at forwind.de Thu Aug 2 08:43:31 2012 From: nicole.stoffels at forwind.de (Nicole Stoffels) Date: Thu, 02 Aug 2012 14:43:31 +0200 Subject: [Numpy-discussion] Reordering 2 dimensional array by column Message-ID: <501A75F3.9020701@forwind.de> Dear all, I have a two-dimensional array: a = array([[1,2,3],[0,2,1],[5,7,8]]) I want to reorder it by the last column in descending order, so that I get: b = array([[5, 7, 8],[1, 2, 3],[0, 2, 1]]) What I did first is the following, which reorders the array in ascending order (I found that method on the internet): b = array(sorted(a, key=lambda new_entry: new_entry[2])) b = array([[0, 2, 1],[1, 2, 3],[5, 7, 8]]) But I want it just the other way around. So I did the following afterwards, which results in an array only containing zeros: b_indices = b.argsort() b_matrix = b[b_indices[::-1]] new_b = b_matrix[len(b_matrix)-1] Is there an easy way to reorder it? Or is there at least a complicated way which produces the right output? I hope you can help me! Thanks! Best regards, Nicole From e.antero.tammi at gmail.com Thu Aug 2 08:59:12 2012 From: e.antero.tammi at gmail.com (eat) Date: Thu, 2 Aug 2012 15:59:12 +0300 Subject: [Numpy-discussion] Reordering 2 dimensional array by column In-Reply-To: <501A75F3.9020701@forwind.de> References: <501A75F3.9020701@forwind.de> Message-ID: Hi, On Thu, Aug 2, 2012 at 3:43 PM, Nicole Stoffels wrote: > Dear all, > > I have a two-dimensional array: > > a = array([[1,2,3],[0,2,1],[5,7,8]]) > > I want to reorder it by the last column in descending order, so that I get: > > b = array([[5, 7, 8],[1, 2, 3],[0, 2, 1]]) > Perhaps along the lines: In []: a Out[]: array([[1, 2, 3], [0, 2, 1], [5, 7, 8]]) In []: ndx= a[:, 2].argsort() In []: a[ndx[::-1], :] Out[]: array([[5, 7, 8], [1, 2, 3], [0, 2, 1]]) > > What I did first is the following, which reorders the array in ascending > order (I found that method on the internet): > > b = array(sorted(a, key=lambda new_entry: new_entry[2])) > b = array([[0, 2, 1],[1, 2, 3],[5, 7, 8]]) > > But I want it just the other way around.
So I did the following > afterwards which results in an array only containing zeros: > b_indices = b.argsort() > b_matrix = b[b_indices[::-1]] > new_b = b_matrix[len(b_matrix)-1] > > Is there an easy way to reorder it? Or is there at least a complicated > way which produces the right output? > > I hope you can help me! > My 2 cents, -eat > > Best regards, > > Nicole > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From nicole.stoffels at forwind.de Thu Aug 2 09:08:27 2012 From: nicole.stoffels at forwind.de (Nicole Stoffels) Date: Thu, 02 Aug 2012 15:08:27 +0200 Subject: [Numpy-discussion] Reordering 2 dimensional array by column In-Reply-To: References: <501A75F3.9020701@forwind.de> Message-ID: <501A7BCB.3000905@forwind.de> Thanks eat! It helps and it's so easy! :) On 02.08.2012 14:59, eat wrote: > Hi, > > On Thu, Aug 2, 2012 at 3:43 PM, Nicole Stoffels > > wrote: > > Dear all, > > I have a two-dimensional array: > > a = array([[1,2,3],[0,2,1],[5,7,8]]) > > I want to reorder it by the last column in descending order, so > that I get: > > b = array([[5, 7, 8],[1, 2, 3],[0, 2, 1]]) > > Perhaps along the lines: > In []: a > Out[]: > array([[1, 2, 3], > [0, 2, 1], > [5, 7, 8]]) > In []: ndx= a[:, 2].argsort() > In []: a[ndx[::-1], :] > Out[]: > array([[5, 7, 8], > [1, 2, 3], > [0, 2, 1]]) > > > What I did first is the following, which reorders the array in > ascending > order (I found that method on the internet): > > b = array(sorted(a, key=lambda new_entry: new_entry[2])) > b = array([[0, 2, 1],[1, 2, 3],[5, 7, 8]]) > > But I want it just the other way around. So I did the following > afterwards which results in an array only containing zeros: > b_indices = b.argsort() > b_matrix = b[b_indices[::-1]] > new_b = b_matrix[len(b_matrix)-1] > > Is there an easy way to reorder it? Or is there at least a complicated > way which produces the right output? > > I hope you can help me! Thanks! > > My 2 cents, > -eat > > > Best regards, > > Nicole > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From irving at naml.us Thu Aug 2 14:46:43 2012 From: irving at naml.us (Geoffrey Irving) Date: Thu, 2 Aug 2012 11:46:43 -0700 Subject: [Numpy-discussion] load of custom .npy file fails with numpy 2.0.0 Message-ID: Hello, The attached .npy file was written from custom C++ code. It loads fine in Numpy 1.6.2 with Python 2.6 installed through MacPorts, but fails on a different machine with Numpy 2.0.0 installed via Superpack: box:array% which python /usr/bin/python box:array% which python box:array% python Python 2.6.1 (r261:67515, Aug 2 2010, 20:10:18) [GCC 4.2.1 (Apple Inc. build 5646)] on darwin Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy >>> numpy.load('blah.npy') Traceback (most recent call last): File "", line 1, in File "/Library/Python/2.6/site-packages/numpy-2.0.0.dev_b5cdaee_20110710-py2.6-macosx-10.6-universal.egg/numpy/lib/npyio.py", line 351, in load return format.read_array(fid) File "/Library/Python/2.6/site-packages/numpy-2.0.0.dev_b5cdaee_20110710-py2.6-macosx-10.6-universal.egg/numpy/lib/format.py", line 440, in read_array shape, fortran_order, dtype = read_array_header_1_0(fp) File "/Library/Python/2.6/site-packages/numpy-2.0.0.dev_b5cdaee_20110710-py2.6-macosx-10.6-universal.egg/numpy/lib/format.py", line 361, in read_array_header_1_0 raise ValueError(msg % (d['descr'],)) ValueError: descr is not a valid dtype descriptor: 'd8' >>> numpy.__version__ '2.0.0.dev-b5cdaee' >>> numpy.__file__ '/Library/Python/2.6/site-packages/numpy-2.0.0.dev_b5cdaee_20110710-py2.6-macosx-10.6-universal.egg/numpy/__init__.pyc' It seems Numpy 2.0.0 no longer accepts dtype('d8'): >>> dtype('d8') Traceback (most recent call last): File "", line 1, in TypeError: data type "d8" not understood Was that intentional? An API change isn't too much of a problem, but it's unfortunate if old data files are no longer easily readable. Thanks, Geoffrey -------------- next part -------------- A non-text attachment was scrubbed... Name: blah.npy Type: application/octet-stream Size: 640 bytes Desc: not available URL: From robert.kern at gmail.com Thu Aug 2 16:26:30 2012 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 2 Aug 2012 22:26:30 +0200 Subject: [Numpy-discussion] load of custom .npy file fails with numpy 2.0.0 In-Reply-To: References: Message-ID: On Thu, Aug 2, 2012 at 8:46 PM, Geoffrey Irving wrote: > Hello, > > The attached .npy file was written from custom C++ code. It loads > fine in Numpy 1.6.2 with Python 2.6 installed through MacPorts, but > fails on a different machine with Numpy 2.0.0 installed via Superpack: > > box:array% which python > /usr/bin/python > box:array% which python > box:array% python > Python 2.6.1 (r261:67515, Aug 2 2010, 20:10:18) > [GCC 4.2.1 (Apple Inc. build 5646)] on darwin > Type "help", "copyright", "credits" or "license" for more information. >>>> import numpy >>>> numpy.load('blah.npy') > Traceback (most recent call last): > File "", line 1, in > File "/Library/Python/2.6/site-packages/numpy-2.0.0.dev_b5cdaee_20110710-py2.6-macosx-10.6-universal.egg/numpy/lib/npyio.py", > line 351, in load > return format.read_array(fid) > File "/Library/Python/2.6/site-packages/numpy-2.0.0.dev_b5cdaee_20110710-py2.6-macosx-10.6-universal.egg/numpy/lib/format.py", > line 440, in read_array > shape, fortran_order, dtype = read_array_header_1_0(fp) > File "/Library/Python/2.6/site-packages/numpy-2.0.0.dev_b5cdaee_20110710-py2.6-macosx-10.6-universal.egg/numpy/lib/format.py", > line 361, in read_array_header_1_0 > raise ValueError(msg % (d['descr'],)) > ValueError: descr is not a valid dtype descriptor: 'd8' >>>> numpy.__version__ > '2.0.0.dev-b5cdaee' >>>> numpy.__file__ > '/Library/Python/2.6/site-packages/numpy-2.0.0.dev_b5cdaee_20110710-py2.6-macosx-10.6-universal.egg/numpy/__init__.pyc' > > It seems Numpy 2.0.0 no longer accepts dtype('d8'): > >>>> dtype('d8') > Traceback (most recent call last): > File "", line 1, in > TypeError: data type "d8" not understood > > Was that intentional? An API change isn't too much of a problem, but > it's unfortunate if old data files are no longer easily readable. As far as I can tell, numpy has never described an array using 'd8'. 
That would be a really old compatibility typecode from Numeric, if I remember correctly. The intention of the NPY format standard was that it would accept what numpy spits out for the descr, not that it would accept absolutely anything that numpy.dtype() can consume, even deprecated aliases (though I will admit that that is almost what the NEP says). In particular, endianness really should be included or else your files will be misread on big-endian machines. My suspicion is that only your code has ever made .npy files with this descr. I feel your pain, Geoff, and I apologize that my lax specification led you down this path, but I think you need to fix your code anyways. -- Robert Kern From damon.mcdougall at gmail.com Thu Aug 2 16:44:53 2012 From: damon.mcdougall at gmail.com (Damon McDougall) Date: Thu, 2 Aug 2012 21:44:53 +0100 Subject: [Numpy-discussion] Licensing question Message-ID: <20120802204453.GI363@quagmire.local> Hi, I have a question about the licence for NumPy's codebase. I am currently writing a library and I'd like to release under some BSD-type licence. Unfortunately, my choice to link against MIT's FFTW library (released under the GPL) means that, in its current state, this is not possible. I'm an avid NumPy user and thought to myself that, since NumPy's licence is BSD, I'd be able to use some of the source code (with due credit, of course) instead of FFTW. Is this possible? I mean, can I redistribute *PART* of NumPy's codebase? Namely, the fftpack.c file? I was under the impression that I could only redistribute BSD source code as a whole and then I read the licence more carefully and it states that I can modify the source to suit my needs. I consider 'redistributing a single file and ignoring the other files' as a 'modification' under the BSD definition, but maybe I'm thinking too wishfully here. Any information on this matter would be greatly appreciated since I am a total code licence noob. Thank you. P.S. Yes, I know I could just release under the GPL, but I don't want to turn people off of packaging my work into a useful product licensed under BSD, or even make money from it. -- Damon McDougall http://damon-is-a-geek.com B2.39 Mathematics Institute University of Warwick Coventry West Midlands CV4 7AL United Kingdom From irving at naml.us Thu Aug 2 17:41:59 2012 From: irving at naml.us (Geoffrey Irving) Date: Thu, 2 Aug 2012 14:41:59 -0700 Subject: [Numpy-discussion] load of custom .npy file fails with numpy 2.0.0 In-Reply-To: References: Message-ID: On Thu, Aug 2, 2012 at 1:26 PM, Robert Kern wrote: > On Thu, Aug 2, 2012 at 8:46 PM, Geoffrey Irving wrote: >> Hello, >> >> The attached .npy file was written from custom C++ code. It loads >> fine in Numpy 1.6.2 with Python 2.6 installed through MacPorts, but >> fails on a different machine with Numpy 2.0.0 installed via Superpack: >> >> box:array% which python >> /usr/bin/python >> box:array% which python >> box:array% python >> Python 2.6.1 (r261:67515, Aug 2 2010, 20:10:18) >> [GCC 4.2.1 (Apple Inc. build 5646)] on darwin >> Type "help", "copyright", "credits" or "license" for more information. 
>>>>> import numpy >>>>> numpy.load('blah.npy') >> Traceback (most recent call last): >> File "", line 1, in >> File "/Library/Python/2.6/site-packages/numpy-2.0.0.dev_b5cdaee_20110710-py2.6-macosx-10.6-universal.egg/numpy/lib/npyio.py", >> line 351, in load >> return format.read_array(fid) >> File "/Library/Python/2.6/site-packages/numpy-2.0.0.dev_b5cdaee_20110710-py2.6-macosx-10.6-universal.egg/numpy/lib/format.py", >> line 440, in read_array >> shape, fortran_order, dtype = read_array_header_1_0(fp) >> File "/Library/Python/2.6/site-packages/numpy-2.0.0.dev_b5cdaee_20110710-py2.6-macosx-10.6-universal.egg/numpy/lib/format.py", >> line 361, in read_array_header_1_0 >> raise ValueError(msg % (d['descr'],)) >> ValueError: descr is not a valid dtype descriptor: 'd8' >>>>> numpy.__version__ >> '2.0.0.dev-b5cdaee' >>>>> numpy.__file__ >> '/Library/Python/2.6/site-packages/numpy-2.0.0.dev_b5cdaee_20110710-py2.6-macosx-10.6-universal.egg/numpy/__init__.pyc' >> >> It seems Numpy 2.0.0 no longer accepts dtype('d8'): >> >>>>> dtype('d8') >> Traceback (most recent call last): >> File "", line 1, in >> TypeError: data type "d8" not understood >> >> Was that intentional? An API change isn't too much of a problem, but >> it's unfortunate if old data files are no longer easily readable. > > As far as I can tell, numpy has never described an array using 'd8'. > That would be a really old compatibility typecode from Numeric, if I > remember correctly. The intention of the NPY format standard was that > it would accept what numpy spits out for the descr, not that it would > accept absolutely anything that numpy.dtype() can consume, even > deprecated aliases (though I will admit that that is almost what the > NEP says). In particular, endianness really should be included or else > your files will be misread on big-endian machines. > > My suspicion is that only your code has ever made .npy files with this > descr. I feel your pain, Geoff, and I apologize that my lax > specification led you down this path, but I think you need to fix your > code anyways. Sounds good. Both 1.6.2 and 2.0.0 write out ' References: Message-ID: On Thu, Aug 2, 2012 at 11:41 PM, Geoffrey Irving wrote: > On Thu, Aug 2, 2012 at 1:26 PM, Robert Kern wrote: >> On Thu, Aug 2, 2012 at 8:46 PM, Geoffrey Irving wrote: >>> Hello, >>> >>> The attached .npy file was written from custom C++ code. It loads >>> fine in Numpy 1.6.2 with Python 2.6 installed through MacPorts, but >>> fails on a different machine with Numpy 2.0.0 installed via Superpack: >>> >>> box:array% which python >>> /usr/bin/python >>> box:array% which python >>> box:array% python >>> Python 2.6.1 (r261:67515, Aug 2 2010, 20:10:18) >>> [GCC 4.2.1 (Apple Inc. build 5646)] on darwin >>> Type "help", "copyright", "credits" or "license" for more information. 
>>>>>> import numpy >>>>>> numpy.load('blah.npy') >>> Traceback (most recent call last): >>> File "", line 1, in >>> File "/Library/Python/2.6/site-packages/numpy-2.0.0.dev_b5cdaee_20110710-py2.6-macosx-10.6-universal.egg/numpy/lib/npyio.py", >>> line 351, in load >>> return format.read_array(fid) >>> File "/Library/Python/2.6/site-packages/numpy-2.0.0.dev_b5cdaee_20110710-py2.6-macosx-10.6-universal.egg/numpy/lib/format.py", >>> line 440, in read_array >>> shape, fortran_order, dtype = read_array_header_1_0(fp) >>> File "/Library/Python/2.6/site-packages/numpy-2.0.0.dev_b5cdaee_20110710-py2.6-macosx-10.6-universal.egg/numpy/lib/format.py", >>> line 361, in read_array_header_1_0 >>> raise ValueError(msg % (d['descr'],)) >>> ValueError: descr is not a valid dtype descriptor: 'd8' >>>>>> numpy.__version__ >>> '2.0.0.dev-b5cdaee' >>>>>> numpy.__file__ >>> '/Library/Python/2.6/site-packages/numpy-2.0.0.dev_b5cdaee_20110710-py2.6-macosx-10.6-universal.egg/numpy/__init__.pyc' >>> >>> It seems Numpy 2.0.0 no longer accepts dtype('d8'): >>> >>>>>> dtype('d8') >>> Traceback (most recent call last): >>> File "", line 1, in >>> TypeError: data type "d8" not understood >>> >>> Was that intentional? An API change isn't too much of a problem, but >>> it's unfortunate if old data files are no longer easily readable. >> >> As far as I can tell, numpy has never described an array using 'd8'. >> That would be a really old compatibility typecode from Numeric, if I >> remember correctly. The intention of the NPY format standard was that >> it would accept what numpy spits out for the descr, not that it would >> accept absolutely anything that numpy.dtype() can consume, even >> deprecated aliases (though I will admit that that is almost what the >> NEP says). In particular, endianness really should be included or else >> your files will be misread on big-endian machines. >> >> My suspicion is that only your code has ever made .npy files with this >> descr. I feel your pain, Geoff, and I apologize that my lax >> specification led you down this path, but I think you need to fix your >> code anyways. > > Sounds good. Both 1.6.2 and 2.0.0 write out ' I'll certainly add the '<' bit to signify endianness, but how should I > go about determining the letter? My current code looks like > > // Get dtype info > int bits;char letter; > switch(type_num){ > #define CASE(T) case > NPY_##T:bits=NPY_BITSOF_##T;letter=NPY_##T##LTR;break; > #define NPY_BITSOF_BYTE 8 > #define NPY_BITSOF_UBYTE 8 > #define NPY_BITSOF_USHORT NPY_BITSOF_SHORT > #define NPY_BITSOF_UINT NPY_BITSOF_INT > #define NPY_BITSOF_ULONG NPY_BITSOF_LONG > #define NPY_BITSOF_ULONGLONG NPY_BITSOF_LONGLONG > CASE(BOOL) > CASE(BYTE) > CASE(UBYTE) > CASE(SHORT) > CASE(USHORT) > CASE(INT) > CASE(UINT) > CASE(LONG) > CASE(ULONG) > CASE(LONGLONG) > CASE(ULONGLONG) > CASE(FLOAT) > CASE(DOUBLE) > CASE(LONGDOUBLE) > #undef CASE > default: throw ValueError("Unknown dtype");} > int bytes = bits/8; > ... > len += sprintf(base+len,"{'descr': '%c%d', 'fortran_order': False, > 'shape': (",letter,bytes); > > The code incorrectly assumes that the ...LTR constants are safe ways > to describe dtypes. Is there a clean, correct way to do this that > doesn't require special casing for each type? I can use numpy headers > but can't call any numpy functions, since Python might not be > initialized (e.g., if I'm writing out files through MPI IO collectives > on a Cray). 
These characters plus the byte-size and endian-ness: /* * These are for dtype 'kinds', not dtype 'typecodes' * as the above are for. */ NPY_GENBOOLLTR ='b', NPY_SIGNEDLTR = 'i', NPY_UNSIGNEDLTR = 'u', NPY_FLOATINGLTR = 'f', NPY_COMPLEXLTR = 'c' Less amenable to macro magic, certainly, but workable. To double-check, see what numpy outputs for each of these cases. -- Robert Kern From irving at naml.us Thu Aug 2 18:16:35 2012 From: irving at naml.us (Geoffrey Irving) Date: Thu, 2 Aug 2012 15:16:35 -0700 Subject: [Numpy-discussion] load of custom .npy file fails with numpy 2.0.0 In-Reply-To: References: Message-ID: On Thu, Aug 2, 2012 at 3:13 PM, Robert Kern wrote: > On Thu, Aug 2, 2012 at 11:41 PM, Geoffrey Irving wrote: >> On Thu, Aug 2, 2012 at 1:26 PM, Robert Kern wrote: >>> On Thu, Aug 2, 2012 at 8:46 PM, Geoffrey Irving wrote: >>>> Hello, >>>> >>>> The attached .npy file was written from custom C++ code. It loads >>>> fine in Numpy 1.6.2 with Python 2.6 installed through MacPorts, but >>>> fails on a different machine with Numpy 2.0.0 installed via Superpack: >>>> >>>> box:array% which python >>>> /usr/bin/python >>>> box:array% which python >>>> box:array% python >>>> Python 2.6.1 (r261:67515, Aug 2 2010, 20:10:18) >>>> [GCC 4.2.1 (Apple Inc. build 5646)] on darwin >>>> Type "help", "copyright", "credits" or "license" for more information. >>>>>>> import numpy >>>>>>> numpy.load('blah.npy') >>>> Traceback (most recent call last): >>>> File "", line 1, in >>>> File "/Library/Python/2.6/site-packages/numpy-2.0.0.dev_b5cdaee_20110710-py2.6-macosx-10.6-universal.egg/numpy/lib/npyio.py", >>>> line 351, in load >>>> return format.read_array(fid) >>>> File "/Library/Python/2.6/site-packages/numpy-2.0.0.dev_b5cdaee_20110710-py2.6-macosx-10.6-universal.egg/numpy/lib/format.py", >>>> line 440, in read_array >>>> shape, fortran_order, dtype = read_array_header_1_0(fp) >>>> File "/Library/Python/2.6/site-packages/numpy-2.0.0.dev_b5cdaee_20110710-py2.6-macosx-10.6-universal.egg/numpy/lib/format.py", >>>> line 361, in read_array_header_1_0 >>>> raise ValueError(msg % (d['descr'],)) >>>> ValueError: descr is not a valid dtype descriptor: 'd8' >>>>>>> numpy.__version__ >>>> '2.0.0.dev-b5cdaee' >>>>>>> numpy.__file__ >>>> '/Library/Python/2.6/site-packages/numpy-2.0.0.dev_b5cdaee_20110710-py2.6-macosx-10.6-universal.egg/numpy/__init__.pyc' >>>> >>>> It seems Numpy 2.0.0 no longer accepts dtype('d8'): >>>> >>>>>>> dtype('d8') >>>> Traceback (most recent call last): >>>> File "", line 1, in >>>> TypeError: data type "d8" not understood >>>> >>>> Was that intentional? An API change isn't too much of a problem, but >>>> it's unfortunate if old data files are no longer easily readable. >>> >>> As far as I can tell, numpy has never described an array using 'd8'. >>> That would be a really old compatibility typecode from Numeric, if I >>> remember correctly. The intention of the NPY format standard was that >>> it would accept what numpy spits out for the descr, not that it would >>> accept absolutely anything that numpy.dtype() can consume, even >>> deprecated aliases (though I will admit that that is almost what the >>> NEP says). In particular, endianness really should be included or else >>> your files will be misread on big-endian machines. >>> >>> My suspicion is that only your code has ever made .npy files with this >>> descr. I feel your pain, Geoff, and I apologize that my lax >>> specification led you down this path, but I think you need to fix your >>> code anyways. >> >> Sounds good. 
Both 1.6.2 and 2.0.0 write out '> I'll certainly add the '<' bit to signify endianness, but how should I >> go about determining the letter? My current code looks like >> >> // Get dtype info >> int bits;char letter; >> switch(type_num){ >> #define CASE(T) case >> NPY_##T:bits=NPY_BITSOF_##T;letter=NPY_##T##LTR;break; >> #define NPY_BITSOF_BYTE 8 >> #define NPY_BITSOF_UBYTE 8 >> #define NPY_BITSOF_USHORT NPY_BITSOF_SHORT >> #define NPY_BITSOF_UINT NPY_BITSOF_INT >> #define NPY_BITSOF_ULONG NPY_BITSOF_LONG >> #define NPY_BITSOF_ULONGLONG NPY_BITSOF_LONGLONG >> CASE(BOOL) >> CASE(BYTE) >> CASE(UBYTE) >> CASE(SHORT) >> CASE(USHORT) >> CASE(INT) >> CASE(UINT) >> CASE(LONG) >> CASE(ULONG) >> CASE(LONGLONG) >> CASE(ULONGLONG) >> CASE(FLOAT) >> CASE(DOUBLE) >> CASE(LONGDOUBLE) >> #undef CASE >> default: throw ValueError("Unknown dtype");} >> int bytes = bits/8; >> ... >> len += sprintf(base+len,"{'descr': '%c%d', 'fortran_order': False, >> 'shape': (",letter,bytes); >> >> The code incorrectly assumes that the ...LTR constants are safe ways >> to describe dtypes. Is there a clean, correct way to do this that >> doesn't require special casing for each type? I can use numpy headers >> but can't call any numpy functions, since Python might not be >> initialized (e.g., if I'm writing out files through MPI IO collectives >> on a Cray). > > These characters plus the byte-size and endian-ness: > > /* > * These are for dtype 'kinds', not dtype 'typecodes' > * as the above are for. > */ > NPY_GENBOOLLTR ='b', > NPY_SIGNEDLTR = 'i', > NPY_UNSIGNEDLTR = 'u', > NPY_FLOATINGLTR = 'f', > NPY_COMPLEXLTR = 'c' > > Less amenable to macro magic, certainly, but workable. To > double-check, see what numpy outputs for each of these cases. Easy enough to add the two argument macro. Thanks, I should be all set. Geoffrey From fperez.net at gmail.com Thu Aug 2 19:10:40 2012 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 2 Aug 2012 16:10:40 -0700 Subject: [Numpy-discussion] [ANN] SIAM Conference on Computational Science & Engineering Submission Deadlines Approaching! In-Reply-To: References: Message-ID: Dear Colleagues, the SIAM CSE13 conference will be held next year in Boston, and this is a conference that is well suited for much of the type of work that goes on in the open source scientific Python development community (and Julia). The conference is co-chaired by Hans-Petter Langtangen, well known around these parts for his several books on scientific computing with Python and for having led a campus-wide adoption of Python as the core computational foundation across the University of Oslo. I am also on the program committee, as well as Randy LeVeque and other Python-friendly folks. An excellent way to participate is to organize a one- or two-part minisymposium on a specific topic with a group of related speakers (instructions at http://www.siam.org/meetings/cse13/submissions.php). Please note that the MS deadline is fast approaching: August 13, 2012. If you have any further questions, don't hesitate to contact me or one of the other organizers if you feel they can address your concerns more directly: "Fernando Perez" "Randy LeVeque" (Reproducible research track) "Hans Petter Langtangen" (Conference co-chair) "Karen Willcox" (conference chair) ---------- Forwarded message ---------- From: Karen Willcox Date: Tue, Jul 24, 2012 at 6:08 AM Subject: [SIAM-CSE] SIAM Conference on Computational Science & Engineering Submission Deadlines Approaching! 
To: SIAM-CSE at siam.org SIAM Conference on Computational Science & Engineering (CSE13) February 25-March 1, 2013 The Westin Boston Waterfront, Boston, Massachusetts, USA SUBMISSION DEADLINES ARE APPROACHING! August 13, 2012: Minisymposium proposals September 10, 2012: Abstracts for contributed and minisymposium speakers Visit http://www.siam.org/meetings/cse13/submissions.php to submit. Twitter hashtag: #SIAMcse13 For more information about the conference, visit http://www.siam.org/meetings/cse13/ or contact SIAM Conference Department at meetings at siam.org. -- Karen Willcox Professor and Associate Department Head Department of Aeronautics and Astronautics, MIT http://acdl.mit.edu/willcox.html _______________________________________________ SIAM-CSE mailing list To post messages to the list please send them to: SIAM-CSE at siam.org http://lists.siam.org/mailman/listinfo/siam-cse From gael.varoquaux at normalesup.org Fri Aug 3 01:23:46 2012 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Fri, 3 Aug 2012 07:23:46 +0200 Subject: [Numpy-discussion] Licensing question In-Reply-To: <20120802204453.GI363@quagmire.local> References: <20120802204453.GI363@quagmire.local> Message-ID: <20120803052346.GA5957@phare.normalesup.org> On Thu, Aug 02, 2012 at 09:44:53PM +0100, Damon McDougall wrote: > I have a question about the licence for NumPy's codebase. I am currently > writing a library and I'd like to release under some BSD-type licence. > Unfortunately, my choice to link against MIT's FFTW library (released > under the GPL) means that, in its current state, this is not possible. > I'm an avid NumPy user and thought to myself that, since NumPy's licence > is BSD, I'd be able to use some of the source code (with due credit, of > course) instead of FFTW. Is this possible? I mean, can I redistribute > *PART* of NumPy's codebase? Namely, the fftpack.c file? As far as I know, yes. You must give credit to the original source, but that's it. Gael ps: IANAL :) From travis at continuum.io Fri Aug 3 01:52:14 2012 From: travis at continuum.io (Travis Oliphant) Date: Fri, 3 Aug 2012 00:52:14 -0500 Subject: [Numpy-discussion] Licensing question In-Reply-To: <20120802204453.GI363@quagmire.local> References: <20120802204453.GI363@quagmire.local> Message-ID: <49821359-DF1F-4CDF-841C-26F314D20E28@continuum.io> This should be completely fine. The fftpack.h file indicates that fftpack code came from Tela originally anyway and was translated from the Fortran code FFTPACK. Good luck with your project. -Travis On Aug 2, 2012, at 3:44 PM, Damon McDougall wrote: > Hi, > > I have a question about the licence for NumPy's codebase. I am currently > writing a library and I'd like to release under some BSD-type licence. > Unfortunately, my choice to link against MIT's FFTW library (released > under the GPL) means that, in its current state, this is not possible. > I'm an avid NumPy user and thought to myself that, since NumPy's licence > is BSD, I'd be able to use some of the source code (with due credit, of > course) instead of FFTW. Is this possible? I mean, can I redistribute > *PART* of NumPy's codebase? Namely, the fftpack.c file? I was under the > impression that I could only redistribute BSD source code as a whole and > then I read the licence more carefully and it states that I can modify > the source to suit my needs.
I consider 'redistributing a single file > and ignoring the other files' as a 'modification' under the BSD > definition, but maybe I'm thinking too wishfully here. > > Any information on this matter would be greatly appreciated since I am a > total code licence noob. > > Thank you. > > P.S. Yes, I know I could just release under the GPL, but I don't want to > turn people off of packaging my work into a useful product licensed > under BSD, or even make money from it. > > -- > Damon McDougall > http://damon-is-a-geek.com > B2.39 > Mathematics Institute > University of Warwick > Coventry > West Midlands > CV4 7AL > United Kingdom > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From ondrej.certik at gmail.com Fri Aug 3 11:03:12 2012 From: ondrej.certik at gmail.com (Ondřej Čertík) Date: Fri, 3 Aug 2012 08:03:12 -0700 Subject: [Numpy-discussion] Status of NumPy and Python 3.3 In-Reply-To: <1343692831.2223.171.camel@ronan-desktop> References: <20120728093622.GA27387@sleipnir.bytereef.org> <20120728181956.GA30702@sleipnir.bytereef.org> <1343597223.2223.131.camel@ronan-desktop> <1343610023.2223.156.camel@ronan-desktop> <1343620667.2223.160.camel@ronan-desktop> <1343664653.2223.166.camel@ronan-desktop> <1343667848.2223.167.camel@ronan-desktop> <1343692831.2223.171.camel@ronan-desktop> Message-ID: On Mon, Jul 30, 2012 at 5:00 PM, Ronan Lamy wrote: > On Monday 30 July 2012 at 11:07 -0700, Ondřej Čertík wrote: >> On Mon, Jul 30, 2012 at 10:04 AM, Ronan Lamy wrote: >> > On Monday 30 July 2012 at 17:10 +0100, Ronan Lamy wrote: >> >> On Monday 30 July 2012 at 04:57 +0100, Ronan Lamy wrote: >> >> > On Monday 30 July 2012 at 02:00 +0100, Ronan Lamy wrote: >> >> > >> >> > > >> >> > > Anyway, I managed to compile (by blanking >> >> > > numpy/distutils/command/__init__.py) and to run the tests. I only see >> >> > > the 2 pickle errors from your latest gist. So that's all good! >> >> > >> >> > And the cause of these errors is that running the test suite somehow >> >> > corrupts Python's internal cache of bytes objects, causing the >> >> > following: >> >> > >>> b'\x01XXX'[0:1] >> >> > b'\xbb' >> >> >> >> The culprit is test_pickle_string_overwrite() in test_regression.py. The >> >> test actually tries to check for that kind of problem, but on Python 3, >> >> it only manages to trigger it without detecting it. Here's a simple way >> >> to reproduce the issue: >> >> >> >> >>> a = numpy.array([1], 'b') >> >> >>> b = pickle.loads(pickle.dumps(a)) >> >> >>> b[0] = 77 >> >> >>> b'\x01 '[0:1] >> >> b'M' >> >> >> >> Actually, this problem is probably quite old: I can see it in 1.6.1 w/ >> >> Python 3.2.3. 3.3 only makes it more visible. >> >> >> >> I'll open an issue on GitHub ASAP. >> >> >> > https://github.com/numpy/numpy/issues/370 >> >> Thanks Ronan, nice work! >> >> Since you looked into this -- do you know a way to fix this? (Both >> NumPy and the test.) > > Pauli found out how to fix the code, so I'll try to send a PR tonight. So this PR is now in and the issue is fixed. As for the unicode swapping issues, I finally understand what is going on and I posted my current understanding into the Python tracker issue (http://bugs.python.org/issue15540) which was recently created for this same issue: http://bugs.python.org/msg167280 but it was determined that it is not a bug in Python so it is closed now.
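[Editor's note: for reference, here is Ronan's reproducer from the quoted exchange, condensed into a self-contained Python script. The session lines are verbatim from the thread; the imports and comments are added. On affected combinations (e.g. numpy 1.6.x on Python 3.2/3.3, as discussed above) the last line printed b'M' instead of b'\x01'.]

import pickle
import numpy

a = numpy.array([1], 'b')            # a one-byte integer array
b = pickle.loads(pickle.dumps(a))    # the unpickled array shared its buffer
                                     # with an interned Python bytes object
b[0] = 77                            # an in-place write then corrupted the
                                     # interpreter's bytes cache (77 == ord('M'))
print(b'\x01 '[0:1])                 # expected b'\x01'; prints b'M' when affected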
Finally, I have submitted a reworked version of my patch here: https://github.com/numpy/numpy/pull/372 It implements things in a clean way. Ondrej From jim.vickroy at noaa.gov Fri Aug 3 11:18:12 2012 From: jim.vickroy at noaa.gov (Jim Vickroy) Date: Fri, 03 Aug 2012 09:18:12 -0600 Subject: [Numpy-discussion] 2 greatest values, in a 3-d array, along one axis Message-ID: <501BEBB4.7030808@noaa.gov> Hello everyone, I'm trying to determine the 2 greatest values, in a 3-d array, along one axis. Here is an approach: # ------------------------------------------------------ # procedure to determine greatest 2 values for 3rd dimension of 3-d array ... import numpy, numpy.ma xcnt, ycnt, zcnt = 2,3,4 # actual case is (1024, 1024, 8) p0 = numpy.empty ((xcnt,ycnt,zcnt)) for z in range (zcnt) : p0[:,:,z] = z*z zaxis = 2 # max values to be determined for 3rd axis p0max = numpy.max (p0, axis=zaxis) # max values for zaxis maxindices = numpy.argmax (p0, axis=zaxis) # indices of max values p1 = p0.copy() # work array to scan for 2nd highest values j, i = numpy.meshgrid (numpy.arange (ycnt), numpy.arange (xcnt)) p1[i,j,maxindices] = numpy.NaN # flag all max values p1 = numpy.ma.masked_where (numpy.isnan (p1), p1) # hide all max values p1max = numpy.max (p1, axis=zaxis) # 2nd highest values for zaxis # additional code to analyze p0max and p1max goes here # ------------------------------------------------------ I would appreciate feedback on a simpler approach -- e.g., one that does not require masked arrays and or use of magic values like NaN. Thanks, -- jv From davidmenhur at gmail.com Fri Aug 3 11:41:03 2012 From: davidmenhur at gmail.com (=?UTF-8?B?RGHPgGlk?=) Date: Fri, 3 Aug 2012 16:41:03 +0100 Subject: [Numpy-discussion] 2 greatest values, in a 3-d array, along one axis In-Reply-To: <501BEBB4.7030808@noaa.gov> References: <501BEBB4.7030808@noaa.gov> Message-ID: Here goes a 1D simple implementation. It shouldn't be difficult to generalize to more dimensions, as all the functions support axis argument: >>> a=np.array([1, 2, 3, 5, 2]) >>> a.max() # This is the maximum value 5 >>> mask=np.zeros_like(a) >>> mask[np.argmax(a)]=1 >>> a=np.ma.masked_array(a, mask=mask) >>> a.max() # Second maximum value 3 I am using a masked array, so the structure of the array remains (ie, you can still use it in multi-dimensional arrays). I could have deleted de value, but then that wouldn't be useful for your case. On Fri, Aug 3, 2012 at 4:18 PM, Jim Vickroy wrote: > Hello everyone, > > I'm trying to determine the 2 greatest values, in a 3-d array, along one > axis. > > Here is an approach: > > # ------------------------------------------------------ > # procedure to determine greatest 2 values for 3rd dimension of 3-d > array ... 
> import numpy, numpy.ma > xcnt, ycnt, zcnt = 2,3,4 # actual case is (1024, 1024, 8) > p0 = numpy.empty ((xcnt,ycnt,zcnt)) > for z in range (zcnt) : p0[:,:,z] = z*z > zaxis = 2 # max > values to be determined for 3rd axis > p0max = numpy.max (p0, axis=zaxis) # max > values for zaxis > maxindices = numpy.argmax (p0, axis=zaxis) # > indices of max values > p1 = p0.copy() # work > array to scan for 2nd highest values > j, i = numpy.meshgrid (numpy.arange (ycnt), numpy.arange > (xcnt)) > p1[i,j,maxindices] = numpy.NaN # flag > all max values > p1 = numpy.ma.masked_where (numpy.isnan (p1), p1) # hide > all max values > p1max = numpy.max (p1, axis=zaxis) # 2nd > highest values for zaxis > # additional code to analyze p0max and p1max goes here > # ------------------------------------------------------ > > I would appreciate feedback on a simpler approach -- e.g., one that does > not require masked arrays and or use of magic values like NaN. > > Thanks, > -- jv > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From davidmenhur at gmail.com Fri Aug 3 12:01:18 2012 From: davidmenhur at gmail.com (=?UTF-8?B?RGHPgGlk?=) Date: Fri, 3 Aug 2012 17:01:18 +0100 Subject: [Numpy-discussion] 2 greatest values, in a 3-d array, along one axis In-Reply-To: References: <501BEBB4.7030808@noaa.gov> Message-ID: Here is the 3D implementation: http://pastebin.com/ERVLWhbS I was only able to do it using a double nested loop, but I am sure someone more clever than me can do a slicing trick to overcome it. Otherwise, I hope this is fast enough for your purpose. David. On Fri, Aug 3, 2012 at 4:41 PM, Da?id wrote: > Here goes a 1D simple implementation. It shouldn't be difficult to > generalize to more dimensions, as all the functions support axis > argument: > >>>> a=np.array([1, 2, 3, 5, 2]) >>>> a.max() # This is the maximum value > 5 >>>> mask=np.zeros_like(a) >>>> mask[np.argmax(a)]=1 >>>> a=np.ma.masked_array(a, mask=mask) >>>> a.max() # Second maximum value > 3 > > I am using a masked array, so the structure of the array remains (ie, > you can still use it in multi-dimensional arrays). I could have > deleted de value, but then that wouldn't be useful for your case. > > On Fri, Aug 3, 2012 at 4:18 PM, Jim Vickroy wrote: >> Hello everyone, >> >> I'm trying to determine the 2 greatest values, in a 3-d array, along one >> axis. >> >> Here is an approach: >> >> # ------------------------------------------------------ >> # procedure to determine greatest 2 values for 3rd dimension of 3-d >> array ... 
>> import numpy, numpy.ma >> xcnt, ycnt, zcnt = 2,3,4 # actual case is (1024, 1024, 8) >> p0 = numpy.empty ((xcnt,ycnt,zcnt)) >> for z in range (zcnt) : p0[:,:,z] = z*z >> zaxis = 2 # max >> values to be determined for 3rd axis >> p0max = numpy.max (p0, axis=zaxis) # max >> values for zaxis >> maxindices = numpy.argmax (p0, axis=zaxis) # >> indices of max values >> p1 = p0.copy() # work >> array to scan for 2nd highest values >> j, i = numpy.meshgrid (numpy.arange (ycnt), numpy.arange >> (xcnt)) >> p1[i,j,maxindices] = numpy.NaN # flag >> all max values >> p1 = numpy.ma.masked_where (numpy.isnan (p1), p1) # hide >> all max values >> p1max = numpy.max (p1, axis=zaxis) # 2nd >> highest values for zaxis >> # additional code to analyze p0max and p1max goes here >> # ------------------------------------------------------ >> >> I would appreciate feedback on a simpler approach -- e.g., one that does >> not require masked arrays and or use of magic values like NaN. >> >> Thanks, >> -- jv >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion From amcmorl at gmail.com Fri Aug 3 12:02:49 2012 From: amcmorl at gmail.com (Angus McMorland) Date: Fri, 3 Aug 2012 12:02:49 -0400 Subject: [Numpy-discussion] 2 greatest values, in a 3-d array, along one axis In-Reply-To: <501BEBB4.7030808@noaa.gov> References: <501BEBB4.7030808@noaa.gov> Message-ID: On 3 August 2012 11:18, Jim Vickroy wrote: > Hello everyone, > > I'm trying to determine the 2 greatest values, in a 3-d array, along one > axis. > > Here is an approach: > > # ------------------------------------------------------ > # procedure to determine greatest 2 values for 3rd dimension of 3-d > array ... > import numpy, numpy.ma > xcnt, ycnt, zcnt = 2,3,4 # actual case is (1024, 1024, 8) > p0 = numpy.empty ((xcnt,ycnt,zcnt)) > for z in range (zcnt) : p0[:,:,z] = z*z > zaxis = 2 # max > values to be determined for 3rd axis > p0max = numpy.max (p0, axis=zaxis) # max > values for zaxis > maxindices = numpy.argmax (p0, axis=zaxis) # > indices of max values > p1 = p0.copy() # work > array to scan for 2nd highest values > j, i = numpy.meshgrid (numpy.arange (ycnt), numpy.arange > (xcnt)) > p1[i,j,maxindices] = numpy.NaN # flag > all max values > p1 = numpy.ma.masked_where (numpy.isnan (p1), p1) # hide > all max values > p1max = numpy.max (p1, axis=zaxis) # 2nd > highest values for zaxis > # additional code to analyze p0max and p1max goes here > # ------------------------------------------------------ > > I would appreciate feedback on a simpler approach -- e.g., one that does > not require masked arrays and or use of magic values like NaN. > > Thanks, > -- jv > Here's a way that only uses argsort and fancy indexing: >>>a = np.random.randint(10, size=(3,3,3)) >>>print a [[[0 3 8] [4 2 8] [8 6 3]] [[0 6 7] [0 3 9] [0 9 1]] [[7 9 7] [5 2 9] [9 3 3]]] >>>am = a.argsort(axis=2) >>>maxs = a[np.arange(a.shape[0])[:,None], np.arange(a.shape[1])[None], am[:,:,-1]] >>>print maxs [[8 8 8] [7 9 9] [9 9 9]] >>>seconds = a[np.arange(a.shape[0])[:,None], np.arange(a.shape[1])[None], am[:,:,-2]] >>>print seconds [[3 4 6] [6 3 1] [7 5 3]] And to double check: >>>i, j = 0, 1 >>>l = a[i, j,:] >>>print l [4 2 8] >>>print np.max(a[i,j,:]), maxs[i,j] 8 8 >>>print l[np.argsort(l)][-2], second[i,j] 4 4 Good luck. Angus. 
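[Editor's note: a sketch packaging the argsort-plus-fancy-indexing idea above for Jim's actual (1024, 1024, 8) case. The two_largest helper is hypothetical, not from the thread; it simply restates Angus's recipe in reusable form.]

import numpy as np

def two_largest(a):
    # indices that sort along the last axis; the last two entries per
    # (i, j) cell point at the largest and second-largest values
    am = a.argsort(axis=2)
    i = np.arange(a.shape[0])[:, None]   # broadcast row indices
    j = np.arange(a.shape[1])[None, :]   # broadcast column indices
    return a[i, j, am[:, :, -1]], a[i, j, am[:, :, -2]]

p0 = np.random.rand(1024, 1024, 8)
p0max, p1max = two_largest(p0)
assert (p0max == p0.max(axis=2)).all()   # agrees with numpy.max along axis 2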
-- AJC McMorland Post-doctoral research fellow Neurobiology, University of Pittsburgh -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim.vickroy at noaa.gov Fri Aug 3 14:18:56 2012 From: jim.vickroy at noaa.gov (Jim Vickroy) Date: Fri, 03 Aug 2012 12:18:56 -0600 Subject: [Numpy-discussion] 2 greatest values, in a 3-d array, along one axis In-Reply-To: References: <501BEBB4.7030808@noaa.gov> Message-ID: <501C1610.4040001@noaa.gov> Thanks for each of the improved solutions. The one using argsort took a little while for me to understand. I have a long way to go to fully utilize fancy indexing! -- jv On 8/3/2012 10:02 AM, Angus McMorland wrote: > On 3 August 2012 11:18, Jim Vickroy > wrote: > > Hello everyone, > > I'm trying to determine the 2 greatest values, in a 3-d array, > along one > axis. > > Here is an approach: > > # ------------------------------------------------------ > # procedure to determine greatest 2 values for 3rd dimension of 3-d > array ... > import numpy, numpy.ma > xcnt, ycnt, zcnt = 2,3,4 # actual case is (1024, 1024, 8) > p0 = numpy.empty ((xcnt,ycnt,zcnt)) > for z in range (zcnt) : p0[:,:,z] = z*z > zaxis = 2 > # max > values to be determined for 3rd axis > p0max = numpy.max (p0, axis=zaxis) > # max > values for zaxis > maxindices = numpy.argmax (p0, axis=zaxis) # > indices of max values > p1 = p0.copy() > # work > array to scan for 2nd highest values > j, i = numpy.meshgrid (numpy.arange (ycnt), numpy.arange > (xcnt)) > p1[i,j,maxindices] = numpy.NaN > # flag > all max values > p1 = numpy.ma.masked_where (numpy.isnan (p1), p1) > # hide > all max values > p1max = numpy.max (p1, axis=zaxis) > # 2nd > highest values for zaxis > # additional code to analyze p0max and p1max goes here > # ------------------------------------------------------ > > I would appreciate feedback on a simpler approach -- e.g., one > that does > not require masked arrays and or use of magic values like NaN. > > Thanks, > -- jv > > > Here's a way that only uses argsort and fancy indexing: > > >>>a = np.random.randint(10, size=(3,3,3)) > >>>print a > > [[[0 3 8] > [4 2 8] > [8 6 3]] > > [[0 6 7] > [0 3 9] > [0 9 1]] > > [[7 9 7] > [5 2 9] > [9 3 3]]] > > >>>am = a.argsort(axis=2) > >>>maxs = a[np.arange(a.shape[0])[:,None], > np.arange(a.shape[1])[None], am[:,:,-1]] > >>>print maxs > > [[8 8 8] > [7 9 9] > [9 9 9]] > > >>>seconds = a[np.arange(a.shape[0])[:,None], > np.arange(a.shape[1])[None], am[:,:,-2]] > >>>print seconds > > [[3 4 6] > [6 3 1] > [7 5 3]] > > And to double check: > > >>>i, j = 0, 1 > >>>l = a[i, j,:] > >>>print l > > [4 2 8] > > >>>print np.max(a[i,j,:]), maxs[i,j] > > 8 8 > > >>>print l[np.argsort(l)][-2], second[i,j] > > 4 4 > > Good luck. > > Angus. > -- > AJC McMorland > Post-doctoral research fellow > Neurobiology, University of Pittsburgh > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From d.s.seljebotn at astro.uio.no Fri Aug 3 14:57:15 2012 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Fri, 03 Aug 2012 20:57:15 +0200 Subject: [Numpy-discussion] Licensing question In-Reply-To: <20120802204453.GI363@quagmire.local> References: <20120802204453.GI363@quagmire.local> Message-ID: <501C1F0B.8020805@astro.uio.no> On 08/02/2012 10:44 PM, Damon McDougall wrote: > Hi, > > I have a question about the licence for NumPy's codebase. I am currently > writing a library and I'd like to release under some BSD-type licence. > Unfortunately, my choice to link against MIT's FFTW library (released > under the GPL) means that, in its current state, this is not possible. > I'm an avid NumPy user and thought to myself that, since NumPy's licence > is BSD, I'd be able to use some of the source code (with due credit, of > course) instead of FFTW. Is this possible? I mean, can I redistribute > *PART* of NumPy's codebase? Namely, the fftpack.c file? I was under the > impression that I could only redistribute BSD source code as a whole and > then I read the licence more carefully and it states that I can modify > the source to suit my needs. I consider 'redistributing a single file > and ignoring the other files' as a 'modification' under the BSD > definition, but maybe I'm thinking too wishfully here. > > Any information on this matter would be greatly appreciated since I am a > total code licence noob. > > Thank you. > > P.S. Yes, I know I could just release under the GPL, but I don't want to > turn people off of packaging my work into a useful product licensed > under BSD, or even make money from it. Not related to licensing, but here's another port of FFTPACK to C by Martin Reinecke, licensed under BSD. The README has the links to the original Fortran sources that this is based on. https://github.com/dagss/libfftpack Dag From e.antero.tammi at gmail.com Fri Aug 3 15:09:13 2012 From: e.antero.tammi at gmail.com (eat) Date: Fri, 3 Aug 2012 22:09:13 +0300 Subject: [Numpy-discussion] 2 greatest values, in a 3-d array, along one axis In-Reply-To: References: <501BEBB4.7030808@noaa.gov> Message-ID: Hi, On Fri, Aug 3, 2012 at 7:02 PM, Angus McMorland wrote: > On 3 August 2012 11:18, Jim Vickroy wrote: > >> Hello everyone, >> >> I'm trying to determine the 2 greatest values, in a 3-d array, along one >> axis. >> >> Here is an approach: >> >> # ------------------------------------------------------ >> # procedure to determine greatest 2 values for 3rd dimension of 3-d >> array ... >> import numpy, numpy.ma >> xcnt, ycnt, zcnt = 2,3,4 # actual case is (1024, 1024, 8) >> p0 = numpy.empty ((xcnt,ycnt,zcnt)) >> for z in range (zcnt) : p0[:,:,z] = z*z >> zaxis = 2 # max >> values to be determined for 3rd axis >> p0max = numpy.max (p0, axis=zaxis) # max >> values for zaxis >> maxindices = numpy.argmax (p0, axis=zaxis) # >> indices of max values >> p1 = p0.copy() # work >> array to scan for 2nd highest values >> j, i = numpy.meshgrid (numpy.arange (ycnt), numpy.arange >> (xcnt)) >> p1[i,j,maxindices] = numpy.NaN # flag >> all max values >> p1 = numpy.ma.masked_where (numpy.isnan (p1), p1) # hide >> all max values >> p1max = numpy.max (p1, axis=zaxis) # 2nd >> highest values for zaxis >> # additional code to analyze p0max and p1max goes here >> # ------------------------------------------------------ >> >> I would appreciate feedback on a simpler approach -- e.g., one that does >> not require masked arrays and or use of magic values like NaN. 
>> >> Thanks, >> -- jv > Here's a way that only uses argsort and fancy indexing: > >>>a = np.random.randint(10, size=(3,3,3)) > >>>print a > > [[[0 3 8] > [4 2 8] > [8 6 3]] > > [[0 6 7] > [0 3 9] > [0 9 1]] > > [[7 9 7] > [5 2 9] > [9 3 3]]] > > >>>am = a.argsort(axis=2) > >>>maxs = a[np.arange(a.shape[0])[:,None], np.arange(a.shape[1])[None], > am[:,:,-1]] > >>>print maxs > > [[8 8 8] > [7 9 9] > [9 9 9]] > > >>>seconds = a[np.arange(a.shape[0])[:,None], np.arange(a.shape[1])[None], > am[:,:,-2]] > >>>print seconds > > [[3 4 6] > [6 3 1] > [7 5 3]] > > And to double check: > > >>>i, j = 0, 1 > >>>l = a[i, j,:] > >>>print l > > [4 2 8] > > >>>print np.max(a[i,j,:]), maxs[i,j] > > 8 8 > > >>>print l[np.argsort(l)][-2], second[i,j] > > 4 4 > > Good luck. > Here the np.indices function may help a little bit, like: In []: a= randint(10, size= (3, 2, 4)) In []: a Out[]: array([[[1, 9, 6, 6], [0, 3, 4, 2]], [[4, 2, 4, 4], [5, 9, 4, 4]], [[6, 1, 4, 3], [5, 4, 5, 5]]]) In []: ndx= indices(a.shape) In []: # largest In []: a[a.argsort(0), ndx[1], ndx[2]][-1] Out[]: array([[6, 9, 6, 6], [5, 9, 5, 5]]) In []: # second largest In []: a[a.argsort(0), ndx[1], ndx[2]][-2] Out[]: array([[4, 2, 4, 4], [5, 4, 4, 4]]) My 2 cents, -eat > > Angus. > -- > AJC McMorland > Post-doctoral research fellow > Neurobiology, University of Pittsburgh > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From travis at continuum.io Fri Aug 3 21:03:17 2012 From: travis at continuum.io (Travis Oliphant) Date: Fri, 3 Aug 2012 20:03:17 -0500 Subject: [Numpy-discussion] Unicode revisited Message-ID: Hey all, Ondrej has been working hard with feedback from many others on improving Unicode support in NumPy (especially for Python 3.3). Looking at what Python has done in Python 3.3 (PEP 393) and chatting on the Python issue tracker with the author of that PEP has made me wonder if we aren't "doing the wrong thing" in NumPy quite often. Basically, NumPy only supports UTF-32 in its Unicode representation. All bytes in NumPy arrays should be either UTF-32LE or UTF-32BE. This is all pretty easy to understand as long as you stick with NumPy arrays only. The difficulty starts when you start to interact with the unicode array scalar (which is the same data-structure exactly as a Python unicode object with a different type-name --- numpy.unicode_). However, I overlooked the "encoding" argument to the standard "unicode" constructor which might have simplified what we are doing. If I understand things correctly, now, all we need to do is to "decode" the UTF-32LE or UTF-32BE raw bytes in the array (depending on the dtype) into a unicode object. This is easily accomplished with numpy.unicode_(<data>, 'utf_32_be' or 'utf_32_le'). There is also an "encoding" equivalent to go from the Python unicode object to the bytes representation in the NumPy array. I think this is what we should be doing in most of the places and it should considerably simplify the Unicode code in NumPy --- eliminating possibly the ucsnarrow.c file. Am I missing something?
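[Editor's note: a minimal sketch of the decode/encode round trip Travis describes, assuming a little-endian '<U' array so that the buffer holds UTF-32LE bytes; this is an illustration, not code from the thread.]

import numpy as np

arr = np.array([u'abc'], dtype='<U3')  # buffer stores UTF-32LE code units
raw = arr.tostring()                   # 12 raw bytes, 4 per character
u = np.unicode_(raw, 'utf_32_le')      # "decode": raw bytes -> unicode scalar
assert u == u'abc'
assert u.encode('utf_32_le') == raw    # "encode": back to the array's bytes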
Thanks, -Travis From charlesr.harris at gmail.com Fri Aug 3 22:05:18 2012 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 3 Aug 2012 20:05:18 -0600 Subject: [Numpy-discussion] Unicode revisited In-Reply-To: References: Message-ID: On Fri, Aug 3, 2012 at 7:03 PM, Travis Oliphant wrote: > Hey all, > > Ondrej has been working hard with feedback from many others on improving > Unicode support in NumPy (especially for Python 3.3). Looking at what > Python has done in Python 3.3 (PEP 393) and chatting on the Python issue > tracker with the author of that PEP has made me wonder if we aren't "doing > the wrong thing" in NumPy quite often. > > Basically, NumPy only supports UTF-32 in it's Unicode representation. > All bytes in NumPy arrays should be either UTF-32LE or UTF-32BE. This is > all pretty easy to understand as long as you stick with NumPy arrays only. > > The difficulty starts when you start to interact with the unicode array > scalar (which is the same data-structure exactly as a Python unicode object > with a different type-name --- numpy.unicode_). However, I overlooked > the "encoding" argument to the standard "unicode" constructor which might > have simplified what we are doing. If I understand things correctly, > now, all we need to do is to "decode" the UTF-32LE or UTF-32BE raw bytes in > the array (depending on the dtype) into a unicode object. > > This is easily accomplished with numpy.unicode_(, > 'utf_32_be' or 'utf_32_le'). There is also an "encoding" equivalent to > go from the Python unicode object to the bytes representation in the NumPy > array. I think this is what we should be doing in most of the places and > it should considerably simplify the Unicode code in NumPy --- eliminating > possibly the ucsnarrow.c file. > > Am I missing something? > > I can't comment on the rest, but I'd be happy to see the end of the ucsnarrow.c file. It needs more work to be properly generalized and if there is a way to avoid that, so much the better. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From ondrej.certik at gmail.com Fri Aug 3 23:03:14 2012 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Fri, 3 Aug 2012 20:03:14 -0700 Subject: [Numpy-discussion] Unicode revisited In-Reply-To: References: Message-ID: On Fri, Aug 3, 2012 at 6:03 PM, Travis Oliphant wrote: > Hey all, > > Ondrej has been working hard with feedback from many others on improving Unicode support in NumPy (especially for Python 3.3). Looking at what Python has done in Python 3.3 (PEP 393) and chatting on the Python issue tracker with the author of that PEP has made me wonder if we aren't "doing the wrong thing" in NumPy quite often. > > Basically, NumPy only supports UTF-32 in it's Unicode representation. All bytes in NumPy arrays should be either UTF-32LE or UTF-32BE. This is all pretty easy to understand as long as you stick with NumPy arrays only. > > The difficulty starts when you start to interact with the unicode array scalar (which is the same data-structure exactly as a Python unicode object with a different type-name --- numpy.unicode_). However, I overlooked the "encoding" argument to the standard "unicode" constructor which might have simplified what we are doing. If I understand things correctly, now, all we need to do is to "decode" the UTF-32LE or UTF-32BE raw bytes in the array (depending on the dtype) into a unicode object. > > This is easily accomplished with numpy.unicode_(, 'utf_32_be' or 'utf_32_le'). 
'utf_32_be' or 'utf_32_le'). There is also an "encoding" equivalent to go from the Python unicode object to the bytes representation in the NumPy array. I think this is what we should be doing in most of the places and it should considerably simplify the Unicode code in NumPy --- eliminating possibly the ucsnarrow.c file. > > Am I missing something? I guess we'll try and see. :) Would it make sense to merge https://github.com/numpy/numpy/pull/372 now, since it will make NumPy work in Python 3.3 (and it seems to me that the implementation is reasonable)? And then I'll work on trying to use your new approach, both for 2.7 and 3.2 and 3.3. Ondrej From stefan-usenet at bytereef.org Sat Aug 4 06:42:09 2012 From: stefan-usenet at bytereef.org (Stefan Krah) Date: Sat, 4 Aug 2012 12:42:09 +0200 Subject: [Numpy-discussion] Unicode revisited In-Reply-To: References: Message-ID: <20120804104209.GA13393@sleipnir.bytereef.org> Travis Oliphant wrote: > The difficulty starts when you start to interact with the unicode array scalar (which is the same data-structure exactly as a Python unicode object with a different type-name --- numpy.unicode_). However, I overlooked the "encoding" argument to the standard "unicode" constructor which might have simplified what we are doing. If I understand things correctly, now, all we need to do is to "decode" the UTF-32LE or UTF-32BE raw bytes in the array (depending on the dtype) into a unicode object. > > This is easily accomplished with numpy.unicode_(<data>, 'utf_32_be' or 'utf_32_le'). There is also an "encoding" equivalent to go from the Python unicode object to the bytes representation in the NumPy array. I think this is what we should be doing in most of the places and it should considerably simplify the Unicode code in NumPy --- eliminating possibly the ucsnarrow.c file. That sounds right to me. On the C-level for PyArray_Scalar() this should work for all Python versions >= 2.6, provided that data is aligned in the case of a narrow build: /* data is assumed to be aligned */ if (type_num == NPY_UNICODE) { PyObject *u; PyObject *args; int byteorder; switch (descr->byteorder) { case '<': byteorder = -1; case '>': byteorder = 1; default: /* '=', '|' */ byteorder = 0; } /* function exists since 2.6 */ u = PyUnicode_DecodeUTF32(data, itemsize, NULL, &byteorder); if (u == NULL) { return NULL; } args = Py_BuildValue("(N)", u); if (args == NULL) { return NULL; } u = type->tp_new(type, args, NULL); Py_DECREF(args); return u; } All newbyteorder() tests have to be deleted of course. I also think that ucsnarrow.c is no longer needed. Stefan Krah From cournape at gmail.com Sat Aug 4 07:04:35 2012 From: cournape at gmail.com (David Cournapeau) Date: Sat, 4 Aug 2012 12:04:35 +0100 Subject: [Numpy-discussion] Moving away from using accelerate framework on mac os x ? Message-ID: Hi, During last PyCon, Olivier Grisel (from scikits-learn fame) and I looked into a nasty bug on mac os x: https://gist.github.com/2027412. The short story is that I believe this means numpy cannot be used with multiprocessing if linked against accelerate framework, and as such we should think about giving up on accelerate, and use e.g. ATLAS on mac for our official binaries. Long story: we recently received an answer where the engineers mention that using blas on each 'side' of a fork is not supported. The meat of the email is attached below. Thoughts?
David ---------- Forwarded message ---------- From: Date: 2012/8/2 Subject: Bug ID 11036478: Segfault when calling dgemm with Accelerate / GCD after in a forked process To: olivier.grisel at gmail.com Hi Olivier, Thank you for contacting us regarding Bug ID# 11036478. Thank you for filing this bug report. This usage of fork() is not supported on our platform. For API outside of POSIX, including GCD and technologies like Accelerate, we do not support usage on both sides of a fork(). For this reason among others, use of fork() without exec is discouraged in general in processes that use layers above POSIX. We recommend that you either restrict usage of blas to the parent or the child process but not both, or that you switch to using GCD or pthreads rather than forking to create parallelism. From aron at ahmadia.net Sat Aug 4 07:14:40 2012 From: aron at ahmadia.net (Aron Ahmadia) Date: Sat, 4 Aug 2012 14:14:40 +0300 Subject: [Numpy-discussion] Moving away from using accelerate framework on mac os x ? In-Reply-To: References: Message-ID: Hi David, Apple's response here is somewhat confusing, but I will add that on the supercomputing side of things we rarely fork, as this is not well-supported from the vendors or the hardware (it's hard enough to performantly spawn 500,000 processes statically, doing this dynamically becomes even more challenging). This sounds like an issue in Python multiprocessing itself, as I guess many other Apple libraries will fail or crash with the fork-no-exec model. My suggestion would be that numpy continue to integrate with Accelerate but prefer a macports or brew supplied blas, if available. This should probably also be filed as a wont-fix bug on the tracker so anybody who hits the same problem knows that it's on the system side and not us. A On Sat, Aug 4, 2012 at 2:04 PM, David Cournapeau wrote: > Hi, > > During last PyCon, Olivier Grisel (from scikits-learn fame) and myself > looked into a nasty bug on mac os x: https://gist.github.com/2027412. > The short story is that I believe this means numpy cannot be used with > multiprocessing if linked against accelerate framework, and as such we > should think about giving up on accelerate, and use e.g. ATLAS on mac > for our official binaries. > > Long story: we recently received a answer where the engineers mention > that using blas on each 'side' of a fork is not supported. The meat of > the email is attached below > > thoughts ? > > David > > ---------- Forwarded message ---------- > From: > Date: 2012/8/2 > Subject: Bug ID 11036478: Segfault when calling dgemm with Accelerate > / GCD after in a forked process > To: olivier.grisel at gmail.com > > > Hi Olivier, > > Thank you for contacting us regarding Bug ID# 11036478. > > Thank you for filing this bug report. > > This usage of fork() is not supported on our platform. > > For API outside of POSIX, including GCD and technologies like > Accelerate, we do not support usage on both sides of a fork(). For > this reason among others, use of fork() without exec is discouraged in > general in processes that use layers above POSIX. > > We recommend that you either restrict usage of blas to the parent or > the child process but not both, or that you switch to using GCD or > pthreads rather than forking to create parallelism. > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... 
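To make the failure mode concrete, the reproduction in the gist above boils down to calling into BLAS on both sides of a fork(), roughly like this (a sketch -- the sizes and the pool setup are illustrative, not the exact script):

import numpy as np
from multiprocessing import Pool

a = np.random.rand(500, 500)
np.dot(a, a)                   # BLAS call on the parent's side of the fork

def child(_):
    return np.dot(a, a).sum()  # BLAS call again on the child's side

if __name__ == '__main__':
    # reported to crash when numpy is linked against Accelerate;
    # the thread's claim is that the same pattern is fine against ATLAS
    print(Pool(2).map(child, range(2)))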
URL: From njs at pobox.com Sat Aug 4 07:49:24 2012 From: njs at pobox.com (Nathaniel Smith) Date: Sat, 4 Aug 2012 12:49:24 +0100 Subject: [Numpy-discussion] Unicode revisited In-Reply-To: <20120804104209.GA13393@sleipnir.bytereef.org> References: <20120804104209.GA13393@sleipnir.bytereef.org> Message-ID: On Sat, Aug 4, 2012 at 11:42 AM, Stefan Krah wrote: > switch (descr->byteorder) { > case '<': > byteorder = -1; > case '>': > byteorder = 1; > default: /* '=', '|' */ > byteorder = 0; > } I think you might want some breaks in here... -n From stefan-usenet at bytereef.org Sat Aug 4 07:58:53 2012 From: stefan-usenet at bytereef.org (Stefan Krah) Date: Sat, 4 Aug 2012 13:58:53 +0200 Subject: [Numpy-discussion] Unicode revisited In-Reply-To: References: <20120804104209.GA13393@sleipnir.bytereef.org> Message-ID: <20120804115853.GA14152@sleipnir.bytereef.org> Nathaniel Smith wrote: > On Sat, Aug 4, 2012 at 11:42 AM, Stefan Krah wrote: > > switch (descr->byteorder) { > > case '<': > > byteorder = -1; > > case '>': > > byteorder = 1; > > default: /* '=', '|' */ > > byteorder = 0; > > } > > I think you might want some breaks in here... Indeed. Shame on me for posting quick-and-dirty code. Stefan Krah From d.s.seljebotn at astro.uio.no Sat Aug 4 08:31:40 2012 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Sat, 04 Aug 2012 14:31:40 +0200 Subject: [Numpy-discussion] Moving away from using accelerate framework on mac os x ? In-Reply-To: References: Message-ID: <501D162C.7000101@astro.uio.no> On 08/04/2012 01:14 PM, Aron Ahmadia wrote: > Hi David, > > Apple's response here is somewhat confusing, but I will add that on the > supercomputing side of things we rarely fork, as this is not > well-supported from the vendors or the hardware (it's hard enough to > performantly spawn 500,000 processes statically, doing this dynamically > becomes even more challenging). This sounds like an issue in Python > multiprocessing itself, as I guess many other Apple libraries will fail > or crash with the fork-no-exec model. OpenMP is pretty widespread in supercomputing, and so is OpenMP + multiple threads using LAPACK at the same time. This does NOT sound like any issue with multiprocessing to me. Dag > > My suggestion would be that numpy continue to integrate with Accelerate > but prefer a macports or brew supplied blas, if available. This should > probably also be filed as a wont-fix bug on the tracker so anybody who > hits the same problem knows that it's on the system side and not us. > > A > > On Sat, Aug 4, 2012 at 2:04 PM, David Cournapeau > wrote: > > Hi, > > During last PyCon, Olivier Grisel (from scikits-learn fame) and myself > looked into a nasty bug on mac os x: https://gist.github.com/2027412. > The short story is that I believe this means numpy cannot be used with > multiprocessing if linked against accelerate framework, and as such we > should think about giving up on accelerate, and use e.g. ATLAS on mac > for our official binaries. > > Long story: we recently received a answer where the engineers mention > that using blas on each 'side' of a fork is not supported. The meat of > the email is attached below > > thoughts ? > > David > > ---------- Forwarded message ---------- > From: > > Date: 2012/8/2 > Subject: Bug ID 11036478: Segfault when calling dgemm with Accelerate > / GCD after in a forked process > To: olivier.grisel at gmail.com > > > Hi Olivier, > > Thank you for contacting us regarding Bug ID# 11036478. > > Thank you for filing this bug report. 
> > This usage of fork() is not supported on our platform. > > For API outside of POSIX, including GCD and technologies like > Accelerate, we do not support usage on both sides of a fork(). For > this reason among others, use of fork() without exec is discouraged in > general in processes that use layers above POSIX. > > We recommend that you either restrict usage of blas to the parent or > the child process but not both, or that you switch to using GCD or > pthreads rather than forking to create parallelism. > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From cournape at gmail.com Sat Aug 4 08:36:16 2012 From: cournape at gmail.com (David Cournapeau) Date: Sat, 4 Aug 2012 13:36:16 +0100 Subject: [Numpy-discussion] Moving away from using accelerate framework on mac os x ? In-Reply-To: References: Message-ID: On Sat, Aug 4, 2012 at 12:14 PM, Aron Ahmadia wrote: > Hi David, > > Apple's response here is somewhat confusing, but I will add that on the > supercomputing side of things we rarely fork, as this is not well-supported > from the vendors or the hardware (it's hard enough to performantly spawn > 500,000 processes statically, doing this dynamically becomes even more > challenging). This sounds like an issue in Python multiprocessing itself, > as I guess many other Apple libraries will fail or crash with the > fork-no-exec model. > > My suggestion would be that numpy continue to integrate with Accelerate but > prefer a macports or brew supplied blas, if available. This should probably > also be filed as a wont-fix bug on the tracker so anybody who hits the same > problem knows that it's on the system side and not us. To be clear, I am not suggesting removing support for linking against accelerate, just moving away from it for our binary releases. David From cournape at gmail.com Sat Aug 4 08:37:08 2012 From: cournape at gmail.com (David Cournapeau) Date: Sat, 4 Aug 2012 13:37:08 +0100 Subject: [Numpy-discussion] Unicode revisited In-Reply-To: <20120804115853.GA14152@sleipnir.bytereef.org> References: <20120804104209.GA13393@sleipnir.bytereef.org> <20120804115853.GA14152@sleipnir.bytereef.org> Message-ID: On Sat, Aug 4, 2012 at 12:58 PM, Stefan Krah wrote: > Nathaniel Smith wrote: >> On Sat, Aug 4, 2012 at 11:42 AM, Stefan Krah wrote: >> > switch (descr->byteorder) { >> > case '<': >> > byteorder = -1; >> > case '>': >> > byteorder = 1; >> > default: /* '=', '|' */ >> > byteorder = 0; >> > } >> >> I think you might want some breaks in here... > > Indeed. Shame on me for posting quick-and-dirty code. Maybe we should be unit-testing our emails too :) David From aron at ahmadia.net Sat Aug 4 09:45:58 2012 From: aron at ahmadia.net (Aron Ahmadia) Date: Sat, 4 Aug 2012 16:45:58 +0300 Subject: [Numpy-discussion] Moving away from using accelerate framework on mac os x ? In-Reply-To: <501D162C.7000101@astro.uio.no> References: <501D162C.7000101@astro.uio.no> Message-ID: Responding to both Dag and David, > OpenMP is pretty widespread in supercomputing, and so is OpenMP + > multiple threads using LAPACK at the same time. This does NOT sound like > any issue with multiprocessing to me. > These are built on dynamic thread-level (pthreads), not process-level (multiprocessing) parallelism.
My original statement stands. | To be clear, I am not suggesting removing support for linking against | accelerate, just to go away from it for our binary releases. Oops, I missed that last part of your first email. If this is a problem for people, I agree that the binaries should avoid Accelerate. A From ondrej.certik at gmail.com Sat Aug 4 14:14:03 2012 From: ondrej.certik at gmail.com (Ondřej Čertík) Date: Sat, 4 Aug 2012 11:14:03 -0700 Subject: [Numpy-discussion] Status of NumPy and Python 3.3 In-Reply-To: References: <20120728093622.GA27387@sleipnir.bytereef.org> <20120728181956.GA30702@sleipnir.bytereef.org> <1343597223.2223.131.camel@ronan-desktop> <1343610023.2223.156.camel@ronan-desktop> <1343620667.2223.160.camel@ronan-desktop> <1343664653.2223.166.camel@ronan-desktop> <1343667848.2223.167.camel@ronan-desktop> <1343692831.2223.171.camel@ronan-desktop> Message-ID: On Fri, Aug 3, 2012 at 8:03 AM, Ondřej Čertík wrote: > On Mon, Jul 30, 2012 at 5:00 PM, Ronan Lamy wrote: >> On Monday 30 July 2012 at 11:07 -0700, Ondřej Čertík wrote: >>> On Mon, Jul 30, 2012 at 10:04 AM, Ronan Lamy wrote: >>> > On Monday 30 July 2012 at 17:10 +0100, Ronan Lamy wrote: >>> >> On Monday 30 July 2012 at 04:57 +0100, Ronan Lamy wrote: >>> >> > On Monday 30 July 2012 at 02:00 +0100, Ronan Lamy wrote: >>> >> > >>> >> > > >>> >> > > Anyway, I managed to compile (by blanking >>> >> > > numpy/distutils/command/__init__.py) and to run the tests. I only see >>> >> > > the 2 pickle errors from your latest gist. So that's all good! >>> >> > >>> >> > And the cause of these errors is that running the test suite somehow >>> >> > corrupts Python's internal cache of bytes objects, causing the >>> >> > following: >>> >> > >>> b'\x01XXX'[0:1] >>> >> > b'\xbb' >>> >> >>> >> The culprit is test_pickle_string_overwrite() in test_regression.py. The >>> >> test actually tries to check for that kind of problem, but on Python 3, >>> >> it only manages to trigger it without detecting it. Here's a simple way >>> >> to reproduce the issue: >>> >> >>> >> >>> a = numpy.array([1], 'b') >>> >> >>> b = pickle.loads(pickle.dumps(a)) >>> >> >>> b[0] = 77 >>> >> >>> b'\x01 '[0:1] >>> >> b'M' >>> >> >>> >> Actually, this problem is probably quite old: I can see it in 1.6.1 w/ >>> >> Python 3.2.3. 3.3 only makes it more visible. >>> >> >>> >> I'll open an issue on GitHub ASAP. >>> >> >>> > https://github.com/numpy/numpy/issues/370 >>> >>> Thanks Ronan, nice work! >>> >>> Since you looked into this -- do you know a way to fix this? (Both >>> NumPy and the test.) >> >> Pauli found out how to fix the code, so I'll try to send a PR tonight. > > > So this PR is now in and the issue is fixed. > > As for the unicode byte-swapping issues, I finally understand what is > going on and I posted my current understanding in the Python tracker > issue (http://bugs.python.org/issue15540) which was recently created > for this same issue: > > http://bugs.python.org/msg167280 > > but it was determined that it is not a bug in Python so it is closed > now. Finally, I have submitted a reworked version of my patch here: > > https://github.com/numpy/numpy/pull/372 > > It implements things in a clean way. Final update: the patch is in, so NumPy now passes all tests in Python 3.3. There seems to be a better way to support unicode and that is discussed in another thread.
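For the record, the reproduction scattered through the quoting above, collected into one runnable piece (Python 3; prints b'\x01' once the fix is in):

import pickle
import numpy

a = numpy.array([1], 'b')          # one int8 element
b = pickle.loads(pickle.dumps(a))  # per the thread, the unpickled array ends up
                                   # sharing its buffer with a cached bytes object
b[0] = 77                          # ... so this write corrupts the cache
print(b'\x01 '[0:1])               # b'M' on affected versions, b'\x01' when fixed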
Ondrej From ralf.gommers at gmail.com Sun Aug 5 12:31:03 2012 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 5 Aug 2012 18:31:03 +0200 Subject: [Numpy-discussion] Moving away from using accelerate framework on mac os x ? In-Reply-To: References: Message-ID: On Sat, Aug 4, 2012 at 2:36 PM, David Cournapeau wrote: > On Sat, Aug 4, 2012 at 12:14 PM, Aron Ahmadia wrote: > > Hi David, > > > > Apple's response here is somewhat confusing, but I will add that on the > > supercomputing side of things we rarely fork, as this is not > well-supported > > from the vendors or the hardware (it's hard enough to performantly spawn > > 500,000 processes statically, doing this dynamically becomes even more > > challenging). This sounds like an issue in Python multiprocessing > itself, > > as I guess many other Apple libraries will fail or crash with the > > fork-no-exec model. > > > > My suggestion would be that numpy continue to integrate with Accelerate > but > > prefer a macports or brew supplied blas, if available. This should > probably > > also be filed as a wont-fix bug on the tracker so anybody who hits the > same > > problem knows that it's on the system side and not us. > > To be clear, I am not suggesting removing support for linking against > accelerate, just to go away from it for our binary releases. Would there be any issues when mixing for example a numpy built against ATLAS with a scipy built against Accelerate? Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From x.piter at gmail.com Mon Aug 6 05:04:50 2012 From: x.piter at gmail.com (Petro) Date: Mon, 06 Aug 2012 11:04:50 +0200 Subject: [Numpy-discussion] how to uninstall numpy Message-ID: <87ehnktmq5.fsf@cica.cica> Hi list This is a general python question but I will ask it here. To install a new numpy on Debian testing I remove installed version with "aptitude purge python-numpy" download numpy source code and install numpy with "sudo python setup.py install". If I want to remove the installed numpy how do I proceed? Thanks. Petro From sturla at molden.no Mon Aug 6 05:20:03 2012 From: sturla at molden.no (Sturla Molden) Date: Mon, 6 Aug 2012 11:20:03 +0200 Subject: [Numpy-discussion] Licensing question In-Reply-To: <49821359-DF1F-4CDF-841C-26F314D20E28@continuum.io> References: <20120802204453.GI363@quagmire.local> <49821359-DF1F-4CDF-841C-26F314D20E28@continuum.io> Message-ID: <02567B46-AB92-42E3-87D9-02A496D39357@molden.no> But the Fortran FFTPACK is GPL, or has the licence been changed? http://people.sc.fsu.edu/~jburkardt/f77_src/fftpack5.1/fftpack5.1.html Sturla Sendt fra min iPad Den 3. aug. 2012 kl. 07:52 skrev Travis Oliphant : > This should be completely fine. The fftpack.h file indicates that fftpack code came from Tela originally anyway and was translated from the Fortran code FFTPACK. > > Good luck with your project. > > -Travis > > > On Aug 2, 2012, at 3:44 PM, Damon McDougall wrote: > >> Hi, >> >> I have a question about the licence for NumPy's codebase. I am currently >> writing a library and I'd like to release under some BSD-type licence. >> Unfortunately, my choice to link against MIT's FFTW library (released >> under the GPL) means that, in its current state, this is not possible. >> I'm an avid NumPy user and thought to myself that, since NumPy's licence >> is BSD, I'd be able to use some of the source code (with due credit, of >> course) instead of FFTW. Is this possible? I mean, can I redistribute >> *PART* of NumPy's codebase? Namely, the fftpack.c file? 
I was under the >> impression that I could only redistribute BSD source code as a whole and >> then I read the licence more carefully and it states that I can modify >> the source to suit my needs. I consider 'redistributing a single file >> and ignoring the other files' as a 'modification' under the BSD >> definition, but maybe I'm thinking too wishfully here. >> >> Any information on this matter would be greatly appreciated since I am a >> total code licence noob. >> >> Thank you. >> >> P.S. Yes, I know I could just release under the GPL, but I don't want to >> turn people off of packaging my work into a useful product licensed >> under BSD, or even make money from it. >> >> -- >> Damon McDougall >> http://damon-is-a-geek.com >> B2.39 >> Mathematics Institute >> University of Warwick >> Coventry >> West Midlands >> CV4 7AL >> United Kingdom >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From scott.sinclair.za at gmail.com Mon Aug 6 05:48:45 2012 From: scott.sinclair.za at gmail.com (Scott Sinclair) Date: Mon, 6 Aug 2012 11:48:45 +0200 Subject: [Numpy-discussion] how to uninstall numpy In-Reply-To: <87ehnktmq5.fsf@cica.cica> References: <87ehnktmq5.fsf@cica.cica> Message-ID: On 6 August 2012 11:04, Petro wrote: > This is a general python question but I will ask it here. To > install a new numpy on Debian testing I remove installed version with > "aptitude purge python-numpy" download numpy source code and install > numpy with "sudo python setup.py install". If I want to remove the installed > numpy how do I proceed? Assuming your system Python is 2.7, your numpy should have been installed in /usr/local/lib/python2.7/site-packages/ (or /usr/local/lib/python2.7/dist-packages/ as on Ubuntu?) So something along these lines: $ sudo rm -rf /usr/local/lib/python2.7/site-packages/numpy/ $ sudo rm -rf /usr/local/lib/python2.7/site-packages/numpy-*.egg* $ sudo rm -rf /usr/local/bin/f2py Cheers, Scott From aclark at aclark.net Mon Aug 6 14:07:55 2012 From: aclark at aclark.net (Alex Clark) Date: Mon, 06 Aug 2012 14:07:55 -0400 Subject: [Numpy-discussion] how to uninstall numpy In-Reply-To: References: <87ehnktmq5.fsf@cica.cica> Message-ID: On 8/6/12 5:48 AM, Scott Sinclair wrote: > On 6 August 2012 11:04, Petro wrote: >> This is a general python question but I will ask it here. To >> install a new numpy on Debian testing I remove installed version with >> "aptitude purge python-numpy" download numpy source code and install >> numpy with "sudo python setup.py install". If I want to remove the installed >> numpy how do I proceed? > > Assuming your system Python is 2.7, your numpy should have been > installed in /usr/local/lib/python2.7/site-packages/ (or > /usr/local/lib/python2.7/dist-packages/ as on Ubuntu?) > > So something along these lines: > > $ sudo rm -rf /usr/local/lib/python2.7/site-packages/numpy/ > $ sudo rm -rf /usr/local/lib/python2.7/site-packages/numpy-*.egg* > $ sudo rm -rf /usr/local/bin/f2py Or if you have pip installed (easy_install pip) you can: $ pip uninstall numpy (it will uninstall things it hasn't installed, which I think should include the console_script f2py?) Alex > > Cheers, > Scott > -- Alex Clark ? 
http://pythonpackages.com/ONE_CLICK From robert.kern at gmail.com Mon Aug 6 15:31:12 2012 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 6 Aug 2012 21:31:12 +0200 Subject: [Numpy-discussion] Licensing question In-Reply-To: <02567B46-AB92-42E3-87D9-02A496D39357@molden.no> References: <20120802204453.GI363@quagmire.local> <49821359-DF1F-4CDF-841C-26F314D20E28@continuum.io> <02567B46-AB92-42E3-87D9-02A496D39357@molden.no> Message-ID: Those are not the original Fortran sources. The original Fortran sources are in the public domain as work done by a US federal employee. http://www.netlib.org/fftpack/ Never trust the license of any code on John Burkardt's site. Track it down to the original sources. On Monday, August 6, 2012, Sturla Molden wrote: > But the Fortran FFTPACK is GPL, or has the licence been changed? > > http://people.sc.fsu.edu/~jburkardt/f77_src/fftpack5.1/fftpack5.1.html > > Sturla > > Sendt fra min iPad > > Den 3. aug. 2012 kl. 07:52 skrev Travis Oliphant > >: > > > This should be completely fine. The fftpack.h file indicates that > fftpack code came from Tela originally anyway and was translated from the > Fortran code FFTPACK. > > > > Good luck with your project. > > > > -Travis > > > > > > On Aug 2, 2012, at 3:44 PM, Damon McDougall wrote: > > > >> Hi, > >> > >> I have a question about the licence for NumPy's codebase. I am currently > >> writing a library and I'd like to release under some BSD-type licence. > >> Unfortunately, my choice to link against MIT's FFTW library (released > >> under the GPL) means that, in its current state, this is not possible. > >> I'm an avid NumPy user and thought to myself that, since NumPy's licence > >> is BSD, I'd be able to use some of the source code (with due credit, of > >> course) instead of FFTW. Is this possible? I mean, can I redistribute > >> *PART* of NumPy's codebase? Namely, the fftpack.c file? I was under the > >> impression that I could only redistribute BSD source code as a whole and > >> then I read the licence more carefully and it states that I can modify > >> the source to suit my needs. I consider 'redistributing a single file > >> and ignoring the other files' as a 'modification' under the BSD > >> definition, but maybe I'm thinking too wishfully here. > >> > >> Any information on this matter would be greatly appreciated since I am a > >> total code licence noob. > >> > >> Thank you. > >> > >> P.S. Yes, I know I could just release under the GPL, but I don't want to > >> turn people off of packaging my work into a useful product licensed > >> under BSD, or even make money from it. > >> > >> -- > >> Damon McDougall > >> http://damon-is-a-geek.com > >> B2.39 > >> Mathematics Institute > >> University of Warwick > >> Coventry > >> West Midlands > >> CV4 7AL > >> United Kingdom > >> _______________________________________________ > >> NumPy-Discussion mailing list > >> NumPy-Discussion at scipy.org > >> http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > http://mail.scipy.org/mailman/listinfo/numpy-discussion > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -- Robert Kern -------------- next part -------------- An HTML attachment was scrubbed... 
From thomas.p.krauss at gmail.com Mon Aug 6 23:51:46 2012 From: thomas.p.krauss at gmail.com (Tom Krauss) Date: Mon, 6 Aug 2012 22:51:46 -0500 Subject: [Numpy-discussion] question about scipy superpack Message-ID: Hi, I got a new job, and a new mac book pro on which I just installed Mac OS X 10.8. I need to run SWIG to generate a shared object from C++ source that works with numpy.i. I'm considering installing the Scipy Superpack, but I have a question. If I install the Scipy Superpack, which has most of the packages I need, plus some others, will it be able to find "numpy/arrayobject.h" or do I need to install numpy source and build it myself? In other words, does "numpy-1.8.0.dev_f2f0ac0_20120725-py2.7-macosx-10.8-x86_64.egg" have the source files needed by gcc to compile the swig-generated C++ wrapper? Cheers, Tom From andyfaff at gmail.com Tue Aug 7 02:15:39 2012 From: andyfaff at gmail.com (Andrew Nelson) Date: Tue, 7 Aug 2012 16:15:39 +1000 Subject: [Numpy-discussion] building numpy 1.6.2 on OSX 10.6 / Python2.7.3 Message-ID: Dear list, I am trying to build numpy 1.6.2 from source but am running up against a few problems. Platform: OSX10.6.8 Python: 2.7.3 (compiled using gcc 4.2.1) gcc: 4.2.1 gfortran: 4.2.1 I try the normal build sequence: python setup.py build sudo python setup.py install However, when I try to import numpy I get: >>> import numpy Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/__init__.py", line 137, in <module> import add_newdocs File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/add_newdocs.py", line 9, in <module> from numpy.lib import add_newdoc File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/lib/__init__.py", line 4, in <module> from type_check import * File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/lib/type_check.py", line 8, in <module> import numpy.core.numeric as _nx File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/__init__.py", line 5, in <module> import multiarray ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/multiarray.so, 2): Symbol not found: _npy_ceil Referenced from: /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/multiarray.so Expected in: flat namespace in /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/multiarray.so The numpy source was from the Sourceforge official page. When I run nm on the multiarray module I get: %nm /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/multiarray.so ....... U _npy_ceil U _npy_double_to_half U _npy_doublebits_to_halfbits U _npy_float_to_half U _npy_floatbits_to_halfbits U _npy_half_isnan U _npy_half_iszero U _npy_half_le U _npy_half_lt_nonan U _npy_half_to_double U _npy_half_to_float U _npy_halfbits_to_doublebits U _npy_halfbits_to_floatbits So it seems that the _npy_ceil symbol is undefined. I looked at /build/src.macosx-10.6-intel-2.7/numpy/core/include/numpy/config.h and it contains: #define HAVE_CEIL Am I doing something wrong? regards, Andrew -- _____________________________________ Dr. Andrew Nelson _____________________________________
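One quick check before rebuilding (the build-tree path below is a guess -- adjust it to your tree): see whether npy_ceil was ever compiled into the static npymath library that multiarray is supposed to link against, e.g.

$ nm build/temp.macosx-10.6-intel-2.7/libnpymath.a | grep npy_ceil

A "T _npy_ceil" line there would mean the symbol was built and the failure is in the link step; no output would point at the npymath build itself.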
From scott.sinclair.za at gmail.com Tue Aug 7 03:19:40 2012 From: scott.sinclair.za at gmail.com (Scott Sinclair) Date: Tue, 7 Aug 2012 09:19:40 +0200 Subject: [Numpy-discussion] how to uninstall numpy In-Reply-To: References: <87ehnktmq5.fsf@cica.cica> Message-ID: On 6 August 2012 20:07, Alex Clark wrote: > On 8/6/12 5:48 AM, Scott Sinclair wrote: >> On 6 August 2012 11:04, Petro wrote: >>> This is a general python question but I will ask it here. To >>> install a new numpy on Debian testing I remove installed version with >>> "aptitude purge python-numpy" download numpy source code and install >>> numpy with "sudo python setup.py install". If I want to remove the installed >>> numpy how do I proceed? >> >> Assuming your system Python is 2.7, your numpy should have been >> installed in /usr/local/lib/python2.7/site-packages/ (or >> /usr/local/lib/python2.7/dist-packages/ as on Ubuntu?) >> >> So something along these lines: >> >> $ sudo rm -rf /usr/local/lib/python2.7/site-packages/numpy/ >> $ sudo rm -rf /usr/local/lib/python2.7/site-packages/numpy-*.egg* >> $ sudo rm -rf /usr/local/bin/f2py > > Or if you have pip installed (easy_install pip) you can: > > $ pip uninstall numpy > > (it will uninstall things it hasn't installed, which I think should include the console_script f2py?) Unfortunately that won't work in this case. If pip wasn't used to install the package it has no way to know what's been installed. That information is stored in "site-packages/package-ver-pyver.egg-info/installed-files.txt" which doesn't exist if pip isn't used for the install. Cheers, Scott From pgmdevlist at gmail.com Tue Aug 7 08:00:58 2012 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 7 Aug 2012 14:00:58 +0200 Subject: [Numpy-discussion] building numpy 1.6.2 on OSX 10.6 / Python2.7.3 In-Reply-To: References: Message-ID: Andrew, I'm afraid you did. It's generally considered a "very bad idea"(?) to install NumPy on a recent OSX system without specifying a destination. By default, the process will try to install on /Library/Frameworks/Python, overwriting the pre-installed version of NumPy that comes with your machine. You probably don't want to do that. However, using either the --user flag or a virtual environment ( http://www.virtualenv.org/ ) works pretty well. E.g. `python setup.py install --user` should install numpy in a ~/.local directory, you'll just have to update your PYTHONPATH. Good luck -- Pierre GM On Tuesday, August 7, 2012 at 08:15 , Andrew Nelson wrote: Dear list, I am trying to build numpy 1.6.2 from source but am running up against a few problems.
Platform: OSX10.6.8 Python: 2.7.3 (compiled using gcc 4.2.1) gcc: 4.2.1 gfortran: 4.2.1 I try the normal build sequence: python setup.py build sudo python setup.py install However, when I try to import numpy I get: >>> import numpy Traceback (most recent call last): File "", line 1, in File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/__init__.py", line 137, in import add_newdocs File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/add_newdocs.py", line 9, in from numpy.lib import add_newdoc File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/lib/__init__.py", line 4, in from type_check import * File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/lib/type_check.py", line 8, in import numpy.core.numeric as _nx File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/__init__.py", line 5, in import multiarray ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/multiarray.so, 2): Symbol not found: _npy_ceil Referenced from: /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/multiarray.so Expected in: flat namespace in /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/multiarray.so The numpy source was from the Sourceforge official page. When I run nm on the multiarray module I get: %nm /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/multiarray.so ....... U _npy_ceil U _npy_double_to_half U _npy_doublebits_to_halfbits U _npy_float_to_half U _npy_floatbits_to_halfbits U _npy_half_isnan U _npy_half_iszero U _npy_half_le U _npy_half_lt_nonan U _npy_half_to_double U _npy_half_to_float U _npy_halfbits_to_doublebits U _npy_halfbits_to_floatbits So it seems that the _npy_ceil symbol is undefined. I looked at /build/src.macosx-10.6-intel-2.7/numpy/core/include/numpy/config.h and it contains: #define HAVE_CEIL Am I doing something wrong? regards, Andrew -- _____________________________________ Dr. Andrew Nelson _____________________________________ _______________________________________________ NumPy-Discussion mailing list NumPy-Discussion at scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion -------------- next part -------------- An HTML attachment was scrubbed... URL: From jmagosta at us.toyota-itc.com Tue Aug 7 10:48:04 2012 From: jmagosta at us.toyota-itc.com (John Mark Agosta) Date: Tue, 7 Aug 2012 10:48:04 -0400 Subject: [Numpy-discussion] how to uninstall numpy In-Reply-To: References: <87ehnktmq5.fsf@cica.cica> Message-ID: Here's a good article on the vagaries of python paths when installing a new python. Thus you can check exactly how python finds its modules, to assure the new install is working properly: https://www.usenix.org/publications/login/august-2012-volume-37-number-4/import John Mark Agosta jmagosta at us.toyota-itc.com TOYOTA InfoTechnology Center USA www.us.toyota-itc.com 465 Bernardo Avenue, Mountain View, CA 94043 Phone: (650) 694-4150 Fax: (650) 694-4901 On Aug 6, 2012, at 2:07 PM, Alex Clark wrote: > On 8/6/12 5:48 AM, Scott Sinclair wrote: >> On 6 August 2012 11:04, Petro wrote: >>> This is a general python question but I will ask it here. 
To >>> install a new numpy on Debian testing I remove installed version with >>> "aptitude purge python-numpy" download numpy source code and install >>> numpy with "sudo python setup.py install". If I want to remove the installed >>> numpy how do I proceed? >> >> Assuming your system Python is 2.7, your numpy should have been >> installed in /usr/local/lib/python2.7/site-packages/ (or >> /usr/local/lib/python2.7/dist-packages/ as on Ubuntu?) >> >> So something along these lines: >> >> $ sudo rm -rf /usr/local/lib/python2.7/site-packages/numpy/ >> $ sudo rm -rf /usr/local/lib/python2.7/site-packages/numpy-*.egg* >> $ sudo rm -rf /usr/local/bin/f2py > > Or if you have pip installed (easy_install pip) you can: > > $ pip uninstall numpy > > (it will uninstall things it hasn't installed, which I think should > include the console_script f2py?) > > > Alex > > >> >> Cheers, >> Scott >> > > -- > Alex Clark ? http://pythonpackages.com/ONE_CLICK > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From chris.barker at noaa.gov Tue Aug 7 11:35:42 2012 From: chris.barker at noaa.gov (Chris Barker) Date: Tue, 7 Aug 2012 08:35:42 -0700 Subject: [Numpy-discussion] question about scipy superpack In-Reply-To: References: Message-ID: On Mon, Aug 6, 2012 at 8:51 PM, Tom Krauss wrote: > I got a new job, and a new mac book pro on which I just installed Mac OS X > 10.8. congrats -- on the job, and on an employer that gets you a mac! > I need to run SWIG to generate a shared object from C++ source that works > with numpy.i. I'm considering installing the Scipy Superpack, but I have a > question. If I install the Scipy Superpack, which has most of the packages > I need, plus some others, will it be able to find "numpy/arrayobject.h" It's probably there, yes, and you should be able to find it with: numpy.get_include() (use that in your setup.py) > the source files needed by gcc to compile the swig-generated C++ wrapper? The trick here is which gcc -- Apple is fast to move forward and is on the bleeding edge with gcc -- the latest XCode uses LLVM, which is not compatible with older Python builds. I *think* the superpack is built against the python.org python builds (32 bit?) Anyway, the python.org 32 bit build requires an older gcc for building extensions -- you can get XCode 3 from Apple Developer connection if you dig for it -- it works fine on 10.7, I hope it does on 10.8. I'm not totally sure about the 32/64 bit Intel build. The pythonmac list will be a help here. Good luck, -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From chris.barker at noaa.gov Tue Aug 7 11:40:41 2012 From: chris.barker at noaa.gov (Chris Barker) Date: Tue, 7 Aug 2012 08:40:41 -0700 Subject: [Numpy-discussion] building numpy 1.6.2 on OSX 10.6 / Python2.7.3 In-Reply-To: References: Message-ID: On Tue, Aug 7, 2012 at 5:00 AM, Pierre GM wrote: > It's generally considered a "very bad idea"(?) to install NumPy on a recent > OSX system without specifying a destination. By default, the process will > try to install on /Library/Frameworks/Python, overwriting the pre-installed > version of NumPy that comes with your machine.
Indeed, you want to be careful about this, but Apple puts theirs in: /System/Library/Frameworks/Python.framework/ /Library/Frameworks/... is the default location for Python.org builds -- and a fine place to put it (though you might have clashes if you try to install binaries built for the python.org builds). But I wonder if this has anything to do with the OP's problem anyway... Sorry I'm not more help -- I've managed to avoid building python myself so far. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From totonixsame at gmail.com Tue Aug 7 12:05:24 2012 From: totonixsame at gmail.com (Thiago Franco Moraes) Date: Tue, 7 Aug 2012 13:05:24 -0300 Subject: [Numpy-discussion] question about scipy superpack In-Reply-To: References: Message-ID: A little off-topic, but related: Which python version do you recommend to install in Mac OS X 10.8? The native one? The one from python.org? or the one compiled via homebrew? And do you think it's better to use the 32 or 64 bits? Thanks! On Tue, Aug 7, 2012 at 12:35 PM, Chris Barker wrote: > On Mon, Aug 6, 2012 at 8:51 PM, Tom Krauss wrote: >> I got a new job, and a new mac book pro on which I just installed Mac OS X >> 10.8. > > congrats -- on the job, and on an employer that gets you a mac!
> > > I need to run SWIG to generate a shared object from C++ source that works > > with numpy.i. I'm considering installing the Scipy Superpack, but I > have a > > question. If I install the Scipy Superpack, which has most of the > packages > > I need, plus some others, will it be able to find "numpy/arrayobject.h" > > It's probably there, yes, and you should be able to find it with: > > numpy.get_include() > > (use that in your setup.py) > > > the source files needed by gcc to compile the swig-generated C++ wrapper? > > The trick here is which gcc -- Apple is fast to move forward, is on > the bleeding edge with gcc -- the latest XCode uses LLVM, which is not > compatible with older Python builds. > > I *think* the superpack is build against the pyton.org python builds (32 > bit?) > No, it says at http://fonnesbeck.github.com/ScipySuperpack/ that it's built against 64-bit Apple Python. Ralf > Anyway, the python,org 32 bit build requires an older gcc for building > extensions -- you can get XCode 3from Apple Developer connection if > you dig for it -- it works fine on 10.7, I hope it does on 10.8. > > I'm not totally sure about the 32/64 bit Intel build. > > The pythonmac list will be a help here. > > Good luck, > > -Chris > > > > > -- > > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Tue Aug 7 14:07:35 2012 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 7 Aug 2012 20:07:35 +0200 Subject: [Numpy-discussion] question about scipy superpack In-Reply-To: References: Message-ID: On Tue, Aug 7, 2012 at 6:05 PM, Thiago Franco Moraes wrote: > A little off-topic, but related: Which python version do you recommend > to install in Mac OS X 10.8? The native one? The one from python.org? > or the one compiled via homebrew? And do you think it's better to use > the 32 or 64 bits? > Depends on what you want to do / what packages you want to use. Perhaps python.org + official installers (dmgs from Sourceforge), perhaps EPD / SciPy Superpack / .... Without knowing more, I would just advise to not use Apple Python, and to use binary installers (10.8 is so fresh, you'll likely run into a few issues with source installs). Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.p.krauss at gmail.com Tue Aug 7 14:24:09 2012 From: thomas.p.krauss at gmail.com (Tom Krauss) Date: Tue, 7 Aug 2012 13:24:09 -0500 Subject: [Numpy-discussion] question about scipy superpack In-Reply-To: References: Message-ID: I'm on 10.8 and am using the Apple Mac OS X Mountain Lion python (2.7.2). 
Here's what I ended up doing, FWIW: - I installed "pip" (sudo easy_install pip) - I installed virtualenv - created a new virtual environment [recommended since superpack installs a bunch of development versions of the packages and updates fairly often] - ran the scipy superpack install script - it installed DateUtils 0.5.2, which is way old - I removed it and installed 1.5 instead (with easy_install python-dateutil==1.5) Result: ipython and SWIG are now running just fine for my code, but I got some errors in the scipy tests which I need to follow up on. Also, I got a message that gfortran failed to install because I didn't sudo; I thought I didn't need to, since I was installing into a virtual environment. Not sure if the scipy errors are related to gfortran missing. Thanks to Mr. Chris Fonnesbeck for publishing the Scipy Superpack, you saved me a ton of time! The answer to my specific question is that yes, the arrayobject.h header is included in the numpy egg, which is easy to see since eggs are really just directories (last night I thought they were some kind of binary)! A further note: I had to change ipython's "pylab" setting to "osx". On Tue, Aug 7, 2012 at 1:07 PM, Ralf Gommers wrote: > > > On Tue, Aug 7, 2012 at 6:05 PM, Thiago Franco Moraes < > totonixsame at gmail.com> wrote: > >> A little off-topic, but related: Which python version do you recommend >> to install in Mac OS X 10.8? The native one? The one from python.org? >> or the one compiled via homebrew? And do you think it's better to use >> the 32 or 64 bits? >> > > Depends on what you want to do / what packages you want to use. Perhaps > python.org + official installers (dmgs from Sourceforge), perhaps EPD / > SciPy Superpack / .... > > Without knowing more, I would just advise to not use Apple Python, and to > use binary installers (10.8 is so fresh, you'll likely run into a few > issues with source installs). > > Ralf > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > From njs at pobox.com Tue Aug 7 19:34:16 2012 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 8 Aug 2012 00:34:16 +0100 Subject: [Numpy-discussion] Indexing API In-Reply-To: <33C105EF-9C05-45D2-80E5-7B49F8F76F09@continuum.io> References: <33C105EF-9C05-45D2-80E5-7B49F8F76F09@continuum.io> Message-ID: On Thu, Jul 19, 2012 at 2:53 PM, Travis Oliphant wrote: > > On Jul 19, 2012, at 3:50 AM, Nathaniel Smith wrote: > >> So the underlying problem with the controversial inplace_increment >> PR[1] is that currently, there's actually nothing in the public numpy >> API that exposes the workings of numpy indexing. The only thing you >> can do with a numpy index is call ndarray.__getattr__ or __setattr__. >> This is a pretty obvious gap, given how fundamental an operation >> indexing is in numpy (and how difficult to emulate). So how can we >> expose something that fixes it? Make PyArrayMapIterObject part of the >> public API? Something else? > > I think you meant ndarray.__getitem__ and ndarray.__setitem__ > > As I mentioned in the comments, the original intention was to make PyArrayMapIterObject part of the public API. However, I was not able to make it work in the way I had intended back then. > > Exposing the MapIterObject is a good idea (but it would have to be exposed already bound to an array) --- i.e.
you create a new API that binds to a particular array and then expose the PyArray_MapIterNext, etc. functions. > > Perhaps something like: PyArray_MapIterArray There's now a PR for exposing this: https://github.com/numpy/numpy/pull/377 Since this is new API I hope people will take a look :-). The patch itself is pretty trivial, but it exposes an object that seems to have been only partially implemented, so we should also double-check that this isn't exposing any half-baked code. (mapping.c still says "Do not expose the MapIter_Type to Python", but I'm not really clear what the problems are. AFAICT it doesn't actually define *any* Python-accessible API, it's just an opaque object.) -n From njs at pobox.com Tue Aug 7 19:55:32 2012 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 8 Aug 2012 00:55:32 +0100 Subject: [Numpy-discussion] Licensing question In-Reply-To: References: <20120802204453.GI363@quagmire.local> <49821359-DF1F-4CDF-841C-26F314D20E28@continuum.io> <02567B46-AB92-42E3-87D9-02A496D39357@molden.no> Message-ID: On Mon, Aug 6, 2012 at 8:31 PM, Robert Kern wrote: > Those are not the original Fortran sources. The original Fortran sources are > in the public domain as work done by a US federal employee. > > http://www.netlib.org/fftpack/ > > Never trust the license of any code on John Burkardt's site. Track it down > to the original sources. Taken together, what those websites seem to be claiming is that you have a choice of buggy BSD code or fixed GPL code? I assume someone has already taken the appropriate measures for numpy, but it seems like an unfortunate situation... -n From andyfaff at gmail.com Wed Aug 8 01:15:45 2012 From: andyfaff at gmail.com (Andrew Nelson) Date: Wed, 8 Aug 2012 15:15:45 +1000 Subject: [Numpy-discussion] building numpy 1.6.2 on OSX 10.6 / Python2.7.3 Message-ID: Dear Pierre, as indicated yesterday OSX system python is in: /System/Library/Frameworks/Python.framework/ I am installing into: /Library/Frameworks/Python.framework/Versions/Current/lib/python2.7/site-packages This should not present a problem and does not explain why numpy does not build/import correctly on my setup. regards, Andrew. > Date: Tue, 7 Aug 2012 14:00:58 +0200 > From: Pierre GM > Subject: Re: [Numpy-discussion] building numpy 1.6.2 on OSX 10.6 / > Python2.7.3 > > Andrew, > > I'm afraid you did. > > It's generally considered a "very bad idea"(?) to install NumPy on a recent > OSX system without specifying a destination. By default, the process will > try to install on /Library/Frameworks/Python, overwriting the pre-installed > version of NumPy that comes with your machine. You probably don't want to > do that. > > However, using either the --user flag or a virtual environment ( > http://www.virtualenv.org/ ) works > pretty well. EG > > `python setup.py install --user` should install bumpy in a ~/.local > directory, you'll just have to update your PYTHONPATH > > Good luck > > -- > > Pierre GM > > On Tuesday, August 7, 2012 at 08:15 , Andrew Nelson wrote: > > Dear list, > > I am trying to build numpy 1.6.2 from source but am running up against a > few problems. 
> > Platform: OSX10.6.8 > > Python: 2.7.3 (compiled using gcc 4.2.1) > > gcc: 4.2.1 > > gfortran: 4.2.1 > > I try the normal build sequence: > > python setup.py build > > sudo python setup.py install > > However, when I try to import numpy I get: > > >>> import numpy > > Traceback (most recent call last): > > File "", line 1, in > > File > > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/__init__.py", > line 137, in > > import add_newdocs > > File > > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/add_newdocs.py", > line 9, in > > from numpy.lib import add_newdoc > > File > > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/lib/__init__.py", > line 4, in > > from type_check import * > > File > > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/lib/type_check.py", > line 8, in > > import numpy.core.numeric as _nx > > File > > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/__init__.py", > line 5, in > > import multiarray > > ImportError: > > dlopen(/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/multiarray.so, > 2): Symbol not found: _npy_ceil > > Referenced from: > > /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/multiarray.so > > Expected in: flat namespace > > in > > /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/multiarray.so > > The numpy source was from the Sourceforge official page. > > When I run nm on the multiarray module I get: > > %nm > /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/multiarray.so > > ....... > > U _npy_ceil > > U _npy_double_to_half > > U _npy_doublebits_to_halfbits > > U _npy_float_to_half > > U _npy_floatbits_to_halfbits > > U _npy_half_isnan > > U _npy_half_iszero > > U _npy_half_le > > U _npy_half_lt_nonan > > U _npy_half_to_double > > U _npy_half_to_float > > U _npy_halfbits_to_doublebits > > U _npy_halfbits_to_floatbits > > So it seems that the _npy_ceil symbol is undefined. I looked at > /build/src.macosx-10.6-intel-2.7/numpy/core/include/numpy/config.h and it > contains: > > #define HAVE_CEIL > > Am I doing something wrong? > > regards, > > Andrew > > -- > _____________________________________ > Dr. Andrew Nelson > > _____________________________________ > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > http://mail.scipy.org/pipermail/numpy-discussion/attachments/20120807/24322fc4/attachment.html > > ------------------------------ > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > End of NumPy-Discussion Digest, Vol 71, Issue 11 > ************************************************ > -- _____________________________________ Dr. Andrew Nelson _____________________________________ -------------- next part -------------- An HTML attachment was scrubbed... 
From cournape at gmail.com Wed Aug 8 03:43:18 2012 From: cournape at gmail.com (David Cournapeau) Date: Wed, 8 Aug 2012 08:43:18 +0100 Subject: [Numpy-discussion] building numpy 1.6.2 on OSX 10.6 / Python2.7.3 In-Reply-To: References: Message-ID:

On Wed, Aug 8, 2012 at 6:15 AM, Andrew Nelson wrote:
> Dear Pierre,
> as indicated yesterday OSX system python is in:
>
> /System/Library/Frameworks/Python.framework/
>
> I am installing into:
>
> /Library/Frameworks/Python.framework/Versions/Current/lib/python2.7/site-packages
>
> This should not present a problem and does not explain why numpy does not
> build/import correctly on my setup.

Please give us the build log (rebuilding from scratch, so that we have the complete log); then we can get a better idea of the issue.

David

From damon.mcdougall at gmail.com Wed Aug 8 04:37:02 2012 From: damon.mcdougall at gmail.com (Damon McDougall) Date: Wed, 8 Aug 2012 09:37:02 +0100 Subject: [Numpy-discussion] Licensing question In-Reply-To: References: <20120802204453.GI363@quagmire.local> <49821359-DF1F-4CDF-841C-26F314D20E28@continuum.io> <02567B46-AB92-42E3-87D9-02A496D39357@molden.no> Message-ID: <20120808083702.GL35755@host-57-93.warwick.ac.uk>

On Wed, Aug 08, 2012 at 12:55:32AM +0100, Nathaniel Smith wrote:
> On Mon, Aug 6, 2012 at 8:31 PM, Robert Kern wrote:
> > Those are not the original Fortran sources. The original Fortran sources are
> > in the public domain as work done by a US federal employee.
> >
> > http://www.netlib.org/fftpack/
> >
> > Never trust the license of any code on John Burkardt's site. Track it down
> > to the original sources.
>
> Taken together, what those websites seem to be claiming is that you
> have a choice of buggy BSD code or fixed GPL code? I assume someone
> has already taken the appropriate measures for numpy, but it seems
> like an unfortunate situation...
>
> -n

Wow. I'd like to thank everyone that responded. There were some helpful suggestions. For what it's worth, I decided not to use numpy's fft code, nor libfftpack. I decided to use kissFFT instead since it has support for multidimensional transforms. However, I did decide to use numpy's random number routines. It didn't take me long to package it up and the advice given here was still useful. Thanks again.

--
Damon McDougall
http://damon-is-a-geek.com
B2.39 Mathematics Institute
University of Warwick
Coventry, West Midlands
CV4 7AL
United Kingdom

From cournape at gmail.com Wed Aug 8 05:34:45 2012 From: cournape at gmail.com (David Cournapeau) Date: Wed, 8 Aug 2012 10:34:45 +0100 Subject: [Numpy-discussion] Licensing question In-Reply-To: References: <20120802204453.GI363@quagmire.local> <49821359-DF1F-4CDF-841C-26F314D20E28@continuum.io> <02567B46-AB92-42E3-87D9-02A496D39357@molden.no> Message-ID:

On Wed, Aug 8, 2012 at 12:55 AM, Nathaniel Smith wrote:
> On Mon, Aug 6, 2012 at 8:31 PM, Robert Kern wrote:
>> Those are not the original Fortran sources. The original Fortran sources are
>> in the public domain as work done by a US federal employee.
>>
>> http://www.netlib.org/fftpack/
>>
>> Never trust the license of any code on John Burkardt's site. Track it down
>> to the original sources.
>
> Taken together, what those websites seem to be claiming is that you
> have a choice of buggy BSD code or fixed GPL code? I assume someone
> has already taken the appropriate measures for numpy, but it seems
> like an unfortunate situation...

If the code on John Burkardt website is based on the netlib codebase, he is not entitled to make it GPL unless he is the sole copyright holder of the original code. I think the 'real' solution is to have a separate package linking to FFTW for people with 'advanced' needs for FFT. None of the other library I have looked at so far are usable, fast and precise enough when you go far from the simple case of double precision and 'well factored' size. regards, David From robert.kern at gmail.com Wed Aug 8 05:53:00 2012 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 8 Aug 2012 10:53:00 +0100 Subject: [Numpy-discussion] Licensing question In-Reply-To: References: <20120802204453.GI363@quagmire.local> <49821359-DF1F-4CDF-841C-26F314D20E28@continuum.io> <02567B46-AB92-42E3-87D9-02A496D39357@molden.no> Message-ID: On Wed, Aug 8, 2012 at 10:34 AM, David Cournapeau wrote: > On Wed, Aug 8, 2012 at 12:55 AM, Nathaniel Smith wrote: >> On Mon, Aug 6, 2012 at 8:31 PM, Robert Kern wrote: >>> Those are not the original Fortran sources. The original Fortran sources are >>> in the public domain as work done by a US federal employee. >>> >>> http://www.netlib.org/fftpack/ >>> >>> Never trust the license of any code on John Burkardt's site. Track it down >>> to the original sources. >> >> Taken together, what those websites seem to be claiming is that you >> have a choice of buggy BSD code or fixed GPL code? I assume someone >> has already taken the appropriate measures for numpy, but it seems >> like an unfortunate situation... > > If the code on John Burkardt website is based on the netlib codebase, > he is not entitled to make it GPL unless he is the sole copyright > holder of the original code. He can certainly incorporate the public domain code and rerelease it under whatever restrictions he likes, especially if he adds to it, which appears to be the case. The original sources are legitimately public domain, not just released under a liberal copyright license. He can't "remove" the original code from the public domain, but that's not what he claims to have done. > I think the 'real' solution is to have a separate package linking to > FFTW for people with 'advanced' needs for FFT. None of the other > library I have looked at so far are usable, fast and precise enough > when you go far from the simple case of double precision and 'well > factored' size. http://pypi.python.org/pypi/pyFFTW -- Robert Kern From cournape at gmail.com Wed Aug 8 06:50:32 2012 From: cournape at gmail.com (David Cournapeau) Date: Wed, 8 Aug 2012 11:50:32 +0100 Subject: [Numpy-discussion] Licensing question In-Reply-To: References: <20120802204453.GI363@quagmire.local> <49821359-DF1F-4CDF-841C-26F314D20E28@continuum.io> <02567B46-AB92-42E3-87D9-02A496D39357@molden.no> Message-ID: On Wed, Aug 8, 2012 at 10:53 AM, Robert Kern wrote: > On Wed, Aug 8, 2012 at 10:34 AM, David Cournapeau wrote: >> On Wed, Aug 8, 2012 at 12:55 AM, Nathaniel Smith wrote: >>> On Mon, Aug 6, 2012 at 8:31 PM, Robert Kern wrote: >>>> Those are not the original Fortran sources. The original Fortran sources are >>>> in the public domain as work done by a US federal employee. >>>> >>>> http://www.netlib.org/fftpack/ >>>> >>>> Never trust the license of any code on John Burkardt's site. Track it down >>>> to the original sources. >>> >>> Taken together, what those websites seem to be claiming is that you >>> have a choice of buggy BSD code or fixed GPL code? 
I assume someone >>> has already taken the appropriate measures for numpy, but it seems >>> like an unfortunate situation... >> >> If the code on John Burkardt website is based on the netlib codebase, >> he is not entitled to make it GPL unless he is the sole copyright >> holder of the original code. > > He can certainly incorporate the public domain code and rerelease it > under whatever restrictions he likes, especially if he adds to it, > which appears to be the case. The original sources are legitimately > public domain, not just released under a liberal copyright license. He > can't "remove" the original code from the public domain, but that's > not what he claims to have done. > >> I think the 'real' solution is to have a separate package linking to >> FFTW for people with 'advanced' needs for FFT. None of the other >> library I have looked at so far are usable, fast and precise enough >> when you go far from the simple case of double precision and 'well >> factored' size. > > http://pypi.python.org/pypi/pyFFTW Nice, I am starting to get out of touch with too many packages... Would be nice to add DCT and DST support to it. David From dave.hirschfeld at gmail.com Wed Aug 8 09:38:12 2012 From: dave.hirschfeld at gmail.com (Dave Hirschfeld) Date: Wed, 8 Aug 2012 13:38:12 +0000 (UTC) Subject: [Numpy-discussion] =?utf-8?q?Bug_in_as=5Fstrided/reshape?= Message-ID: It seems that reshape doesn't work correctly on an array which has been resized using the 0-stride trick e.g. In [73]: x = array([5]) In [74]: y = as_strided(x, shape=(10,), strides=(0,)) In [75]: y Out[75]: array([5, 5, 5, 5, 5, 5, 5, 5, 5, 5]) In [76]: y.reshape([10,1]) Out[76]: array([[ 5], [ 8], [ 762933412], [-2013265919], [ 26], [ 64], [ 762933414], [-2013244356], [ 26], [ 64]]) <================ Should all be 5???????? In [77]: y.copy().reshape([10,1]) Out[77]: array([[5], [5], [5], [5], [5], [5], [5], [5], [5], [5]]) In [78]: np.__version__ Out[78]: '1.6.2' Perhaps a clause such as below is required in reshape? if any(stride == 0 for stride in y.strides): return y.copy().reshape(shape) else: return y.reshape(shape) Regards, Dave From nicolas.aunai at gmail.com Wed Aug 8 09:47:15 2012 From: nicolas.aunai at gmail.com (nicolas aunai) Date: Wed, 8 Aug 2012 09:47:15 -0400 Subject: [Numpy-discussion] nested loops too slow Message-ID: Hi, I'm trying to write a code for doing a 2D integral. It works well when I'm doing it with normal "for" loops, but requires two nested loops, and is much too slow for my application. I would like to know if it is possible to do it faster, for example with fancy indexing and the use of numpy.cumsum(). I couln't find a solution, do you have an idea ? The code is the following : http://bpaste.net/show/cAkMBd3sUmhDXq0sIpZ5/ 'flux2' is the result of the calculation with 'for' loops implementation, and 'flux' is supposed to be the same result without the loop. If I managed to do it for the single loops (line 23 is identical to lines 20-21, and line 30 is identical to line 27,28) and don't know how to do for the nested loops lines 33-35 (line 40 does not give the same result). Any idea ? Thanks much Nico From gandalf at shopzeus.com Wed Aug 8 10:19:04 2012 From: gandalf at shopzeus.com (Laszlo Nagy) Date: Wed, 08 Aug 2012 16:19:04 +0200 Subject: [Numpy-discussion] Is there a more efficient way to do this? Message-ID: <50227558.6050701@shopzeus.com> Is there a more efficient way to calculate the "slices" array below? import numpy import numpy.random # In reality, this is between 1 and 50. 
DIMENSIONS = 20
# In my real app, I have 100...1M data rows.
ROWS = 1000
DATA = numpy.random.random_integers(0,100,(ROWS,DIMENSIONS))
# This is between 0..DIMENSIONS-1
DRILLBY = 3
# Array of row indices that orders the data by the given dimension.
o = numpy.argsort(DATA[:,DRILLBY])
# Input of my task: the data ordered by the given dimension.
print DATA[o,DRILLBY]
#~ [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1
#~ 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2
#~ 2 3 3 3 3 3 3 3 4 4 4 4 4 4 4 4 4 4
#~ 4 4 4 4 5 5 5 5 5 5 5 5 5 5 6 6 6 6
#~ .... many more things here
#~ 96 96 96 97 97 97 97 97 97 97 97 97 98 98 98 98 98 98
#~ 99 99 99 99 99 99 99 99 99 99 99 99 99 99 99 99 100 100
#~ 100 100 100 100 100 100 100 100 100 100]
# Output of my task: determine slices for the same values on the DRILLBY dimension.
slices = []
prev_val = None
sidx = -1
# Dimension values for the given dimension.
fdv = DATA[:,DRILLBY]
# Go over the rows, sorted by values of didx
for oidx,rowidx in enumerate(o):
    val = fdv[rowidx]
    if val!=prev_val:
        if prev_val is None:
            prev_val = val
            sidx = oidx
        else:
            slices.append((prev_val,sidx,oidx))
            sidx = oidx
            prev_val = val
if (sidx>=0) and (sidx<ROWS):
    slices.append((val,sidx,ROWS))

I do not want to make copies of DATA, because it can be huge. The argsort is fast enough. I just need to create slices for different dimensions. The above code works, but it does a linear time search, implemented in pure Python code. For every iteration, Python code is executed. For 1 million rows, this is very slow. Is there a way to produce "slices" with numpy code? I could write C code for this, but I would prefer to do it with mass numpy operations.

Thanks,

Laszlo

From aron at ahmadia.net Wed Aug 8 2012 From: aron at ahmadia.net (Aron Ahmadia) Subject: Re: [Numpy-discussion] building numpy 1.6.2 on OSX 10.6 / Python2.7.3 In-Reply-To: References: Message-ID:

> `python setup.py install --user` should install numpy in a ~/.local
> directory, you'll just have to update your PYTHONPATH

As of Python 2.6/3.x, ~/.local is added before the system site directories but after Python's search paths and PYTHONPATH. See PEP 370 for more details if you're curious: http://www.python.org/dev/peps/pep-0370/

A

From aron at ahmadia.net Wed Aug 8 10:35:03 2012 From: aron at ahmadia.net (Aron Ahmadia) Date: Wed, 8 Aug 2012 17:35:03 +0300 Subject: [Numpy-discussion] building numpy 1.6.2 on OSX 10.6 / Python2.7.3 In-Reply-To: References: Message-ID:

FWIW, on my OS X 10.6.8 system, using a brew-installed python, and installing into /usr/local/lib, the symbols appear to be present:

$ nm /usr/local/lib/python2.7/site-packages/numpy/core/multiarray.so | grep ceil
                 U _ceil
                 U _ceilf
                 U _ceill
00000000000c6550 T _npy_ceil
00000000000c6340 T _npy_ceilf
00000000000c6760 T _npy_ceill

I agree with Dave, we're going to need to see your build log to have a better chance at diagnosing what went wrong with the build.

A

From brett.olsen at gmail.com Wed Aug 8 11:41:05 2012 From: brett.olsen at gmail.com (Brett Olsen) Date: Wed, 8 Aug 2012 10:41:05 -0500 Subject: [Numpy-discussion] Is there a more efficient way to do this? In-Reply-To: <50227558.6050701@shopzeus.com> References: <50227558.6050701@shopzeus.com> Message-ID:

On Wed, Aug 8, 2012 at 9:19 AM, Laszlo Nagy wrote:
> Is there a more efficient way to calculate the "slices" array below?
>
> I do not want to make copies of DATA, because it can be huge. The
> argsort is fast enough. I just need to create slices for different
> dimensions. The above code works, but it does a linear time search,
> implemented in pure Python code. For every iteration, Python code is
> executed. For 1 million rows, this is very slow. Is there a way to
> produce "slices" with numpy code? I could write C code for this, but I
> would prefer to do it with mass numpy operations.
> > Thanks, > > Laszlo #Code import numpy as np #rows between 100 to 1M rows = 1000 data = np.random.random_integers(0, 100, rows) def get_slices_slow(data): o = np.argsort(data) slices = [] prev_val = None sidx = -1 for oidx, rowidx in enumerate(o): val = data[rowidx] if not val == prev_val: if prev_val is None: prev_val = val sidx = oidx else: slices.append((prev_val, sidx, oidx)) sidx = oidx prev_val = val if (sidx >= 0) and (sidx < rows): slices.append((val, sidx, rows)) slices = np.array(slices, dtype=np.int64) return slices def get_slices_fast(data): nums = np.unique(data) slices = np.zeros((len(nums), 3), dtype=np.int64) slices[:,0] = nums count = 0 for i, num in enumerate(nums): count += (data == num).sum() slices[i,2] = count slices[1:,1] = slices[:-1,2] return slices def get_slices_faster(data): nums = np.unique(data) slices = np.zeros((len(nums), 3), dtype=np.int64) slices[:,0] = nums count = np.bincount(data) slices[:,2] = count.cumsum() slices[1:,1] = slices[:-1,2] return slices #Testing in ipython In [2]: (get_slices_slow(data) == get_slices_fast(data)).all() Out[2]: True In [3]: (get_slices_slow(data) == get_slices_faster(data)).all() Out[3]: True In [4]: timeit get_slices_slow(data) 100 loops, best of 3: 3.51 ms per loop In [5]: timeit get_slices_fast(data) 1000 loops, best of 3: 1.76 ms per loop In [6]: timeit get_slices_faster(data) 10000 loops, best of 3: 116 us per loop So using the fast bincount and array indexing methods gets you about a factor of 30 improvement. Even just doing the counting in a loop with good indexing will get you a factor of 2. ~Brett From matt.terry at gmail.com Wed Aug 8 13:44:43 2012 From: matt.terry at gmail.com (Matt Terry) Date: Wed, 8 Aug 2012 11:44:43 -0600 Subject: [Numpy-discussion] Licensing question In-Reply-To: References: <20120802204453.GI363@quagmire.local> <49821359-DF1F-4CDF-841C-26F314D20E28@continuum.io> <02567B46-AB92-42E3-87D9-02A496D39357@molden.no> Message-ID: > Nice, I am starting to get out of touch with too many packages... > Would be nice to add DCT and DST support to it. FWIW, the DCT has been in scipy.fftpack for a while and DST was just added. From markbak at gmail.com Wed Aug 8 16:09:45 2012 From: markbak at gmail.com (Mark Bakker) Date: Wed, 8 Aug 2012 22:09:45 +0200 Subject: [Numpy-discussion] possible bug in assignment to complex array Message-ID: Dear List, I think there is a problem with assigning a 1D complex array of length one to a position in another complex array. Example: a = ones(1,'D') b = ones(1,'D') a[0] = b --------------------------------------------------------------------------- TypeError Traceback (most recent call last) in () ----> 1 a[0] = b TypeError: can't convert complex to float This works correctly when a and b are real arrays: a = ones(1) b = ones(1) a[0] = b Bug or feature? Thanks, Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From gandalf at shopzeus.com Wed Aug 8 16:46:15 2012 From: gandalf at shopzeus.com (Laszlo Nagy) Date: Wed, 08 Aug 2012 22:46:15 +0200 Subject: [Numpy-discussion] Is there a more efficient way to do this? 
In-Reply-To: References: <50227558.6050701@shopzeus.com> Message-ID: <5022D017.7030600@shopzeus.com>

> In [4]: timeit get_slices_slow(data)
> 100 loops, best of 3: 3.51 ms per loop
>
> In [5]: timeit get_slices_fast(data)
> 1000 loops, best of 3: 1.76 ms per loop
>
> In [6]: timeit get_slices_faster(data)
> 10000 loops, best of 3: 116 us per loop
>
> So using the fast bincount and array indexing methods gets you about a
> factor of 30 improvement. Even just doing the counting in a loop with
> good indexing will get you a factor of 2.

Fantastic, thank you! I do not fully understand your code yet, but I'm going to read all the related docs. :-)

From andyfaff at gmail.com Wed Aug 8 19:47:02 2012 From: andyfaff at gmail.com (Andrew Nelson) Date: Thu, 9 Aug 2012 09:47:02 +1000 Subject: [Numpy-discussion] building numpy 1.6.2 on OSX 10.6 / Python2.7.3 Message-ID:

> Message: 1
> Date: Wed, 8 Aug 2012 08:43:18 +0100
> From: David Cournapeau
> Subject: Re: [Numpy-discussion] building numpy 1.6.2 on OSX 10.6 / Python2.7.3
> Message-ID: <CAGY4rcW4B9krQXBibyEjeW8DZRjwBxCgV4yDZiBpWcG2zVH5VA at mail.gmail.com>
>
> On Wed, Aug 8, 2012 at 6:15 AM, Andrew Nelson wrote:
> > Dear Pierre,
> > as indicated yesterday OSX system python is in:
> > /System/Library/Frameworks/Python.framework/
> > I am installing into:
> > /Library/Frameworks/Python.framework/Versions/Current/lib/python2.7/site-packages
> > This should not present a problem and does not explain why numpy does not
> > build/import correctly on my setup.
>
> Please give us the build log (when rebuilding from scratch to have the
> complete log) so that we can have a better idea of the issue,
>
> David

The build log for the build that fails on my machine can be found at: http://dl.dropbox.com/u/15288921/log

Examining the symbols again:

p0100m:core anz$ pwd
/Users/anz/Downloads/numpy-1.6.2/build/lib.macosx-10.6-intel-2.7/numpy/core
p0100m:core anz$ nm multiarray.so | grep ceil
U _npy_ceil
U _npy_ceil

From icephase26 at gmail.com Thu Aug 9 04:01:31 2012 From: icephase26 at gmail.com (Florian Mueller) Date: Thu, 9 Aug 2012 10:01:31 +0200 Subject: [Numpy-discussion] nested loops too slow In-Reply-To: References: Message-ID:

Hi Nico,

Using for-loops for numerical calculations in Python is often slow, but I don't understand the fancy stuff with the indexing of the arrays. Can you provide a working example script and/or a brief description of the calculation you are performing (including equations)? What does the array look like?

Best regards,
Flo

On Wed, Aug 8, 2012 at 3:47 PM, nicolas aunai wrote:
> Hi,
>
> I'm trying to write a code for doing a 2D integral. It works well when
> I'm doing it with normal "for" loops, but requires two nested loops,
> and is much too slow for my application. I would like to know if it is
> possible to do it faster, for example with fancy indexing and the use
> of numpy.cumsum(). I couldn't find a solution, do you have an idea?
> The code is the following:
>
> http://bpaste.net/show/cAkMBd3sUmhDXq0sIpZ5/
>
> 'flux2' is the result of the calculation with 'for' loops
> implementation, and 'flux' is supposed to be the same result without
> the loop.
If I managed to do it for the single loops (line 23 is > identical to lines 20-21, and line 30 is identical to line 27,28) and > don't know how to do for the nested loops lines 33-35 (line 40 does > not give the same result). > > > Any idea ? > > > Thanks much > Nico > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From x.piter at gmail.com Thu Aug 9 04:04:26 2012 From: x.piter at gmail.com (x.piter at gmail.com) Date: Thu, 09 Aug 2012 10:04:26 +0200 Subject: [Numpy-discussion] how to uninstall numpy References: <87ehnktmq5.fsf@cica.cica> Message-ID: <87mx24o5it.fsf@cica.cica> Thanks to everybody. From dave.hirschfeld at gmail.com Thu Aug 9 09:06:53 2012 From: dave.hirschfeld at gmail.com (Dave Hirschfeld) Date: Thu, 9 Aug 2012 13:06:53 +0000 (UTC) Subject: [Numpy-discussion] =?utf-8?q?Bug_in_as=5Fstrided/reshape?= References: Message-ID: Dave Hirschfeld gmail.com> writes: > > It seems that reshape doesn't work correctly on an array which has been > resized using the 0-stride trick e.g. > > In [73]: x = array([5]) > > In [74]: y = as_strided(x, shape=(10,), strides=(0,)) > > In [75]: y > Out[75]: array([5, 5, 5, 5, 5, 5, 5, 5, 5, 5]) > > In [76]: y.reshape([10,1]) > Out[76]: > array([[ 5], > [ 8], > [ 762933412], > [-2013265919], > [ 26], > [ 64], > [ 762933414], > [-2013244356], > [ 26], > [ 64]]) <================ Should all be 5???????? > > In [77]: y.copy().reshape([10,1]) > Out[77]: > array([[5], > [5], > [5], > [5], > [5], > [5], > [5], > [5], > [5], > [5]]) > > In [78]: np.__version__ > Out[78]: '1.6.2' > > Perhaps a clause such as below is required in reshape? > > if any(stride == 0 for stride in y.strides): > return y.copy().reshape(shape) > else: > return y.reshape(shape) > > Regards, > Dave > Though it would be good to avoid the copy which you should be able to do in this case. Investigating further: In [15]: y.strides Out[15]: (0,) In [16]: z = y.reshape([10,1]) In [17]: z.strides Out[17]: (4, 4) In [18]: z.strides = (0, 4) In [19]: z Out[19]: array([[5], [5], [5], [5], [5], [5], [5], [5], [5], [5]]) In [32]: y.reshape([5, 2]) Out[32]: array([[5, 5], [5, 5], [5, 5], [5, 5], [5, 5]]) In [33]: y.reshape([5, 2]).strides Out[33]: (0, 0) So it seems that reshape is incorrectly setting the stride of axis0 to 4, but only when the appended axis is of size 1. -Dave From sebastian at sipsolutions.net Thu Aug 9 11:22:16 2012 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Thu, 09 Aug 2012 17:22:16 +0200 Subject: [Numpy-discussion] Bug in as_strided/reshape In-Reply-To: References: Message-ID: <1344525736.17185.8.camel@sebastian-laptop> Hello, looking at the code, when only adding/removing dimensions with size 1, numpy takes a small shortcut, however it uses 0 stride lengths as value for the new one element dimensions temporarily, then replacing it again to ensure the new array is contiguous. This replacing does not check if the dimension has more then size 1. Likely there is a better way to fix it, but the attached diff should do it. Regards, Sebastian On Do, 2012-08-09 at 13:06 +0000, Dave Hirschfeld wrote: > Dave Hirschfeld gmail.com> writes: > > > > > It seems that reshape doesn't work correctly on an array which has been > > resized using the 0-stride trick e.g. 
> > > > In [73]: x = array([5]) > > > > In [74]: y = as_strided(x, shape=(10,), strides=(0,)) > > > > In [75]: y > > Out[75]: array([5, 5, 5, 5, 5, 5, 5, 5, 5, 5]) > > > > In [76]: y.reshape([10,1]) > > Out[76]: > > array([[ 5], > > [ 8], > > [ 762933412], > > [-2013265919], > > [ 26], > > [ 64], > > [ 762933414], > > [-2013244356], > > [ 26], > > [ 64]]) <================ Should all be 5???????? > > > > In [77]: y.copy().reshape([10,1]) > > Out[77]: > > array([[5], > > [5], > > [5], > > [5], > > [5], > > [5], > > [5], > > [5], > > [5], > > [5]])--- a/numpy/core/src/multiarray/shape.c +++ b/numpy/core/src/multiarray/shape.c @@ -273,21 +273,21 @@ PyArray_Newshape(PyArrayObject *self, PyArray_Dims *newdims, * appropriate value to preserve contiguousness */ if (order == NPY_FORTRANORDER) { - if (strides[0] == 0) { + if ((strides[0] == 0) && (dimensions[0] == 1)) { strides[0] = PyArray_DESCR(self)->elsize; } for (i = 1; i < ndim; i++) { - if (strides[i] == 0) { + if ((strides[i] == 0) && (dimensions[i] == 1)) { strides[i] = strides[i-1] * dimensions[i-1]; } } } else { - if (strides[ndim-1] == 0) { + if ((strides[ndim-1] == 0) && (dimensions[ndim-1] == 1)) { strides[ndim-1] = PyArray_DESCR(self)->elsize; } for (i = ndim - 2; i > -1; i--) { - if (strides[i] == 0) { + if ((strides[i] == 0) && (dimensions[i] == 1)) { strides[i] = strides[i+1] * dimensions[i+1]; } } > > > > In [78]: np.__version__ > > Out[78]: '1.6.2' > > > > Perhaps a clause such as below is required in reshape? > > > > if any(stride == 0 for stride in y.strides): > > return y.copy().reshape(shape) > > else: > > return y.reshape(shape) > > > > Regards, > > Dave > > > > Though it would be good to avoid the copy which you should be able to do in this > case. Investigating further: > > In [15]: y.strides > Out[15]: (0,) > > In [16]: z = y.reshape([10,1]) > > In [17]: z.strides > Out[17]: (4, 4) > > In [18]: z.strides = (0, 4) > > In [19]: z > Out[19]: > array([[5], > [5], > [5], > [5], > [5], > [5], > [5], > [5], > [5], > [5]]) > > In [32]: y.reshape([5, 2]) > Out[32]: > array([[5, 5], > [5, 5], > [5, 5], > [5, 5], > [5, 5]]) > > In [33]: y.reshape([5, 2]).strides > Out[33]: (0, 0) > > So it seems that reshape is incorrectly setting the stride of axis0 to 4, but > only when the appended axis is of size 1. > > -Dave > > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- A non-text attachment was scrubbed... Name: 0001-Fix-reshaping-of-arrays-with-stride-0-in-a-dimension.patch Type: text/x-patch Size: 1655 bytes Desc: not available URL: From chris.barker at noaa.gov Thu Aug 9 12:36:54 2012 From: chris.barker at noaa.gov (Chris Barker) Date: Thu, 9 Aug 2012 09:36:54 -0700 Subject: [Numpy-discussion] how to uninstall numpy In-Reply-To: <87mx24o5it.fsf@cica.cica> References: <87ehnktmq5.fsf@cica.cica> <87mx24o5it.fsf@cica.cica> Message-ID: It depends a bit on how you installed it, but for the most part you should simiply be able to delete the numpy directory in site_packages. -Chris On Thu, Aug 9, 2012 at 1:04 AM, wrote: > Thanks to everybody. > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion -- Christopher Barker, Ph.D. 
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception

Chris.Barker at noaa.gov

From dave.hirschfeld at gmail.com Fri Aug 10 03:35:15 2012 From: dave.hirschfeld at gmail.com (Dave Hirschfeld) Date: Fri, 10 Aug 2012 07:35:15 +0000 (UTC) Subject: [Numpy-discussion] Bug in as_strided/reshape References: <1344525736.17185.8.camel@sebastian-laptop> Message-ID:

Sebastian Berg <sebastian at sipsolutions.net> writes:
>
> Hello,
>
> looking at the code, when only adding/removing dimensions with size 1,
> numpy takes a small shortcut, however it uses 0 stride lengths as value
> for the new one element dimensions temporarily, then replacing it again
> to ensure the new array is contiguous.
> This replacing does not check if the dimension has more than size 1.
> Likely there is a better way to fix it, but the attached diff should do it.
>
> Regards,
>
> Sebastian

Thanks for the confirmation. So this doesn't get lost I've opened issue #380 on GitHub: https://github.com/numpy/numpy/issues/380

-Dave

From markbak at gmail.com Fri Aug 10 03:54:31 2012 From: markbak at gmail.com (Mark Bakker) Date: Fri, 10 Aug 2012 09:54:31 +0200 Subject: [Numpy-discussion] Second try: possible bug in assignment to complex array Message-ID:

I am giving this a second try. Can anybody help me out?

> I think there is a problem with assigning a 1D complex array of length one
> to a position in another complex array.
>
> Example:
>
> a = ones(1,'D')
> b = ones(1,'D')
> a[0] = b
> ---------------------------------------------------------------------------
> TypeError                                 Traceback (most recent call last)
> in <module>()
> ----> 1 a[0] = b
>
> TypeError: can't convert complex to float
>
> This works correctly when a and b are real arrays:
>
> a = ones(1)
> b = ones(1)
> a[0] = b
>
> Bug or feature?
>
> Thanks,
>
> Mark

From dave.hirschfeld at gmail.com Fri Aug 10 05:36:48 2012 From: dave.hirschfeld at gmail.com (Dave Hirschfeld) Date: Fri, 10 Aug 2012 09:36:48 +0000 (UTC) Subject: [Numpy-discussion] Second try: possible bug in assignment to complex array References: Message-ID:

Mark Bakker <markbak at gmail.com> writes:
>
> I think there is a problem with assigning a 1D complex array of length one
> to a position in another complex array.
> Example:
> a = ones(1,'D')
> b = ones(1,'D')
> a[0] = b
> ---------------------------------------------------------------------------
> TypeError                                 Traceback (most recent call last)
> in <module>()
> ----> 1 a[0] = b
> TypeError: can't convert complex to float
> This works correctly when a and b are real arrays:
> a = ones(1)
> b = ones(1)
> a[0] = b
> Bug or feature?
> Thanks,
> Mark

I can't help unfortunately, but I can confirm that I also see the problem on Win32 Python 2.7.3, numpy 1.6.2.
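For what it's worth, a few quick checks here suggest it is specifically item assignment from a size-1 complex array that trips up (stock numpy 1.6.2, nothing exotic; the two passing variants below match what others have reported, but take the comments as my observations rather than gospel):

import numpy as np

a = np.ones(1, dtype=complex)
a[0] = np.ones(1, dtype=complex)  # TypeError: can't convert complex to float
a[0] = np.ones(1, dtype=float)    # works: a size-1 real array is accepted
a[0] = 1 + 2j                     # works: a plain complex scalar is accepted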
As a workaround it appears that slicing works:

In [15]: sys.version
Out[15]: '2.7.2 (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit (Intel)]'

In [17]: np.__version__
Out[17]: '1.6.2'

In [18]: a = ones(1,'D')

In [19]: b = 2*ones(1,'D')

In [20]: a[0] = b
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
in <module>()
----> 1 a[0] = b

TypeError: can't convert complex to float

In [21]: a[0:1] = b

In [22]:

-Dave

From silva at lma.cnrs-mrs.fr Fri Aug 10 05:44:21 2012 From: silva at lma.cnrs-mrs.fr (Fabrice Silva) Date: Fri, 10 Aug 2012 11:44:21 +0200 Subject: [Numpy-discussion] Second try: possible bug in assignment to complex array In-Reply-To: References: Message-ID: <1344591861.1832.10.camel@amilo.coursju>

On Friday 10 August 2012, Dave Hirschfeld wrote:
> Mark Bakker <markbak at gmail.com> writes:
> > I think there is a problem with assigning a 1D complex array of length one
> > to a position in another complex array.
> > Example:
> > a = ones(1,'D')
> > b = ones(1,'D')
> > a[0] = b
> > TypeError: can't convert complex to float
>
> I can't help unfortunately, but I can confirm that I also see the problem
> on Win32 Python 2.7.3, numpy 1.6.2.
> As a workaround it appears that slicing works:

Same on debian (unstable), Python 2.7, numpy 1.6.2

In [5]: a[0] = b
TypeError: can't convert complex to float

In [6]: a[0] = b[0]

Other workarounds: asscalar and squeeze

In [7]: a[0] = np.asscalar(b)

In [8]: a[0] = b.squeeze()

--
Fabrice Silva

From paul.anton.letnes at gmail.com Fri Aug 10 06:37:39 2012 From: paul.anton.letnes at gmail.com (Paul Anton Letnes) Date: Fri, 10 Aug 2012 12:37:39 +0200 Subject: [Numpy-discussion] Second try: possible bug in assignment to complex array In-Reply-To: References: Message-ID: <614610FE-F2D1-480D-B8D2-39EAE1BF6829@gmail.com>

On 10. aug. 2012, at 09:54, Mark Bakker wrote:

> I am giving this a second try. Can anybody help me out?
>
>> I think there is a problem with assigning a 1D complex array of length one
>> to a position in another complex array.
>> >> Example: >> >> a = ones(1,'D') >> b = ones(1,'D') >> a[0] = b >> --------------------------------------------------------------------------- >> TypeError Traceback (most recent call last) >> in () >> ----> 1 a[0] = b >> >> TypeError: can't convert complex to float >> >> This works correctly when a and b are real arrays: >> >> a = ones(1) >> b = ones(1) >> a[0] = b >> >> Bug or feature? > > The exact same thing happens on OS X 10.7.4, python 2.7.3, numpy 1.6.1. > > Looks like a bug to me - or at least very surprising behavior. This is definitely an inconsistency. The error seems more correct (though the error message needs improvement). Can someone try this on NumPy 1.5 and see if this inconsistency existed there as well. Thanks, -Travis > > Paul > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From josef.pktd at gmail.com Fri Aug 10 11:41:03 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 10 Aug 2012 11:41:03 -0400 Subject: [Numpy-discussion] Second try: possible bug in assignment to complex array In-Reply-To: <446B4442-10E2-40B0-AA81-CE07A6989B03@continuum.io> References: <614610FE-F2D1-480D-B8D2-39EAE1BF6829@gmail.com> <446B4442-10E2-40B0-AA81-CE07A6989B03@continuum.io> Message-ID: On Fri, Aug 10, 2012 at 10:00 AM, Travis Oliphant wrote: > > On Aug 10, 2012, at 5:37 AM, Paul Anton Letnes wrote: > > > > > > > On 10. aug. 2012, at 09:54, Mark Bakker wrote: > > > >> I am giving this a second try. Can anybody help me out? > >> > >> I think there is a problem with assigning a 1D complex array of length > one > >> to a position in another complex array. > >> > >> Example: > >> > >> a = ones(1,'D') > >> b = ones(1,'D') > >> a[0] = b > >> > --------------------------------------------------------------------------- > >> TypeError Traceback (most recent call > last) > >> in () > >> ----> 1 a[0] = b > >> > >> TypeError: can't convert complex to float > >> > >> This works correctly when a and b are real arrays: > >> > >> a = ones(1) > >> b = ones(1) > >> a[0] = b > >> > >> Bug or feature? > > > > The exact same thing happens on OS X 10.7.4, python 2.7.3, numpy 1.6.1. > > > > Looks like a bug to me - or at least very surprising behavior. > > This is definitely an inconsistency. The error seems more correct > (though the error message needs improvement). > > Can someone try this on NumPy 1.5 and see if this inconsistency existed > there as well. > >>> np.__version__ '1.5.1' >>> a = np.ones(1,'D') >>> b = np.ones(1,'D') >>> a[0] = b Traceback (most recent call last): File "", line 1, in TypeError: can't convert complex to float >>> a = np.ones(1) >>> b = np.ones(1) >>> a[0] = b Josef > Thanks, > > -Travis > > > > > Paul > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From josef.pktd at gmail.com Fri Aug 10 11:46:07 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 10 Aug 2012 11:46:07 -0400 Subject: [Numpy-discussion] Second try: possible bug in assignment to complex array In-Reply-To: References: <614610FE-F2D1-480D-B8D2-39EAE1BF6829@gmail.com> <446B4442-10E2-40B0-AA81-CE07A6989B03@continuum.io> Message-ID: On Fri, Aug 10, 2012 at 11:41 AM, wrote: > > > On Fri, Aug 10, 2012 at 10:00 AM, Travis Oliphant wrote: > >> >> On Aug 10, 2012, at 5:37 AM, Paul Anton Letnes wrote: >> >> > >> > >> > On 10. aug. 2012, at 09:54, Mark Bakker wrote: >> > >> >> I am giving this a second try. Can anybody help me out? >> >> >> >> I think there is a problem with assigning a 1D complex array of length >> one >> >> to a position in another complex array. >> >> >> >> Example: >> >> >> >> a = ones(1,'D') >> >> b = ones(1,'D') >> >> a[0] = b >> >> >> --------------------------------------------------------------------------- >> >> TypeError Traceback (most recent call >> last) >> >> in () >> >> ----> 1 a[0] = b >> >> >> >> TypeError: can't convert complex to float >> >> >> >> This works correctly when a and b are real arrays: >> >> >> >> a = ones(1) >> >> b = ones(1) >> >> a[0] = b >> >> >> >> Bug or feature? >> > >> > The exact same thing happens on OS X 10.7.4, python 2.7.3, numpy 1.6.1. >> > >> > Looks like a bug to me - or at least very surprising behavior. >> >> This is definitely an inconsistency. The error seems more correct >> (though the error message needs improvement). >> >> Can someone try this on NumPy 1.5 and see if this inconsistency existed >> there as well. >> > > >>> np.__version__ > '1.5.1' > > >>> a = np.ones(1,'D') > >>> b = np.ones(1,'D') > >>> a[0] = b > Traceback (most recent call last): > File "", line 1, in > > TypeError: can't convert complex to float > >>> a = np.ones(1) > >>> b = np.ones(1) > >>> a[0] = b > and >>> a = np.ones(1,'D') >>> b = 2*np.ones(1) >>> a[0] = b >>> a array([ 2.+0.j]) >>> c = 3*np.ones(1, int) >>> a[0] = c >>> a array([ 3.+0.j]) > > Josef > > >> Thanks, >> >> -Travis >> >> > >> > Paul >> > _______________________________________________ >> > NumPy-Discussion mailing list >> > NumPy-Discussion at scipy.org >> > http://mail.scipy.org/mailman/listinfo/numpy-discussion >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sergio.pasra at gmail.com Sat Aug 11 06:23:55 2012 From: sergio.pasra at gmail.com (Sergio Pascual) Date: Sat, 11 Aug 2012 12:23:55 +0200 Subject: [Numpy-discussion] Limit in NpyIter_MultiNew: NPY_MAXARGS Message-ID: I have written a program that makes use of NpyIter_Multi to iterate simultaneously over a large number of 2d arrays. But I have noticed that there is a limit of 32 in the number of simultaneous iterators allowed. This number is NPY_MAXARGS. I typically need to iterate over hundreds of images. So, what is the rationale to have the limit in 32 instead of, say, 1024? If this number cannot easily be increased in the source code, what is the alternative to iterate over a large number of images? Could it be an array of NpyIter objects? 
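At the Python level, the only fallback I can think of is to process the images in groups small enough for one iterator and then combine the partial results, roughly like this (np.minimum is only a placeholder for my real per-pixel operation):

import numpy as np

CHUNK = 31  # NPY_MAXARGS is 32: at most 31 inputs plus 1 output per iterator

def combine(images, binop=np.minimum):
    # Reduce each group, then reduce the per-group partial results.
    # At the C level each group would get its own NpyIter.
    partials = []
    for i in range(0, len(images), CHUNK):
        group = images[i:i + CHUNK]
        acc = group[0].copy()
        for img in group[1:]:
            binop(acc, img, acc)  # third argument is the output buffer
        partials.append(acc)
    acc = partials[0]
    for p in partials[1:]:
        binop(acc, p, acc)
    return acc

But this only helps when the operation decomposes into pairwise steps, which is not true in general.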
Regards, Sergio From markbak at gmail.com Sat Aug 11 07:16:40 2012 From: markbak at gmail.com (Mark Bakker) Date: Sat, 11 Aug 2012 13:16:40 +0200 Subject: [Numpy-discussion] Second try: possible bug in assignment to complex array Message-ID: Shall I file a bug report? Or is this fairly easy to fix? Mark > > On Fri, Aug 10, 2012 at 11:41 AM, wrote: > > > > > > > On Fri, Aug 10, 2012 at 10:00 AM, Travis Oliphant >wrote: > > > >> > >> On Aug 10, 2012, at 5:37 AM, Paul Anton Letnes wrote: > >> > >> > > >> > > >> > On 10. aug. 2012, at 09:54, Mark Bakker wrote: > >> > > >> >> I am giving this a second try. Can anybody help me out? > >> >> > >> >> I think there is a problem with assigning a 1D complex array of > length > >> one > >> >> to a position in another complex array. > >> >> > >> >> Example: > >> >> > >> >> a = ones(1,'D') > >> >> b = ones(1,'D') > >> >> a[0] = b > >> >> > >> > --------------------------------------------------------------------------- > >> >> TypeError Traceback (most recent call > >> last) > >> >> in () > >> >> ----> 1 a[0] = b > >> >> > >> >> TypeError: can't convert complex to float > >> >> > >> >> This works correctly when a and b are real arrays: > >> >> > >> >> a = ones(1) > >> >> b = ones(1) > >> >> a[0] = b > >> >> > >> >> Bug or feature? > >> > > >> > The exact same thing happens on OS X 10.7.4, python 2.7.3, numpy > 1.6.1. > >> > > >> > Looks like a bug to me - or at least very surprising behavior. > >> > >> This is definitely an inconsistency. The error seems more correct > >> (though the error message needs improvement). > >> > >> Can someone try this on NumPy 1.5 and see if this inconsistency existed > >> there as well. > >> > > > > >>> np.__version__ > > '1.5.1' > > > > >>> a = np.ones(1,'D') > > >>> b = np.ones(1,'D') > > >>> a[0] = b > > Traceback (most recent call last): > > File "", line 1, in > > > > TypeError: can't convert complex to float > > >>> a = np.ones(1) > > >>> b = np.ones(1) > > >>> a[0] = b > > > > and > > >>> a = np.ones(1,'D') > >>> b = 2*np.ones(1) > >>> a[0] = b > >>> a > array([ 2.+0.j]) > >>> c = 3*np.ones(1, int) > >>> a[0] = c > >>> a > array([ 3.+0.j]) > > > > > > > Josef > > > > > >> Thanks, > >> > >> -Travis > >> > >> > > >> > Paul > >> > _______________________________________________ > >> > NumPy-Discussion mailing list > >> > NumPy-Discussion at scipy.org > >> > http://mail.scipy.org/mailman/listinfo/numpy-discussion > >> > >> _______________________________________________ > >> NumPy-Discussion mailing list > >> NumPy-Discussion at scipy.org > >> http://mail.scipy.org/mailman/listinfo/numpy-discussion > >> > > > > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > http://mail.scipy.org/pipermail/numpy-discussion/attachments/20120810/05588327/attachment.html > > ------------------------------ > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > End of NumPy-Discussion Digest, Vol 71, Issue 18 > ************************************************ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Sat Aug 11 20:36:28 2012 From: matthew.brett at gmail.com (Matthew Brett) Date: Sat, 11 Aug 2012 17:36:28 -0700 Subject: [Numpy-discussion] Slow divide of int64? 
Message-ID:

Hi,

A friend of mine just pointed out that dividing by int64 is considerably slower than multiplying in numpy. A quick benchmark gives (64 bit Debian Intel system, numpy trunk):

Mul32 2.71295905113
Div32 6.61985301971
Mul64 2.78101611137
Div64 22.8217148781

with similar values for numpy 1.5.1.

Crude testing with Matlab and Octave suggests they do not seem to have this same difference:

>> divtest
Mul32 4.300662
Div32 5.638622
Mul64 7.894490
Div64 18.121182

octave:2> divtest
Mul32 3.960577
Div32 6.553704
Mul64 7.268324
Div64 13.670760

(files attached)

Is there something specific about division in numpy that would cause this slowdown?

Cheers,
Matthew

-------------- next part --------------
Attachment: divtest.m (application/octet-stream, 344 bytes)
-------------- next part --------------
Attachment: timeit.m (application/octet-stream, 65 bytes)

From silva at lma.cnrs-mrs.fr Sun Aug 12 06:41:50 2012 From: silva at lma.cnrs-mrs.fr (Fabrice Silva) Date: Sun, 12 Aug 2012 12:41:50 +0200 Subject: [Numpy-discussion] A step toward merging odeint and ode Message-ID: <1344768110.26417.6.camel@amilo.coursju>

I made a pull request [1] to integrate the LSODA solver that is used in odeint into the modular scipy.integrate.ode generic class. In a similar way as for vode, it just wraps the already present lsoda.f file (see the .pyf file) and exposes it within an IntegratorBase subclass adjusting the coefficients before calling lsoda. Note that lsoda provides automatic switching between stiff and non-stiff methods, a feature that is not present in the available vode integrator. Final note: tests are ok!

Regards,

[1] https://github.com/scipy/scipy/pull/273
--
Fabrice Silva

From ondrej.certik at gmail.com Mon Aug 13 00:13:21 2012 From: ondrej.certik at gmail.com (Ondřej Čertík) Date: Sun, 12 Aug 2012 21:13:21 -0700 Subject: [Numpy-discussion] Vagrant VM for building NumPy (1.7.x) Windows binaries Message-ID:

Hi,

I've created this repository:

https://github.com/certik/numpy-vendor

which uses Vagrant and Fabric to fully automate the setup & creation of NumPy binaries for Windows. The setup is especially tricky; I've thought several times already that I nailed it, and then new things always pop up. One can of course install things directly in Ubuntu, but it's tricky, there are a lot of things that can go wrong. The above approach should be 100% reproducible. So hopefully this repository will be useful for somebody new (like I am) to numpy releases. Also my hope is that more people can help out with the release just by running it on their machines and/or sending PRs against this repository.

All essential logic is in these two files:

https://github.com/certik/numpy-vendor/blob/master/fabfile.py
https://github.com/certik/numpy-vendor/blob/master/setup-wine.sh

The setup-wine.sh can be used directly in Ubuntu as well (it erases ~/.wine and reinstalls it).

Times: on my computer and internet speed it takes a little less than 1h to prepare the VM (it needs to download the 300MB Ubuntu, then about 600MB of Latex, and a few MB of other stuff, and then about 12 minutes of configuring), and about 0.5h for each binary, so about 6h total (the docs only take a few minutes).

After doing all this, I just realized that I forgot to build numpy against atlas. So I will need to fix that. But apart from that, it should be ok.
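By the way, if you want to try it in the meantime, the whole flow should just be the following (assuming Vagrant and Fabric are installed; I won't list the task names here since I keep tweaking them, `fab -l` prints the current ones):

git clone https://github.com/certik/numpy-vendor
cd numpy-vendor
vagrant up    # downloads the base box and provisions the Ubuntu VM
fab -l        # lists the build tasks defined in fabfile.py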
Travis, should I upload the binaries to sourceforge and do the beta release with this? Or should we wait until I fix the atlas build? Also I need to do the Mac binaries. I'll have the Win binaries in about 6h from now; I am running one final build from scratch now. My apologies that it took me longer than expected.

Ondrej

P.S. Here is a PR for review (you need to apply it manually if you want the above to work): https://github.com/numpy/numpy/pull/384

From njs at pobox.com Mon Aug 13 05:20:12 2012 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 13 Aug 2012 10:20:12 +0100 Subject: [Numpy-discussion] Vagrant VM for building NumPy (1.7.x) Windows binaries In-Reply-To: References: Message-ID:

On Mon, Aug 13, 2012 at 5:13 AM, Ondřej Čertík wrote:
> All essential logic is in these two files:
>
> https://github.com/certik/numpy-vendor/blob/master/fabfile.py
> https://github.com/certik/numpy-vendor/blob/master/setup-wine.sh
>
> The setup-wine.sh can be used directly in Ubuntu as well (it erases
> ~/.wine and reinstalls it).

If you want setup-wine.sh to be generally useful, I'd suggest:
- moving the dangerous rm -rf logic to the fabfile instead
- having it set up into a user-specified directory instead of ~/.wine. My ~/.wine already has stuff in it...

With wine it is very trivial to create a new "windows VM" -- you just set the environment variable WINEPREFIX to point to any directory, which then acts like ~/.wine. It's often recommended therefore to install every windows program into its own pristine WINEPREFIX, so as to avoid cross-contamination, make it easier to blow things away when necessary, and generally avoid the difficulty of actually administering windows.
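For example, a minimal sketch (untested here; the prefix path is arbitrary and nothing about it is specific to numpy-vendor):

export WINEPREFIX=$HOME/numpy-wine-prefix   # used instead of the default ~/.wine
wine cmd /c echo ok                         # the first wine command creates and initialises the prefix

Then the rm -rf only ever needs to touch $WINEPREFIX.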
Cheers,
-n

From cournape at gmail.com Mon Aug 13 09:46:10 2012 From: cournape at gmail.com (David Cournapeau) Date: Mon, 13 Aug 2012 14:46:10 +0100 Subject: [Numpy-discussion] Vagrant VM for building NumPy (1.7.x) Windows binaries In-Reply-To: References: Message-ID:

Hi Ondrej,

On Mon, Aug 13, 2012 at 5:13 AM, Ondřej Čertík wrote:
> Hi,
>
> I've created this repository:
>
> https://github.com/certik/numpy-vendor
>
> which uses Vagrant and Fabric to fully automate the setup & creation
> of NumPy binaries for Windows. The setup is especially tricky,
> I've thought several times already that I nailed it, and then always
> new things pop up. One can of course install things directly in
> Ubuntu, but it's tricky, there are a lot of things that can go wrong.
> The above approach should be 100% reproducible. So hopefully this
> repository will be useful for somebody new (like I am) to numpy releases.
> Also my hope is that more people can help out with the release just by
> running it on their machines and/or sending PRs against this repository.

Thanks for doing this. I think vagrant is the way to go. I myself have some stuff for native windows and vagrant (much more painful, but sometimes necessary unfortunately).

Did you see veewee to create vagrant boxes? It simplifies quite a few things, but maybe they matter more on windows than on linux, where this kind of thing is much simpler.

David

From ondrej.certik at gmail.com Mon Aug 13 11:06:45 2012 From: ondrej.certik at gmail.com (Ondřej Čertík) Date: Mon, 13 Aug 2012 08:06:45 -0700 Subject: [Numpy-discussion] Vagrant VM for building NumPy (1.7.x) Windows binaries In-Reply-To: References: Message-ID:

On Mon, Aug 13, 2012 at 6:46 AM, David Cournapeau wrote:
> Thanks for doing this. I think vagrant is the way to go. I myself have
> some stuff for native windows and vagrant (much more painful, but
> sometimes necessary unfortunately).
>
> Did you see veewee to create vagrant boxes? It simplifies quite a few
> things, but maybe they matter more on windows than on linux, where
> this kind of thing is much simpler.

Nice, I'll try it. So far I was using https://github.com/cal/vagrant-ubuntu-precise-64 and it doesn't work for me: https://github.com/cal/vagrant-ubuntu-precise-64/issues/10

Nathan, thanks for the WINEPREFIX trick, I didn't know about that. You are right, wine itself is pretty much an isolated VM. Unfortunately the things around it aren't. I'll see after the release; if there is interest, I'll be happy to make the scripts more general.

Ondrej

P.S. The total building time for all binaries was actually less than 2h (I miscalculated it). So that's very usable.

From sergio.pasra at gmail.com Mon Aug 13 12:16:43 2012 From: sergio.pasra at gmail.com (Sergio Pascual) Date: Mon, 13 Aug 2012 18:16:43 +0200 Subject: [Numpy-discussion] Iterating over a given axis with buffering Message-ID:

Hi,

I have a 3d array of byteswapped data, dimensions 2000x2000x20. I want to apply a function over the third axis to obtain a 2000x2000 array. This is what numpy.apply_along_axis does, but I need a faster version, so I'm writing it in C.

I started by writing a generic ufunc version of the code. It is not what I need, because my particular function has several free parameters, but with this version I could get a hint of the performance. The ufunc was nice, because it handled the byteswapped data, and naturally the innermost loop was over the third axis.

So I tried to write the final version using NpyIter. The only way I found to do the innermost loop over the third axis was passing the flag NPY_ITER_MULTI_INDEX, but then I can't use NPY_ITER_BUFFERED to handle my byteswapped data; at least in numpy 1.6 they can't be used simultaneously. Any hint on how I can do this?
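For reference, the pure Python version I am trying to beat is essentially this (scaled down, and with np.mean standing in for my real function, which takes extra parameters):

import numpy as np

# '>f8' is byteswapped relative to a little-endian machine, which is
# what forces the buffered path in the C version
cube = np.ones((200, 200, 20), dtype='>f8')

result = np.apply_along_axis(np.mean, 2, cube)  # -> a (200, 200) array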
Regards, Sergio From ondrej.certik at gmail.com Mon Aug 13 12:22:11 2012 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Mon, 13 Aug 2012 09:22:11 -0700 Subject: [Numpy-discussion] Vagrant VM for building NumPy (1.7.x) Windows binaries In-Reply-To: References: Message-ID: On Mon, Aug 13, 2012 at 8:06 AM, Ond?ej ?ert?k wrote: > On Mon, Aug 13, 2012 at 6:46 AM, David Cournapeau wrote: [...] >> Did you see veewee to create vagrant boxes ? It simplifies quite a few >> things, but maybe they matter more on windows than on linux, where >> this kind of things is much simpler. > > Nice, I'll try it. Sadly, it does not work either for me: https://github.com/jedi4ever/veewee/issues/361 Ondrej From ralf.gommers at gmail.com Mon Aug 13 14:30:44 2012 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Mon, 13 Aug 2012 20:30:44 +0200 Subject: [Numpy-discussion] ANN: SciPy 0.11.0 release candidate 2 Message-ID: Hi, I am pleased to announce the availability of the second release candidate of SciPy 0.11.0. For this release many new features have been added, and over 120 tickets and pull requests have been closed. Also noteworthy is that the number of contributors for this release has risen to over 50. Some of the highlights are: - A new module, sparse.csgraph, has been added which provides a number of common sparse graph algorithms. - New unified interfaces to the existing optimization and root finding functions have been added. Sources and binaries can be found at http://sourceforge.net/projects/scipy/files/scipy/0.11.0rc2/, release notes are copied below. For this release candidate all known issues (with the exception of one Qhull issue on Debian, s390x platform) have been solved. In the meantime also OS X 10.8 was released, this RC contains a few build fixes for that platform. If no more serious issues are reported, the final release will be in one week. Cheers, Ralf ========================== SciPy 0.11.0 Release Notes ========================== .. note:: Scipy 0.11.0 is not released yet! .. contents:: SciPy 0.11.0 is the culmination of 8 months of hard work. It contains many new features, numerous bug-fixes, improved test coverage and better documentation. Highlights of this release are: - A new module has been added which provides a number of common sparse graph algorithms. - New unified interfaces to the existing optimization and root finding functions have been added. All users are encouraged to upgrade to this release, as there are a large number of bug-fixes and optimizations. Our development attention will now shift to bug-fix releases on the 0.11.x branch, and on adding new features on the master branch. This release requires Python 2.4-2.7 or 3.1-3.2 and NumPy 1.5.1 or greater. New features ============ Sparse Graph Submodule ---------------------- The new submodule :mod:`scipy.sparse.csgraph` implements a number of efficient graph algorithms for graphs stored as sparse adjacency matrices. 
Available routines are: - :func:`connected_components` - determine connected components of a graph - :func:`laplacian` - compute the laplacian of a graph - :func:`shortest_path` - compute the shortest path between points on a positive graph - :func:`dijkstra` - use Dijkstra's algorithm for shortest path - :func:`floyd_warshall` - use the Floyd-Warshall algorithm for shortest path - :func:`breadth_first_order` - compute a breadth-first order of nodes - :func:`depth_first_order` - compute a depth-first order of nodes - :func:`breadth_first_tree` - construct the breadth-first tree from a given node - :func:`depth_first_tree` - construct a depth-first tree from a given node - :func:`minimum_spanning_tree` - construct the minimum spanning tree of a graph ``scipy.optimize`` improvements ------------------------------- The optimize module has received a lot of attention this release. In addition to added tests, documentation improvements, bug fixes and code clean-up, the following improvements were made: - A unified interface to minimizers of univariate and multivariate functions has been added. - A unified interface to root finding algorithms for multivariate functions has been added. - The L-BFGS-B algorithm has been updated to version 3.0. Unified interfaces to minimizers ```````````````````````````````` Two new functions ``scipy.optimize.minimize`` and ``scipy.optimize.minimize_scalar`` were added to provide a common interface to minimizers of multivariate and univariate functions respectively. For multivariate functions, ``scipy.optimize.minimize`` provides an interface to methods for unconstrained optimization (`fmin`, `fmin_powell`, `fmin_cg`, `fmin_ncg`, `fmin_bfgs` and `anneal`) or constrained optimization (`fmin_l_bfgs_b`, `fmin_tnc`, `fmin_cobyla` and `fmin_slsqp`). For univariate functions, ``scipy.optimize.minimize_scalar`` provides an interface to methods for unconstrained and bounded optimization (`brent`, `golden`, `fminbound`). This allows for easier comparing and switching between solvers. Unified interface to root finding algorithms ```````````````````````````````````````````` The new function ``scipy.optimize.root`` provides a common interface to root finding algorithms for multivariate functions, embeding `fsolve`, `leastsq` and `nonlin` solvers. ``scipy.linalg`` improvements ----------------------------- New matrix equation solvers ``````````````````````````` Solvers for the Sylvester equation (``scipy.linalg.solve_sylvester``, discrete and continuous Lyapunov equations (``scipy.linalg.solve_lyapunov``, ``scipy.linalg.solve_discrete_lyapunov``) and discrete and continuous algebraic Riccati equations (``scipy.linalg.solve_continuous_are``, ``scipy.linalg.solve_discrete_are``) have been added to ``scipy.linalg``. These solvers are often used in the field of linear control theory. QZ and QR Decomposition ```````````````````````` It is now possible to calculate the QZ, or Generalized Schur, decomposition using ``scipy.linalg.qz``. This function wraps the LAPACK routines sgges, dgges, cgges, and zgges. The function ``scipy.linalg.qr_multiply``, which allows efficient computation of the matrix product of Q (from a QR decompostion) and a vector, has been added. Pascal matrices ``````````````` A function for creating Pascal matrices, ``scipy.linalg.pascal``, was added. 
Sparse matrix construction and operations
-----------------------------------------

Two new functions, ``scipy.sparse.diags`` and ``scipy.sparse.block_diag``,
were added to easily construct diagonal and block-diagonal sparse matrices
respectively.

``scipy.sparse.csc_matrix`` and ``csr_matrix`` now support the operations
``sin``, ``tan``, ``arcsin``, ``arctan``, ``sinh``, ``tanh``, ``arcsinh``,
``arctanh``, ``rint``, ``sign``, ``expm1``, ``log1p``, ``deg2rad``,
``rad2deg``, ``floor``, ``ceil`` and ``trunc``. Previously, these operations
had to be performed by operating on the matrices' ``data`` attribute.

LSMR iterative solver
---------------------

LSMR, an iterative method for solving (sparse) linear and linear
least-squares systems, was added as ``scipy.sparse.linalg.lsmr``.

Discrete Sine Transform
-----------------------

Bindings for the discrete sine transform functions have been added to
``scipy.fftpack``.

``scipy.interpolate`` improvements
----------------------------------

For interpolation in spherical coordinates, the three classes
``scipy.interpolate.SmoothSphereBivariateSpline``,
``scipy.interpolate.LSQSphereBivariateSpline``, and
``scipy.interpolate.RectSphereBivariateSpline`` have been added.

Binned statistics (``scipy.stats``)
-----------------------------------

The stats module has gained functions to do binned statistics, which are a
generalization of histograms, in 1-D, 2-D and multiple dimensions:
``scipy.stats.binned_statistic``, ``scipy.stats.binned_statistic_2d`` and
``scipy.stats.binned_statistic_dd``.

Deprecated features
===================

``scipy.sparse.cs_graph_components`` has been made a part of the sparse graph
submodule, and renamed to ``scipy.sparse.csgraph.connected_components``.
Calling the former routine will result in a deprecation warning.

``scipy.misc.radon`` has been deprecated. A more full-featured radon
transform can be found in scikits-image.

``scipy.io.save_as_module`` has been deprecated. A better way to save
multiple Numpy arrays is the ``numpy.savez`` function.

The `xa` and `xb` parameters for all distributions in
``scipy.stats.distributions`` were not being used; they have now been
deprecated.

Backwards incompatible changes
==============================

Removal of ``scipy.maxentropy``
-------------------------------

The ``scipy.maxentropy`` module, which was deprecated in the 0.10.0 release,
has been removed. Logistic regression in scikits.learn is a good and modern
alternative for this functionality.

Minor change in behavior of ``splev``
-------------------------------------

The spline evaluation function now behaves similarly to ``interp1d`` for
size-1 arrays. Previous behavior::

    >>> from scipy.interpolate import splev, splrep, interp1d
    >>> x = [1,2,3,4,5]
    >>> y = [4,5,6,7,8]
    >>> tck = splrep(x, y)
    >>> splev([1], tck)
    4.
    >>> splev(1, tck)
    4.

Corrected behavior::

    >>> splev([1], tck)
    array([ 4.])
    >>> splev(1, tck)
    array(4.)

This also affects the ``UnivariateSpline`` classes.

Behavior of ``scipy.integrate.complex_ode``
-------------------------------------------

The behavior of the ``y`` attribute of ``complex_ode`` is changed.
Previously, it expressed the complex-valued solution in the form::

    z = ode.y[::2] + 1j * ode.y[1::2]

Now, it is directly the complex-valued solution::

    z = ode.y

Minor change in behavior of T-tests
-----------------------------------

The T-tests ``scipy.stats.ttest_ind``, ``scipy.stats.ttest_rel`` and
``scipy.stats.ttest_1samp`` have been changed so that 0 / 0 now returns NaN
instead of 1.
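To make the binned statistics functions described above concrete, a minimal
sketch (editorial, not part of the official notes; the three return values
follow the 0.11 documentation)::

    import numpy as np
    from scipy.stats import binned_statistic

    x = np.random.rand(100)
    values = x ** 2
    # mean of ``values`` within each of 10 equal-width bins of ``x``
    stat, bin_edges, binnumber = binned_statistic(x, values,
                                                  statistic='mean', bins=10)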
Other changes ============= The SuperLU sources in ``scipy.sparse.linalg`` have been updated to version 4.3 from upstream. The function ``scipy.signal.bode``, which calculates magnitude and phase data for a continuous-time system, has been added. The two-sample T-test ``scipy.stats.ttest_ind`` gained an option to compare samples with unequal variances, i.e. Welch's T-test. ``scipy.misc.logsumexp`` now takes an optional ``axis`` keyword argument. Authors ======= This release contains work by the following people (contributed at least one patch to this release, names in alphabetical order): * Jeff Armstrong * Chad Baker * Brandon Beacher + * behrisch + * borishim + * Matthew Brett * Lars Buitinck * Luis Pedro Coelho + * Johann Cohen-Tanugi * David Cournapeau * dougal + * Ali Ebrahim + * endolith + * Bj?rn Forsman + * Robert Gantner + * Sebastian Gassner + * Christoph Gohlke * Ralf Gommers * Yaroslav Halchenko * Charles Harris * Jonathan Helmus + * Andreas Hilboll + * Marc Honnorat + * Jonathan Hunt + * Maxim Ivanov + * Thouis (Ray) Jones * Christopher Kuster + * Josh Lawrence + * Denis Laxalde + * Travis Oliphant * Joonas Paalasmaa + * Fabian Pedregosa * Josef Perktold * Gavin Price + * Jim Radford + * Andrew Schein + * Skipper Seabold * Jacob Silterra + * Scott Sinclair * Alexis Tabary + * Martin Teichmann * Matt Terry + * Nicky van Foreest + * Jacob Vanderplas * Patrick Varilly + * Pauli Virtanen * Nils Wagner + * Darryl Wally + * Stefan van der Walt * Liming Wang + * David Warde-Farley + * Warren Weckesser * Sebastian Werk + * Mike Wimmer + * Tony S Yu + A total of 55 people contributed to this release. People with a "+" by their names contributed a patch for the first time. -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Tue Aug 14 00:32:42 2012 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 13 Aug 2012 22:32:42 -0600 Subject: [Numpy-discussion] Slow divide of int64? In-Reply-To: References: Message-ID: On Sat, Aug 11, 2012 at 6:36 PM, Matthew Brett wrote: > Hi, > > A friend of mine just pointed out that dividing by int64 is > considerably slower than multiplying in numpy: > > > > gives (64 bit Debian Intel system, numpy trunk): > > Mul32 2.71295905113 > Div32 6.61985301971 > Mul64 2.78101611137 > Div64 22.8217148781 > > with similar values for numpy 1.5.1. > > Crude testing with Matlab and Octave suggests they do not seem to have > this same difference: > > >> divtest > Mul32 4.300662 > Div32 5.638622 > Mul64 7.894490 > Div64 18.121182 > > octave:2> divtest > Mul32 3.960577 > Div32 6.553704 > Mul64 7.268324 > Div64 13.670760 > > (files attached) > > Is there something specific about division in numpy that would cause > this slowdown? > > Numpy is doing an integer divide unless you are using Python 3.x. The np.true_divide ufunc will speed things up a bit. I'm not sure what Matlab/Octave are doing for division in this case. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From ondrej.certik at gmail.com Tue Aug 14 00:34:38 2012 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Mon, 13 Aug 2012 21:34:38 -0700 Subject: [Numpy-discussion] how to use numpy-vendor Message-ID: Hi, How should one use the "vendor" repository (https://github.com/numpy/vendor) in Wine? Should I put the binaries into .wine/drive_c/Python25/libs/, or somewhere else? I've search all mailinglists and I didn't find any information on it. 
I vaguely remember that somebody mentioned it somewhere, but I am not able to find it. Once I understand it, I'll send a PR updating the README. I've played with OpenBlas and managed to compile numpy with it on linux, following the tutorial [1] and it works, so at least on linux it's clear to me. In wine, since all the binaries are .a files, the only way to check that it works is to install it and check that things like eigh() are much faster. Is there some other way? On linux using openblas as an .so library, I just do "ldd" and all is clear. Ondrej [1] http://www.der-schnorz.de/2012/06/optimized-linear-algebra-and-numpyscipy/ From charlesr.harris at gmail.com Tue Aug 14 00:49:57 2012 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 13 Aug 2012 22:49:57 -0600 Subject: [Numpy-discussion] Slow divide of int64? In-Reply-To: References: Message-ID: On Mon, Aug 13, 2012 at 10:32 PM, Charles R Harris < charlesr.harris at gmail.com> wrote: > > > On Sat, Aug 11, 2012 at 6:36 PM, Matthew Brett wrote: > >> Hi, >> >> A friend of mine just pointed out that dividing by int64 is >> considerably slower than multiplying in numpy: >> >> >> >> gives (64 bit Debian Intel system, numpy trunk): >> >> Mul32 2.71295905113 >> Div32 6.61985301971 >> Mul64 2.78101611137 >> Div64 22.8217148781 >> >> with similar values for numpy 1.5.1. >> >> Crude testing with Matlab and Octave suggests they do not seem to have >> this same difference: >> >> >> divtest >> Mul32 4.300662 >> Div32 5.638622 >> Mul64 7.894490 >> Div64 18.121182 >> >> octave:2> divtest >> Mul32 3.960577 >> Div32 6.553704 >> Mul64 7.268324 >> Div64 13.670760 >> >> (files attached) >> >> Is there something specific about division in numpy that would cause >> this slowdown? >> >> > Numpy is doing an integer divide unless you are using Python 3.x. The > np.true_divide ufunc will speed things up a bit. I'm not sure what > Matlab/Octave are doing for division in this case. > > For int64: In [23]: timeit multiply(a, b) 100000 loops, best of 3: 3.31 us per loop In [24]: timeit true_divide(a, b) 100000 loops, best of 3: 9.35 us per loop > Chuck > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Tue Aug 14 06:06:51 2012 From: cournape at gmail.com (David Cournapeau) Date: Tue, 14 Aug 2012 11:06:51 +0100 Subject: [Numpy-discussion] how to use numpy-vendor In-Reply-To: References: Message-ID: Hi Ondrej, On Tue, Aug 14, 2012 at 5:34 AM, Ond?ej ?ert?k wrote: > Hi, > > How should one use the "vendor" repository (https://github.com/numpy/vendor) > in Wine? Should I put the binaries into .wine/drive_c/Python25/libs/, > or somewhere else? > I've search all mailinglists and I didn't find any information on it. > I vaguely remember > that somebody mentioned it somewhere, but I am not able to find it. > Once I understand it, > I'll send a PR updating the README. There is no information on vendor: that's a repo I set up to avoid polluting the main repo with all the binary stuff that used to be in SVN. The principle is to put binaries used to *build* numpy, but we don't put anything there for end-users. What binaries do you need to put there ? Numpy binaries are usually put on sourceforge (although I would be more than happy to have a suggestion for a better way because uploading on sourceforge is the very definition of pain). 
David From njs at pobox.com Tue Aug 14 06:22:09 2012 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 14 Aug 2012 11:22:09 +0100 Subject: [Numpy-discussion] how to use numpy-vendor In-Reply-To: References: Message-ID: On Tue, Aug 14, 2012 at 11:06 AM, David Cournapeau wrote: > Hi Ondrej, > > On Tue, Aug 14, 2012 at 5:34 AM, Ond?ej ?ert?k wrote: >> Hi, >> >> How should one use the "vendor" repository (https://github.com/numpy/vendor) >> in Wine? Should I put the binaries into .wine/drive_c/Python25/libs/, >> or somewhere else? >> I've search all mailinglists and I didn't find any information on it. >> I vaguely remember >> that somebody mentioned it somewhere, but I am not able to find it. >> Once I understand it, >> I'll send a PR updating the README. > > There is no information on vendor: that's a repo I set up to avoid > polluting the main repo with all the binary stuff that used to be in > SVN. The principle is to put binaries used to *build* numpy, but we > don't put anything there for end-users. > > What binaries do you need to put there ? Numpy binaries are usually > put on sourceforge (although I would be more than happy to have a > suggestion for a better way because uploading on sourceforge is the > very definition of pain). I think he's asking how to use the binaries in numpy-vendor to build a release version of numpy. -n From d.s.seljebotn at astro.uio.no Tue Aug 14 06:33:02 2012 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Tue, 14 Aug 2012 12:33:02 +0200 Subject: [Numpy-discussion] how to use numpy-vendor In-Reply-To: References: Message-ID: <502A295E.3020005@astro.uio.no> On 08/14/2012 06:34 AM, Ond?ej ?ert?k wrote: > Hi, > > How should one use the "vendor" repository (https://github.com/numpy/vendor) > in Wine? Should I put the binaries into .wine/drive_c/Python25/libs/, > or somewhere else? > I've search all mailinglists and I didn't find any information on it. > I vaguely remember > that somebody mentioned it somewhere, but I am not able to find it. > Once I understand it, > I'll send a PR updating the README. > > > I've played with OpenBlas and managed to compile numpy with it on linux, > following the tutorial [1] and it works, so at least on linux it's clear to me. One thing to be aware of with OpenBlas is that it is *very* tuned to the CPU at hand. As in, every CPU has hand-coded *assembly* and the makefile more or less probes for which specific CPU generation from which vendor you have and compiles and link the corresponding assembly file. So you may have to take some care that you don't compile and ship a version that breaks if you don't have SSE3 installed etc... (Unless OpenBlas has changed recently. I'm not saying I'm right, I'm saying it should be looked into.) Dag > > In wine, since all the binaries are .a files, the only way to check > that it works is to > install it and check that things like eigh() are much faster. Is there > some other way? > On linux using openblas as an .so library, I just do "ldd" and all is clear. 
> > Ondrej > > > [1] http://www.der-schnorz.de/2012/06/optimized-linear-algebra-and-numpyscipy/ > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From cournape at gmail.com Tue Aug 14 06:43:44 2012 From: cournape at gmail.com (David Cournapeau) Date: Tue, 14 Aug 2012 11:43:44 +0100 Subject: [Numpy-discussion] how to use numpy-vendor In-Reply-To: References: Message-ID: On Tue, Aug 14, 2012 at 11:22 AM, Nathaniel Smith wrote: > On Tue, Aug 14, 2012 at 11:06 AM, David Cournapeau wrote: >> Hi Ondrej, >> >> On Tue, Aug 14, 2012 at 5:34 AM, Ond?ej ?ert?k wrote: >>> Hi, >>> >>> How should one use the "vendor" repository (https://github.com/numpy/vendor) >>> in Wine? Should I put the binaries into .wine/drive_c/Python25/libs/, >>> or somewhere else? >>> I've search all mailinglists and I didn't find any information on it. >>> I vaguely remember >>> that somebody mentioned it somewhere, but I am not able to find it. >>> Once I understand it, >>> I'll send a PR updating the README. >> >> There is no information on vendor: that's a repo I set up to avoid >> polluting the main repo with all the binary stuff that used to be in >> SVN. The principle is to put binaries used to *build* numpy, but we >> don't put anything there for end-users. >> >> What binaries do you need to put there ? Numpy binaries are usually >> put on sourceforge (although I would be more than happy to have a >> suggestion for a better way because uploading on sourceforge is the >> very definition of pain). > > I think he's asking how to use the binaries in numpy-vendor to build a > release version of numpy. Hm, good point, I don't know why I read putting .wine stuff into vendor instead of the opposite. Anyway, the way to use the binaries is to put them in some known location, e.g. C:\local ($WINEPREFIX/drive_c/local for wine), and copy the nosse/sse2/sse3 directories in there. For example: C:\local\lib\yop\nosse C:\local\lib\yop\sse2 ... This is then referred through env by the pavement script (see https://github.com/numpy/numpy/blob/master/pavement.py#L143). Renaming yop to atlas would be a good idea, don't know why I let that non-descriptive name in there. Manually, you can just do something like "ATLAS=C:\local\lib\yop\sse2 python setup.py build", but being careful about how env variables are passed between shell and wine (don't remember the details). Note that the nosse is not ATLAS, but straight netlib libs, which is why in that case you need to use BLAS=... LAPACK=... I would strongly suggest not to use openblas for this release, because of all the issues related to CPU tuning. We could certainly update a bit what we have in there, but building windows binaries is big enough of a pain, that you don't want to do everything at once I think, especially testing/building blas on windows is very time consuming. David From nouiz at nouiz.org Tue Aug 14 09:32:44 2012 From: nouiz at nouiz.org (=?ISO-8859-1?Q?Fr=E9d=E9ric_Bastien?=) Date: Tue, 14 Aug 2012 09:32:44 -0400 Subject: [Numpy-discussion] how to use numpy-vendor In-Reply-To: <502A295E.3020005@astro.uio.no> References: <502A295E.3020005@astro.uio.no> Message-ID: On Tue, Aug 14, 2012 at 6:33 AM, Dag Sverre Seljebotn wrote: > On 08/14/2012 06:34 AM, Ond?ej ?ert?k wrote: >> Hi, >> >> How should one use the "vendor" repository (https://github.com/numpy/vendor) >> in Wine? Should I put the binaries into .wine/drive_c/Python25/libs/, >> or somewhere else? 
>> I've search all mailinglists and I didn't find any information on it. >> I vaguely remember >> that somebody mentioned it somewhere, but I am not able to find it. >> Once I understand it, >> I'll send a PR updating the README. >> >> >> I've played with OpenBlas and managed to compile numpy with it on linux, >> following the tutorial [1] and it works, so at least on linux it's clear to me. > > One thing to be aware of with OpenBlas is that it is *very* tuned to the > CPU at hand. As in, every CPU has hand-coded *assembly* and the makefile > more or less probes for which specific CPU generation from which vendor > you have and compiles and link the corresponding assembly file. So you > may have to take some care that you don't compile and ship a version > that breaks if you don't have SSE3 installed etc... > > (Unless OpenBlas has changed recently. I'm not saying I'm right, I'm > saying it should be looked into.) OpenBlas as the option to build all version and select at run time the right one. MKL do the same. But I never tested it. So I don't know how well it work. The other option would be to force an older CPU that support only sse2. ATLAS do that by default. OpenBLAS select the best one for the computer where it is being built by default. Fred From aron at ahmadia.net Tue Aug 14 09:47:48 2012 From: aron at ahmadia.net (Aron Ahmadia) Date: Tue, 14 Aug 2012 16:47:48 +0300 Subject: [Numpy-discussion] minor threads-related issue in numpy-release 1.6.2 and numpy-dev Message-ID: Hi all, Installing numpy 1.6.2 against a Python interpreter built with the --without threads currently fails due to missing references to PyGILState_Ensure and PyGILState_Release. The references appear to be coming from the following code in nditer.c.src: NPY_NO_EXPORT void NpyIter_DebugPrint(NpyIter *iter) { // PyGILState_STATE gilstate = PyGILState_Ensure(); // PyGILState_Release(gilstate); } Since this is debugging code, I'm guessing it doesn't get called very frequently, and I could probably just #ifdef it out or use the NPY macros for grabbing the GIL for a non-threaded build: (NPY_ALLOW_C_API and NPY_DISABLE_C_API), but I don't understand why it's grabbing the GIL in the first place. Where is it calling into the interpreter? Does it need the GIL for something else? I'm hesitant to touch this code and issue a pull request until I understand what it's trying to do. Heading on over to the master branch at numpy/numpy, I'm starting to notice more unprotected PyGILState references creeping into the development code. Even the Python developers seem to think that nobody is using --without-threaded, so I'm not going to make a strong case for being more careful, but I do want to point it out in case you want to keep the numpy sources correct for this case. Thanks, Aron -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Tue Aug 14 10:13:44 2012 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 14 Aug 2012 08:13:44 -0600 Subject: [Numpy-discussion] minor threads-related issue in numpy-release 1.6.2 and numpy-dev In-Reply-To: References: Message-ID: On Tue, Aug 14, 2012 at 7:47 AM, Aron Ahmadia wrote: > Hi all, > > Installing numpy 1.6.2 against a Python interpreter built with the > --without threads currently fails due to missing references > to PyGILState_Ensure and PyGILState_Release. 
The references appear to be > coming from the following code in nditer.c.src: > > NPY_NO_EXPORT void > NpyIter_DebugPrint(NpyIter *iter) > { > // > PyGILState_STATE gilstate = PyGILState_Ensure(); > // > PyGILState_Release(gilstate); > } > > Since this is debugging code, I'm guessing it doesn't get called very > frequently, and I could probably just #ifdef it out or use the NPY macros > for grabbing the GIL for a non-threaded build: (NPY_ALLOW_C_API > and NPY_DISABLE_C_API), but I don't understand why it's grabbing the GIL in > the first place. Where is it calling into the interpreter? Does it need > the GIL for something else? I'm hesitant to touch this code and issue a > pull request until I understand what it's trying to do. > > Heading on over to the master branch at numpy/numpy, I'm starting to > notice more unprotected PyGILState references creeping into the development > code. Even the Python developers seem to think that nobody is using > --without-threaded, so I'm not going to make a strong case for being more > careful, but I do want to point it out in case you want to keep the numpy > sources correct for this case. > > Some parts of the numpy code release the GIL, so it needs to be reacquired in order to call the Python C_API. A quick look shows a call to PyObject_Print in the function and there may be other such calls. The right fix is probably to use a couple of ifdefs if there is an easy way to determine the Python interpreter configuration ... WITH_THREAD seems to be the right flag. #ifdef WITH_THREAD PyAPI_FUNC(void) PyThreadState_DeleteCurrent(void); #endif Could you give that a shot? TIA, -------------- next part -------------- An HTML attachment was scrubbed... URL: From ondrej.certik at gmail.com Tue Aug 14 10:22:17 2012 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Tue, 14 Aug 2012 07:22:17 -0700 Subject: [Numpy-discussion] how to use numpy-vendor In-Reply-To: References: <502A295E.3020005@astro.uio.no> Message-ID: On Tue, Aug 14, 2012 at 6:32 AM, Fr?d?ric Bastien wrote: > On Tue, Aug 14, 2012 at 6:33 AM, Dag Sverre Seljebotn > wrote: >> On 08/14/2012 06:34 AM, Ond?ej ?ert?k wrote: >>> Hi, >>> >>> How should one use the "vendor" repository (https://github.com/numpy/vendor) >>> in Wine? Should I put the binaries into .wine/drive_c/Python25/libs/, >>> or somewhere else? >>> I've search all mailinglists and I didn't find any information on it. >>> I vaguely remember >>> that somebody mentioned it somewhere, but I am not able to find it. >>> Once I understand it, >>> I'll send a PR updating the README. >>> >>> >>> I've played with OpenBlas and managed to compile numpy with it on linux, >>> following the tutorial [1] and it works, so at least on linux it's clear to me. >> >> One thing to be aware of with OpenBlas is that it is *very* tuned to the >> CPU at hand. As in, every CPU has hand-coded *assembly* and the makefile >> more or less probes for which specific CPU generation from which vendor >> you have and compiles and link the corresponding assembly file. So you >> may have to take some care that you don't compile and ship a version >> that breaks if you don't have SSE3 installed etc... >> >> (Unless OpenBlas has changed recently. I'm not saying I'm right, I'm >> saying it should be looked into.) > > OpenBlas as the option to build all version and select at run time the > right one. MKL do the same. > > But I never tested it. So I don't know how well it work. The other > option would be to force an older CPU that support only sse2. 
ATLAS do > that by default. OpenBLAS select the best one for the computer where > it is being built by default. For the record, I don't plan to ship openblas, I was just playing with it to make sure I understand how things work. Ondrej From njs at pobox.com Tue Aug 14 10:23:59 2012 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 14 Aug 2012 15:23:59 +0100 Subject: [Numpy-discussion] minor threads-related issue in numpy-release 1.6.2 and numpy-dev In-Reply-To: References: Message-ID: On Tue, Aug 14, 2012 at 3:13 PM, Charles R Harris wrote: > > > On Tue, Aug 14, 2012 at 7:47 AM, Aron Ahmadia wrote: >> >> Hi all, >> >> Installing numpy 1.6.2 against a Python interpreter built with the >> --without threads currently fails due to missing references to >> PyGILState_Ensure and PyGILState_Release. The references appear to be >> coming from the following code in nditer.c.src: >> >> NPY_NO_EXPORT void >> NpyIter_DebugPrint(NpyIter *iter) >> { >> // >> PyGILState_STATE gilstate = PyGILState_Ensure(); >> // >> PyGILState_Release(gilstate); >> } >> >> Since this is debugging code, I'm guessing it doesn't get called very >> frequently, and I could probably just #ifdef it out or use the NPY macros >> for grabbing the GIL for a non-threaded build: (NPY_ALLOW_C_API and >> NPY_DISABLE_C_API), but I don't understand why it's grabbing the GIL in the >> first place. Where is it calling into the interpreter? Does it need the >> GIL for something else? I'm hesitant to touch this code and issue a pull >> request until I understand what it's trying to do. >> >> Heading on over to the master branch at numpy/numpy, I'm starting to >> notice more unprotected PyGILState references creeping into the development >> code. Even the Python developers seem to think that nobody is using >> --without-threaded, so I'm not going to make a strong case for being more >> careful, but I do want to point it out in case you want to keep the numpy >> sources correct for this case. >> > > Some parts of the numpy code release the GIL, so it needs to be reacquired > in order to call the Python C_API. A quick look shows a call to > PyObject_Print in the function and there may be other such calls. The right > fix is probably to use a couple of ifdefs if there is an easy way to > determine the Python interpreter configuration ... WITH_THREAD seems > to be the right flag. > > #ifdef WITH_THREAD > PyAPI_FUNC(void) PyThreadState_DeleteCurrent(void); > #endif > > Could you give that a shot? TIA, The NPY_*_THREADS macros appear to already check for WITH_THREAD automatically. I think we'd be happy to apply a patch that cleans up numpy's direct PyGIL_* calls to use the NPY_*_THREADS macros instead. -n From ondrej.certik at gmail.com Tue Aug 14 10:26:28 2012 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Tue, 14 Aug 2012 07:26:28 -0700 Subject: [Numpy-discussion] how to use numpy-vendor In-Reply-To: References: Message-ID: On Tue, Aug 14, 2012 at 3:43 AM, David Cournapeau wrote: > On Tue, Aug 14, 2012 at 11:22 AM, Nathaniel Smith wrote: >> On Tue, Aug 14, 2012 at 11:06 AM, David Cournapeau wrote: >>> Hi Ondrej, >>> >>> On Tue, Aug 14, 2012 at 5:34 AM, Ond?ej ?ert?k wrote: >>>> Hi, >>>> >>>> How should one use the "vendor" repository (https://github.com/numpy/vendor) >>>> in Wine? Should I put the binaries into .wine/drive_c/Python25/libs/, >>>> or somewhere else? >>>> I've search all mailinglists and I didn't find any information on it. 
>>>> I vaguely remember >>>> that somebody mentioned it somewhere, but I am not able to find it. >>>> Once I understand it, >>>> I'll send a PR updating the README. >>> >>> There is no information on vendor: that's a repo I set up to avoid >>> polluting the main repo with all the binary stuff that used to be in >>> SVN. The principle is to put binaries used to *build* numpy, but we >>> don't put anything there for end-users. >>> >>> What binaries do you need to put there ? Numpy binaries are usually >>> put on sourceforge (although I would be more than happy to have a >>> suggestion for a better way because uploading on sourceforge is the >>> very definition of pain). >> >> I think he's asking how to use the binaries in numpy-vendor to build a >> release version of numpy. Yes. > > Hm, good point, I don't know why I read putting .wine stuff into > vendor instead of the opposite. > > Anyway, the way to use the binaries is to put them in some known > location, e.g. C:\local ($WINEPREFIX/drive_c/local for wine), and copy > the nosse/sse2/sse3 directories in there. For example: > > C:\local\lib\yop\nosse > C:\local\lib\yop\sse2 > ... > > This is then referred through env by the pavement script (see > https://github.com/numpy/numpy/blob/master/pavement.py#L143). Renaming > yop to atlas would be a good idea, don't know why I let that > non-descriptive name in there. I'll send a PR. Got it, thanks for your help. I'll also send a PR to the "vendor" repository, so that it's clear how to actually use it with NumPy. > > Manually, you can just do something like "ATLAS=C:\local\lib\yop\sse2 > python setup.py build", but being careful about how env variables are > passed between shell and wine (don't remember the details). Note that Right. > the nosse is not ATLAS, but straight netlib libs, which is why in that > case you need to use BLAS=... LAPACK=... > > I would strongly suggest not to use openblas for this release, because > of all the issues related to CPU tuning. We could certainly update a > bit what we have in there, but building windows binaries is big enough > of a pain, that you don't want to do everything at once I think, > especially testing/building blas on windows is very time consuming. Absolutely, I don't plan to use nor ship openblase. My apologies for the confusion. I was just using it on linux to understand how to make numpy use it (with the ATLAS, BLAS and LAPACK env variables). Ondrej From aron at ahmadia.net Tue Aug 14 10:27:09 2012 From: aron at ahmadia.net (Aron Ahmadia) Date: Tue, 14 Aug 2012 17:27:09 +0300 Subject: [Numpy-discussion] minor threads-related issue in numpy-release 1.6.2 and numpy-dev In-Reply-To: References: Message-ID: > Some parts of the numpy code release the GIL, so it needs to be reacquired > in order to call the Python C_API. A quick look shows a call to > PyObject_Print in the function and there may be other such calls. The right > fix is probably to use a couple of ifdefs if there is an easy way to > determine the Python interpreter configuration ... WITH_THREAD seems > to be the right flag. > I totally missed the PyObject_Print calls on read-through. Okay, this makes sense now. I'll put together a patch the next time I'm waiting on a build :) A -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ondrej.certik at gmail.com Tue Aug 14 10:42:59 2012 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Tue, 14 Aug 2012 07:42:59 -0700 Subject: [Numpy-discussion] how to use numpy-vendor In-Reply-To: References: Message-ID: On Tue, Aug 14, 2012 at 7:26 AM, Ond?ej ?ert?k wrote: > On Tue, Aug 14, 2012 at 3:43 AM, David Cournapeau wrote: [...] >> This is then referred through env by the pavement script (see >> https://github.com/numpy/numpy/blob/master/pavement.py#L143). Renaming >> yop to atlas would be a good idea, don't know why I let that >> non-descriptive name in there. > > I'll send a PR: https://github.com/numpy/numpy/pull/386 Ondrej From ralf.gommers at gmail.com Tue Aug 14 15:21:53 2012 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 14 Aug 2012 21:21:53 +0200 Subject: [Numpy-discussion] A step toward merging odeint and ode In-Reply-To: <1344768110.26417.6.camel@amilo.coursju> References: <1344768110.26417.6.camel@amilo.coursju> Message-ID: On Sun, Aug 12, 2012 at 12:41 PM, Fabrice Silva wrote: > I made a pull request [1] to integrate the LSODA solver that is used in > odeint into the modular scipy.integrate.ode generic class. In a similar > way as for vode, it just wraps the already present lsoda.f file > (see .pyf file) and exposes it within an IntegratorBase subclass > adjusting the coefficients before calling lsoda. > Does that mean that odeint can be made a wrapper around lsoda and that the odepack static extension can be completely removed? Ralf > > Note that lsoda provide automatic switching between stiff and non-stiff > methods, feature that is not present in the available vode integrator. > > Final note: tests are ok! > > Regards, > > [1] https://github.com/scipy/scipy/pull/273 > -- > Fabrice Silva > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ondrej.certik at gmail.com Tue Aug 14 16:02:11 2012 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Tue, 14 Aug 2012 13:02:11 -0700 Subject: [Numpy-discussion] how to use numpy-vendor In-Reply-To: References: Message-ID: Hi, I've uploaded the binaries here: https://sourceforge.net/projects/numpy/files/NumPy/1.7.0beta/ The only thing that's missing are Mac binaries, otherwise everything else is there. Here is the full log from the build (you have to click on "View Raw" as the log is long): https://gist.github.com/3352057 What is the best way to test, that they indeed were built with the atlas from the "vendor" repository? 
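(One possible check, as a sketch: ``numpy.show_config()`` prints the
BLAS/LAPACK configuration recorded at build time, so the library directories
from the vendor tree should show up there.)

    import numpy as np
    # prints sections such as blas_opt_info / lapack_opt_info; the
    # library_dirs entries should point at the C:\local\lib\yop\*
    # directories that were used during the build
    np.show_config()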
The relevant part of the log says: [127.0.0.1:2222] out: lapack_info: [127.0.0.1:2222] out: FOUND: [127.0.0.1:2222] out: libraries = ['lapack'] [127.0.0.1:2222] out: library_dirs = ['C:\\local\\lib\\yop\\nosse'] [127.0.0.1:2222] out: language = f77 [127.0.0.1:2222] out: FOUND: [127.0.0.1:2222] out: libraries = ['lapack', 'blas'] [127.0.0.1:2222] out: library_dirs = ['C:\\local\\lib\\yop\\nosse'] [127.0.0.1:2222] out: define_macros = [('NO_ATLAS_INFO', 1)] [127.0.0.1:2222] out: language = f77 and [127.0.0.1:2222] out: FOUND: [127.0.0.1:2222] out: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] [127.0.0.1:2222] out: library_dirs = ['C:\\local\\lib\\yop\\sse2'] [127.0.0.1:2222] out: language = f77 [127.0.0.1:2222] out: define_macros = [('NO_ATLAS_INFO', -1)] and [127.0.0.1:2222] out: FOUND: [127.0.0.1:2222] out: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] [127.0.0.1:2222] out: library_dirs = ['C:\\local\\lib\\yop\\sse3'] [127.0.0.1:2222] out: language = f77 [127.0.0.1:2222] out: define_macros = [('NO_ATLAS_INFO', -1)] So that seems that it found it, correct? Ondrej From silva at lma.cnrs-mrs.fr Wed Aug 15 04:42:18 2012 From: silva at lma.cnrs-mrs.fr (Fabrice Silva) Date: Wed, 15 Aug 2012 10:42:18 +0200 Subject: [Numpy-discussion] A step toward merging odeint and ode In-Reply-To: References: <1344768110.26417.6.camel@amilo.coursju> Message-ID: <1345020138.19573.11.camel@amilo.coursju> Le mardi 14 ao?t 2012 ? 21:21 +0200, Ralf Gommers a ?crit : > On Sun, Aug 12, 2012, Fabrice Silva wrote: > I made a pull request [1] to integrate the LSODA solver that > is used in odeint into the modular scipy.integrate.ode generic > class. In a similar way as for vode, it just wraps the already > present lsoda.f file (see .pyf file) and exposes it within an > IntegratorBase subclass adjusting the coefficients before > calling lsoda. > > Does that mean that odeint can be made a wrapper around lsoda and that > the odepack static extension can be completely removed? Hi Ralf, The pull request allows to run the integration using the object-oriented interface ode, with the same solver than the odeint interface uses, i.e. lsoda, extending the integrators available for the object-oriented interface. As I understand the scipy.integrate architecture, we are by now building: * the odepack library, which has all the fortran sources required by lsoda and vode at least. * the _odepack extension, which defines the _odepack module needed by odeint. This latter would be removable, and odeint a wrapper around the lsoda pyf'ed function. I suppose you are talking about the _odepack extension, am I wrong? -- Fabrice Silva From chuang.yi at bankofamerica.com Wed Aug 15 10:59:35 2012 From: chuang.yi at bankofamerica.com (Yi, Chuang) Date: Wed, 15 Aug 2012 10:59:35 -0400 Subject: [Numpy-discussion] QQ Plot in Python and Noncentral Chisquare Message-ID: <7FCA43B19DF30443BB8C836A13B1001C4CA61082@smtp_mail.bankofamerica.com> Hello, I am a new user of Python. I have a couple of questions that would appreciate your guidance! * QQ Plot in Python: I could not find any functions in either Numpy or Scipy to do QQ Plot of two vectors of data. For example, in Matlab, one could just call qqplot(X,Y), which will generate the QQ plot of vector X against vector Y. Would you please let me know which python package has similar functionality? * Noncentral Chisquare: it looks to me that in Numpy and Scipy, the noncentral chisquare distribution only allows Integer degree of freedom. 
For example, the following code will produce an error message. In R, however, it also allows non-integer degree of freedom. Would you please let me know which python package has similar functionality? * data = numpy.random.noncentral_chisquare(0.5, 2, 100) * ValueError: df <= 0 Thank you very much, Regards, Chuang ---------------------------------------------------------------------- This message w/attachments (message) is intended solely for the use of the intended recipient(s) and may contain information that is privileged, confidential or proprietary. If you are not an intended recipient, please notify the sender, and then please delete and destroy all copies and attachments, and be advised that any review or dissemination of, or the taking of any action in reliance on, the information contained in or attached to this message is prohibited. Unless specifically indicated, this message is not an offer to sell or a solicitation of any investment products or other financial product or service, an official confirmation of any transaction, or an official statement of Sender. Subject to applicable law, Sender may intercept, monitor, review and retain e-communications (EC) traveling through its networks/systems and may produce any such EC to regulators, law enforcement, in litigation and as required by law. The laws of the country of each sender/recipient may impact the handling of EC, and EC may be archived, supervised and produced in countries other than the country in which you are located. This message cannot be guaranteed to be secure or free of errors or viruses. References to "Sender" are references to any subsidiary of Bank of America Corporation. Securities and Insurance Products: * Are Not FDIC Insured * Are Not Bank Guaranteed * May Lose Value * Are Not a Bank Deposit * Are Not a Condition to Any Banking Service or Activity * Are Not Insured by Any Federal Government Agency. Attachments that are part of this EC may have additional important disclosures and disclaimers, which you should read. This message is subject to terms available at the following link: http://www.bankofamerica.com/emaildisclaimer. By messaging with Sender you consent to the foregoing. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Wed Aug 15 13:46:18 2012 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 15 Aug 2012 19:46:18 +0200 Subject: [Numpy-discussion] QQ Plot in Python and Noncentral Chisquare In-Reply-To: <7FCA43B19DF30443BB8C836A13B1001C4CA61082@smtp_mail.bankofamerica.com> References: <7FCA43B19DF30443BB8C836A13B1001C4CA61082@smtp_mail.bankofamerica.com> Message-ID: On Wed, Aug 15, 2012 at 4:59 PM, Yi, Chuang wrote: > Hello,**** > > ** ** > > I am a new user of Python. I have a couple of questions that would > appreciate your guidance!**** > > ** ** > > - QQ Plot in Python: I could not find any functions in either Numpy or > Scipy to do QQ Plot of two vectors of data. For example, in Matlab, one > could just call qqplot(X,Y), which will generate the QQ plot of vector X > against vector Y. Would you please let me know which python package has > similar functionality? > > statsmodels: http://statsmodels.sourceforge.net/devel/generated/statsmodels.graphics.gofplots.qqplot.html Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ralf.gommers at gmail.com Wed Aug 15 14:54:18 2012 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 15 Aug 2012 20:54:18 +0200 Subject: [Numpy-discussion] A step toward merging odeint and ode In-Reply-To: <1345020138.19573.11.camel@amilo.coursju> References: <1344768110.26417.6.camel@amilo.coursju> <1345020138.19573.11.camel@amilo.coursju> Message-ID: On Wed, Aug 15, 2012 at 10:42 AM, Fabrice Silva wrote: > Le mardi 14 ao?t 2012 ? 21:21 +0200, Ralf Gommers a ?crit : > > On Sun, Aug 12, 2012, Fabrice Silva wrote: > > I made a pull request [1] to integrate the LSODA solver that > > is used in odeint into the modular scipy.integrate.ode generic > > class. In a similar way as for vode, it just wraps the already > > present lsoda.f file (see .pyf file) and exposes it within an > > IntegratorBase subclass adjusting the coefficients before > > calling lsoda. > > > > Does that mean that odeint can be made a wrapper around lsoda and that > > the odepack static extension can be completely removed? > > Hi Ralf, > The pull request allows to run the integration using the object-oriented > interface ode, with the same solver than the odeint interface uses, i.e. > lsoda, extending the integrators available for the object-oriented > interface. > > As I understand the scipy.integrate architecture, we are by now > building: > * the odepack library, which has all the fortran sources required by > lsoda and vode at least. > * the _odepack extension, which defines the _odepack module needed by > odeint. > > This latter would be removable, and odeint a wrapper around the lsoda > pyf'ed function. I suppose you are talking about the _odepack extension, > am I wrong? > I was mixing it up a bit, but yes: the _odepack extension and the C source for it. Not necessary to do that at once I guess, but wrapping the same function twice is once too many. And forgot in my first email: nice PR, looks good to me. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Wed Aug 15 21:59:12 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 15 Aug 2012 21:59:12 -0400 Subject: [Numpy-discussion] QQ Plot in Python and Noncentral Chisquare In-Reply-To: <7FCA43B19DF30443BB8C836A13B1001C4CA61082@smtp_mail.bankofamerica.com> References: <7FCA43B19DF30443BB8C836A13B1001C4CA61082@smtp_mail.bankofamerica.com> Message-ID: On Wed, Aug 15, 2012 at 10:59 AM, Yi, Chuang wrote: > Hello, > > > > I am a new user of Python. I have a couple of questions that would > appreciate your guidance! > > > > QQ Plot in Python: I could not find any functions in either Numpy or Scipy > to do QQ Plot of two vectors of data. For example, in Matlab, one could just > call qqplot(X,Y), which will generate the QQ plot of vector X against vector > Y. Would you please let me know which python package has similar > functionality? > Noncentral Chisquare: it looks to me that in Numpy and Scipy, the noncentral > chisquare distribution only allows Integer degree of freedom. For example, > the following code will produce an error message. In R, however, it also > allows non-integer degree of freedom. Would you please let me know which > python package has similar functionality? 
> > data = numpy.random.noncentral_chisquare(0.5, 2, 100) > ValueError: df <= 0 restriction is >1 not integer >>> np.random.noncentral_chisquare(1, 2, 100) Traceback (most recent call last): File "", line 1, in File "mtrand.pyx", line 1957, in mtrand.RandomState.noncentral_chisquare (numpy\random\mtrand\mtrand.c:9847) ValueError: df <= 0 >>> np.random.noncentral_chisquare(1.0001, 2, 100) array([ 5.75083221, 1.08050491, 0.69267684, 1.37806056, 2.39899 , 0.31415666, 2.9202386 I don't know if the >1 restriction is justified or could be dropped. Josef > > > > Thank you very much, > > > > Regards, > > Chuang > > ________________________________ > This message w/attachments (message) is intended solely for the use of the > intended recipient(s) and may contain information that is privileged, > confidential or proprietary. If you are not an intended recipient, please > notify the sender, and then please delete and destroy all copies and > attachments, and be advised that any review or dissemination of, or the > taking of any action in reliance on, the information contained in or > attached to this message is prohibited. > Unless specifically indicated, this message is not an offer to sell or a > solicitation of any investment products or other financial product or > service, an official confirmation of any transaction, or an official > statement of Sender. Subject to applicable law, Sender may intercept, > monitor, review and retain e-communications (EC) traveling through its > networks/systems and may produce any such EC to regulators, law enforcement, > in litigation and as required by law. > The laws of the country of each sender/recipient may impact the handling of > EC, and EC may be archived, supervised and produced in countries other than > the country in which you are located. This message cannot be guaranteed to > be secure or free of errors or viruses. > > References to "Sender" are references to any subsidiary of Bank of America > Corporation. Securities and Insurance Products: * Are Not FDIC Insured * Are > Not Bank Guaranteed * May Lose Value * Are Not a Bank Deposit * Are Not a > Condition to Any Banking Service or Activity * Are Not Insured by Any > Federal Government Agency. Attachments that are part of this EC may have > additional important disclosures and disclaimers, which you should read. > This message is subject to terms available at the following link: > http://www.bankofamerica.com/emaildisclaimer. By messaging with Sender you > consent to the foregoing. > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From josef.pktd at gmail.com Wed Aug 15 22:01:48 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 15 Aug 2012 22:01:48 -0400 Subject: [Numpy-discussion] QQ Plot in Python and Noncentral Chisquare In-Reply-To: References: <7FCA43B19DF30443BB8C836A13B1001C4CA61082@smtp_mail.bankofamerica.com> Message-ID: On Wed, Aug 15, 2012 at 1:46 PM, Ralf Gommers wrote: > > > On Wed, Aug 15, 2012 at 4:59 PM, Yi, Chuang > wrote: >> >> Hello, >> >> >> >> I am a new user of Python. I have a couple of questions that would >> appreciate your guidance! >> >> >> >> QQ Plot in Python: I could not find any functions in either Numpy or Scipy >> to do QQ Plot of two vectors of data. For example, in Matlab, one could just >> call qqplot(X,Y), which will generate the QQ plot of vector X against vector >> Y. 
Would you please let me know which python package has similar >> functionality? > > > statsmodels: > http://statsmodels.sourceforge.net/devel/generated/statsmodels.graphics.gofplots.qqplot.html 1sample version only, I don't know of any 2sample version in python, but maybe a quick Pull Request could add it to statsmodels. Josef > > Ralf > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From silva at lma.cnrs-mrs.fr Thu Aug 16 06:46:42 2012 From: silva at lma.cnrs-mrs.fr (Fabrice Silva) Date: Thu, 16 Aug 2012 12:46:42 +0200 Subject: [Numpy-discussion] A step toward merging odeint and ode In-Reply-To: References: <1344768110.26417.6.camel@amilo.coursju> <1345020138.19573.11.camel@amilo.coursju> Message-ID: <1345114002.3855.5.camel@amilo.coursju> Le mercredi 15 ao?t 2012 ? 20:54 +0200, Ralf Gommers a ?crit : > I was mixing it up a bit, but yes: the _odepack extension and the C > source for it. Not necessary to do that at once I guess, but wrapping > the same function twice is once too many. > > And forgot in my first email: nice PR, looks good to me. OK then, you can found two commits : the first one removes the _odepack extension (and the relative multipack.h, __odepack.h and _odepackmodule.c), replacing it by Python counterparts in the odeint function itself. https://github.com/FabricioS/scipy/commit/02e8a4856f29f4ad438fef2c86a41b266d6a9e6c the second one suggests reverting callback arguments convention: ydot = f(y,t,..) to ode's one: ydot = f(t,y,..) This ones would raise backward compatibility issues but align ordering to the convention defined in the LLNL when designing the ODEPACK. https://github.com/FabricioS/scipy/commit/f867f2b8133d3f6ea47d449bd760a77a7c90394e -- Fabrice Silva From chuang.yi at bankofamerica.com Thu Aug 16 09:05:22 2012 From: chuang.yi at bankofamerica.com (Yi, Chuang) Date: Thu, 16 Aug 2012 09:05:22 -0400 Subject: [Numpy-discussion] QQ Plot in Python and Noncentral Chisquare In-Reply-To: References: <7FCA43B19DF30443BB8C836A13B1001C4CA61082@smtp_mail.bankofamerica.com> Message-ID: <7FCA43B19DF30443BB8C836A13B1001C4CAACB35@smtp_mail.bankofamerica.com> Thank you Josef, df can be less than 1 as long as it is nonnegative: http://stat.ethz.ch/R-manual/R-patched/library/stats/html/Chisquare.html Here is an example in R: > Z0<-rchisq(2,df=0.5,ncp =2) > Z0 [1] 0.5056454 2.0427540 Thanks, Chuang -----Original Message----- From: numpy-discussion-bounces at scipy.org [mailto:numpy-discussion-bounces at scipy.org] On Behalf Of josef.pktd at gmail.com Sent: Wednesday, August 15, 2012 9:59 PM To: Discussion of Numerical Python Subject: Re: [Numpy-discussion] QQ Plot in Python and Noncentral Chisquare On Wed, Aug 15, 2012 at 10:59 AM, Yi, Chuang wrote: > Hello, > > > > I am a new user of Python. I have a couple of questions that would > appreciate your guidance! > > > > QQ Plot in Python: I could not find any functions in either Numpy or Scipy > to do QQ Plot of two vectors of data. For example, in Matlab, one could just > call qqplot(X,Y), which will generate the QQ plot of vector X against vector > Y. Would you please let me know which python package has similar > functionality? > Noncentral Chisquare: it looks to me that in Numpy and Scipy, the noncentral > chisquare distribution only allows Integer degree of freedom. For example, > the following code will produce an error message. In R, however, it also > allows non-integer degree of freedom. 
Would you please let me know which > python package has similar functionality? > > data = numpy.random.noncentral_chisquare(0.5, 2, 100) > ValueError: df <= 0 restriction is >1 not integer >>> np.random.noncentral_chisquare(1, 2, 100) Traceback (most recent call last): File "", line 1, in File "mtrand.pyx", line 1957, in mtrand.RandomState.noncentral_chisquare (numpy\random\mtrand\mtrand.c:9847) ValueError: df <= 0 >>> np.random.noncentral_chisquare(1.0001, 2, 100) array([ 5.75083221, 1.08050491, 0.69267684, 1.37806056, 2.39899 , 0.31415666, 2.9202386 I don't know if the >1 restriction is justified or could be dropped. Josef > > > > Thank you very much, > > > > Regards, > > Chuang > > ________________________________ > This message w/attachments (message) is intended solely for the use of the > intended recipient(s) and may contain information that is privileged, > confidential or proprietary. If you are not an intended recipient, please > notify the sender, and then please delete and destroy all copies and > attachments, and be advised that any review or dissemination of, or the > taking of any action in reliance on, the information contained in or > attached to this message is prohibited. > Unless specifically indicated, this message is not an offer to sell or a > solicitation of any investment products or other financial product or > service, an official confirmation of any transaction, or an official > statement of Sender. Subject to applicable law, Sender may intercept, > monitor, review and retain e-communications (EC) traveling through its > networks/systems and may produce any such EC to regulators, law enforcement, > in litigation and as required by law. > The laws of the country of each sender/recipient may impact the handling of > EC, and EC may be archived, supervised and produced in countries other than > the country in which you are located. This message cannot be guaranteed to > be secure or free of errors or viruses. > > References to "Sender" are references to any subsidiary of Bank of America > Corporation. Securities and Insurance Products: * Are Not FDIC Insured * Are > Not Bank Guaranteed * May Lose Value * Are Not a Bank Deposit * Are Not a > Condition to Any Banking Service or Activity * Are Not Insured by Any > Federal Government Agency. Attachments that are part of this EC may have > additional important disclosures and disclaimers, which you should read. > This message is subject to terms available at the following link: > http://www.bankofamerica.com/emaildisclaimer. By messaging with Sender you > consent to the foregoing. > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > _______________________________________________ NumPy-Discussion mailing list NumPy-Discussion at scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion ---------------------------------------------------------------------- This message w/attachments (message) is intended solely for the use of the intended recipient(s) and may contain information that is privileged, confidential or proprietary. If you are not an intended recipient, please notify the sender, and then please delete and destroy all copies and attachments, and be advised that any review or dissemination of, or the taking of any action in reliance on, the information contained in or attached to this message is prohibited. 
Unless specifically indicated, this message is not an offer to sell or a solicitation of any investment products or other financial product or service, an official confirmation of any transaction, or an official statement of Sender. Subject to applicable law, Sender may intercept, monitor, review and retain e-communications (EC) traveling through its networks/systems and may produce any such EC to regulators, law enforcement, in litigation and as required by law. The laws of the country of each sender/recipient may impact the handling of EC, and EC may be archived, supervised and produced in countries other than the country in which you are located. This message cannot be guaranteed to be secure or free of errors or viruses. References to "Sender" are references to any subsidiary of Bank of America Corporation. Securities and Insurance Products: * Are Not FDIC Insured * Are Not Bank Guaranteed * May Lose Value * Are Not a Bank Deposit * Are Not a Condition to Any Banking Service or Activity * Are Not Insured by Any Federal Government Agency. Attachments that are part of this EC may have additional important disclosures and disclaimers, which you should read. This message is subject to terms available at the following link: http://www.bankofamerica.com/emaildisclaimer. By messaging with Sender you consent to the foregoing. From josef.pktd at gmail.com Thu Aug 16 09:55:08 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 16 Aug 2012 09:55:08 -0400 Subject: [Numpy-discussion] QQ Plot in Python and Noncentral Chisquare In-Reply-To: <7FCA43B19DF30443BB8C836A13B1001C4CAACB35@smtp_mail.bankofamerica.com> References: <7FCA43B19DF30443BB8C836A13B1001C4CA61082@smtp_mail.bankofamerica.com> <7FCA43B19DF30443BB8C836A13B1001C4CAACB35@smtp_mail.bankofamerica.com> Message-ID: On Thu, Aug 16, 2012 at 9:05 AM, Yi, Chuang wrote: > Thank you Josef, df can be less than 1 as long as it is nonnegative: > > http://stat.ethz.ch/R-manual/R-patched/library/stats/html/Chisquare.html > > Here is an example in R: > >> Z0<-rchisq(2,df=0.5,ncp =2) >> Z0 > [1] 0.5056454 2.0427540 > > Thanks, Chuang > > -----Original Message----- > From: numpy-discussion-bounces at scipy.org [mailto:numpy-discussion-bounces at scipy.org] On Behalf Of josef.pktd at gmail.com > Sent: Wednesday, August 15, 2012 9:59 PM > To: Discussion of Numerical Python > Subject: Re: [Numpy-discussion] QQ Plot in Python and Noncentral Chisquare > > On Wed, Aug 15, 2012 at 10:59 AM, Yi, Chuang > wrote: >> Hello, >> >> >> >> I am a new user of Python. I have a couple of questions that would >> appreciate your guidance! >> >> >> >> QQ Plot in Python: I could not find any functions in either Numpy or Scipy >> to do QQ Plot of two vectors of data. For example, in Matlab, one could just >> call qqplot(X,Y), which will generate the QQ plot of vector X against vector >> Y. Would you please let me know which python package has similar >> functionality? >> Noncentral Chisquare: it looks to me that in Numpy and Scipy, the noncentral >> chisquare distribution only allows Integer degree of freedom. For example, >> the following code will produce an error message. In R, however, it also >> allows non-integer degree of freedom. Would you please let me know which >> python package has similar functionality? 
>> >> data = numpy.random.noncentral_chisquare(0.5, 2, 100)
>> ValueError: df <= 0
>
> restriction is >1 not integer
>
>>>> np.random.noncentral_chisquare(1, 2, 100)
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "mtrand.pyx", line 1957, in mtrand.RandomState.noncentral_chisquare (numpy\random\mtrand\mtrand.c:9847)
> ValueError: df <= 0
>
>>>> np.random.noncentral_chisquare(1.0001, 2, 100)
> array([ 5.75083221,  1.08050491,  0.69267684,  1.37806056,
>         2.39899,  0.31415666,  2.9202386
>
> I don't know if the >1 restriction is justified or could be dropped.

A quick look at numpy\random\mtrand\distributions.c:

double rk_noncentral_chisquare(rk_state *state, double df, double nonc)
{
    double Chi2, N;

    Chi2 = rk_chisquare(state, df-1);
    N = rk_gauss(state) + sqrt(nonc);
    return Chi2 + N*N;
}

It uses df-1 for chisquare, which means df > 1 is required for the non-central version here. So there needs to be another way of generating non-central chisquare for 0 < df <= 1.

Josef

> >> Thank you very much,
> >> Regards,
> >> Chuang
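For what it's worth, the standard workaround is the Poisson-mixture representation of the noncentral chi-square, which is valid for any df > 0. A minimal sketch (the function name is made up; this is not what mtrand does):

import numpy as np

def noncentral_chisquare_any_df(df, nonc, size=None):
    # If K ~ Poisson(nonc / 2), then a central chi-square draw with
    # df + 2*K degrees of freedom has the noncentral chi-square(df, nonc)
    # distribution.  Unlike the Gaussian-shift construction in
    # distributions.c above, this does not require df > 1.
    k = np.random.poisson(nonc / 2.0, size)
    return np.random.chisquare(df + 2.0 * k)

For example, noncentral_chisquare_any_df(0.5, 2, 100) then works where numpy.random.noncentral_chisquare(0.5, 2, 100) raises.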
From jgomezdans at gmail.com Thu Aug 16 13:46:41 2012 From: jgomezdans at gmail.com (Jose Gomez-Dans) Date: Thu, 16 Aug 2012 18:46:41 +0100 Subject: [Numpy-discussion] Multidimensional neighbours Message-ID: Hi, I've just come across Travis Oliphant's array version of the game of life here . It's a really nice example of how to efficiently find neighbours using numpy and family.
However, I wanted to apply these ideas to higher order arrays (eg GRID being three dimensional rather than two dimensional). My previous attempts at this include the following (sorry for the ensuing horror!):

neighbours = np.array([
    # Lower row...
    x[:-2, :-2, :-2], x[:-2, :-2, 1:-1], x[:-2, :-2, 2:],
    x[:-2, 1:-1, :-2], x[:-2, 1:-1, 1:-1], x[:-2, 1:-1, 2:],
    x[:-2, 2:, :-2], x[:-2, 2:, 1:-1], x[:-2, 2:, 2:],
    # Middle row
    x[1:-1, :-2, :-2], x[1:-1, :-2, 1:-1], x[1:-1, :-2, 2:],
    x[1:-1, 1:-1, :-2], x[1:-1, 1:-1, 2:],
    x[1:-1, 2:, :-2], x[1:-1, 2:, 1:-1], x[1:-1, 2:, 2:],
    # Top row
    x[2:, :-2, :-2], x[2:, :-2, 1:-1], x[2:, :-2, 2:],
    x[2:, 1:-1, :-2], x[2:, 1:-1, 1:-1], x[2:, 1:-1, 2:],
    x[2:, 2:, :-2], x[2:, 2:, 1:-1], x[2:, 2:, 2:]])

I think this works (it's been a while! ;D), but it's a bit messy and the boundary condition is a bit crude. I was wondering whether there's some other nice way of doing it, like demonstrated in the above link. Also, what if you want to have a larger neighbourhood (say 5x5x2 instead of 3x3x3)? While I appreciate these index tricks, I also find them quite mind boggling! Thanks! Jose -------------- next part -------------- An HTML attachment was scrubbed... URL: From sebastian at sipsolutions.net Thu Aug 16 14:38:00 2012 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Thu, 16 Aug 2012 20:38:00 +0200 Subject: [Numpy-discussion] Multidimensional neighbours In-Reply-To: References: Message-ID: <1345142280.10247.10.camel@sebastian-laptop> Hello, Just to throw it in, if you do not mind using scipy, you can use its multidimensional correlate method instead:

import scipy.ndimage

stamp = np.ones((3,3,3))
stamp[1,1,1] = 0
num_neighbours = scipy.ndimage.correlate(x, stamp, mode='wrap')

In the link np.roll is used to implement periodic boundaries (mode='wrap' when using correlate); if you want constant boundary conditions, padding is probably simplest. Maybe someone else has some trick to avoid so much code duplication without using scipy... Regards, Sebastian On Do, 2012-08-16 at 18:46 +0100, Jose Gomez-Dans wrote: > Hi, > I've just come across Travis Oliphant's array version of the game of > life here . It's a really nice > example of how to efficiently find neighbours using numpy and family. > However, I wanted to apply these ideas to higher order arrays (eg GRID > being three dimensional rather than two dimensional). My previous > attempts at this include the following (sorry for the ensuing horror!): > neighbours = np.array([ > # Lower row... > x[:-2, :-2, :-2], x[:-2, :-2, 1:-1], x[:-2, :-2, 2:], > x[:-2, 1:-1, :-2], x[:-2, 1:-1, 1:-1], x[:-2, 1:-1, 2:], > x[:-2, 2:, :-2], x[:-2, 2:, 1:-1], x[:-2, 2:, 2:], > # Middle row > x[1:-1, :-2, :-2], x[1:-1, :-2, 1:-1], x[1:-1, :-2, 2:], > x[1:-1, 1:-1, :-2], x[1:-1, 1:-1, 2:], > x[1:-1, 2:, :-2], x[1:-1, 2:, 1:-1], x[1:-1, 2:, 2:], > # Top row > x[2:, :-2, :-2], x[2:, :-2, 1:-1], x[2:, :-2, 2:], > x[2:, 1:-1, :-2], x[2:, 1:-1, 1:-1], x[2:, 1:-1, 2:], > x[2:, 2:, :-2], x[2:, 2:, 1:-1], x[2:, 2:, 2:]]) > > I think this works (it's been a while! ;D), but it's a bit messy and > the boundary condition is a bit crude. I was wondering whether there's > some other nice way of doing it, like demonstrated in the above link. > Also, what if you want to have a larger neighbourhood (say 5x5x2 > instead of 3x3x3)? > > While I appreciate these index tricks, I also find them quite mind > boggling! > Thanks! > Jose
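One such trick, pure numpy: generate the 26 offset views in a loop instead of writing them out by hand. A sketch (the helper name is made up, and the ordering of the views differs from the hand-written version above):

import numpy as np
from itertools import product

def neighbour_views(x):
    # One view of the interior per offset in {-1, 0, 1}^3, skipping the
    # centre: 26 views in total, each of shape (n0-2, n1-2, n2-2).
    sl = {-1: slice(None, -2), 0: slice(1, -1), 1: slice(2, None)}
    return np.array([x[sl[i], sl[j], sl[k]]
                     for i, j, k in product((-1, 0, 1), repeat=3)
                     if (i, j, k) != (0, 0, 0)])

The larger-neighbourhood case works the same way, looping over range(-2, 3) (or an asymmetric range) instead of (-1, 0, 1) and building the corresponding slices.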
From daniel.wheeler2 at gmail.com Thu Aug 16 15:10:43 2012 From: daniel.wheeler2 at gmail.com (Daniel Wheeler) Date: Thu, 16 Aug 2012 15:10:43 -0400 Subject: [Numpy-discussion] ANN: FiPy 3.0 In-Reply-To: <18B2EC1A-D7BA-4598-B50E-152A0F405B55@nist.gov> References: <18B2EC1A-D7BA-4598-B50E-152A0F405B55@nist.gov> Message-ID: We are pleased to announce the release of FiPy 3.0. http://www.ctcms.nist.gov/fipy The bump in major version number reflects more on the substantial increase in capabilities and ease of use than it does on a break in compatibility with FiPy 2.x. Few, if any, changes to your existing scripts should be necessary. The significant changes since version 2.1 are:

* Coupled and vector equations are now supported.
* A more robust mechanism for specifying boundary conditions is now used.
* Most Meshes can be partitioned by meshing with Gmsh.
* PyAMG and SciPy have been added to the solvers.
* FiPy is capable of running under Python 3.
* 'getter' and 'setter' methods have been pervasively changed to Python properties.
* The test suite now runs much faster.
* Tests can now be run on a full install using fipy.test().

This release addresses 66 tickets. ======================================================================== FiPy is an object oriented, partial differential equation (PDE) solver, written in Python, based on a standard finite volume (FV) approach. The framework has been developed in the Metallurgy Division and Center for Theoretical and Computational Materials Science (CTCMS), in the Material Measurement Laboratory (MML) at the National Institute of Standards and Technology (NIST). The solution of coupled sets of PDEs is ubiquitous to the numerical simulation of science problems. Numerous PDE solvers exist, using a variety of languages and numerical approaches. Many are proprietary, expensive and difficult to customize. As a result, scientists spend considerable resources repeatedly developing limited tools for specific problems. Our approach, combining the FV method and Python, provides a tool that is extensible, powerful and freely available. A significant advantage to Python is the existing suite of tools for array calculations, sparse matrices and data rendering. The FiPy framework includes terms for transient diffusion, convection and standard sources, enabling the solution of arbitrary combinations of coupled elliptic, hyperbolic and parabolic PDEs. Currently implemented models include phase field treatments of polycrystalline, dendritic, and electrochemical phase transformations as well as a level set treatment of the electrodeposition process. -- Daniel Wheeler -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrea.gavana at gmail.com Thu Aug 16 15:27:25 2012 From: andrea.gavana at gmail.com (Andrea Gavana) Date: Thu, 16 Aug 2012 21:27:25 +0200 Subject: [Numpy-discussion] Correlated distributions (?) Message-ID: Hi All, once again, my apologies for a (possibly) very ignorant question, my google-fu is failing me... also because I am not sure of what exactly I should look for. My problem is relatively simple.
Let's assume I have two Python objects, A and B, and one of their attributes can assume a value of "True" or "False" depending on the results of a uniform random distribution sample, i.e.:

probability_A = 0.95
probability_B = 0.86

A.has_failed = False
B.has_failed = False

if numpy.random.random() < probability_A:
    A.has_failed = True

if numpy.random.random() < probability_B:
    B.has_failed = True

Now, I know that there is a correlation factor between the failing/not failing of A and the failing/not failing of B. Specifically, if A fails, then B should have 80% more chance of failing, but I have been banging my head to find out how I should modify the "probability_B" number (or the extremes of the uniform distribution, if that makes sense) in order to reflect that correlation. I have been looking at correlated distributions, but it appears that most of the results I have found relate to normal distributions; there is very little about non-normal (and especially uniform) distributions. It's also very likely that I am not looking in the right direction, so I would appreciate any suggestion you may share. Thank you in advance. Andrea. "Imagination Is The Only Weapon In The War Against Reality." http://xoomer.alice.it/infinity77/ From tim at cerazone.net Thu Aug 16 15:41:09 2012 From: tim at cerazone.net (Cera, Tim) Date: Thu, 16 Aug 2012 15:41:09 -0400 Subject: [Numpy-discussion] Multidimensional neighbours In-Reply-To: References: Message-ID: I have a pull request for a neighborhood function at https://github.com/numpy/numpy/pull/303 . I think it handles these problems quite handily. It does rely on my pad routine that is in Numpy 1.7, so you would need to get the 1.7 beta installed or install the development branch. For your example you would just create a weight array, and a function that returns a scalar value from the collected neighborhood values. Untested, but the workflow is something like:

>>> inputarr = np.random.random(9*9*9)
>>> inputarr = inputarr.reshape((9,9,9))
>>> weight = np.ones((3,3,3))
>>> ans = neighbor(inputarr, weight, np.mean, pad = None)

In place of 'np.mean' you can define your own function - a game of life function, for example. The PR has not had much activity, so if you can review/comment/program that would be appreciated. Kindest regards, Tim -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Thu Aug 16 16:58:13 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 16 Aug 2012 16:58:13 -0400 Subject: [Numpy-discussion] Correlated distributions (?) In-Reply-To: References: Message-ID: On Thu, Aug 16, 2012 at 3:27 PM, Andrea Gavana wrote: > Hi All, > > once again, my apologies for a (possibly) very ignorant question, > my google-fu is failing me... also because I am not sure of what > exactly I should look for. > > My problem is relatively simple. Let's assume I have two Python > objects, A and B, and one of their attributes can assume a value of > "True" or "False" depending on the results of a uniform random > distribution sample, i.e.: > > probability_A = 0.95 > probability_B = 0.86 > > A.has_failed = False > B.has_failed = False > > if numpy.random.random() < probability_A: > A.has_failed = True > > if numpy.random.random() < probability_B: > B.has_failed = True > > Now, I know that there is a correlation factor between the failing/not > failing of A and the failing/not failing of B.
> Specifically, if A fails, then B should have 80% more chance of failing, but I have been > banging my head to find out how I should modify the "probability_B" > number (or the extremes of the uniform distribution, if that makes > sense) in order to reflect that correlation. > > I have been looking at correlated distributions, but it appears that > most of the results I have found relate to normal distributions, there > is very little about non-normal (and especially uniform) > distributions. > > It's also very likely that I am not looking in the right direction, so > I would appreciate any suggestion you may share.

easiest, I guess, is to work with a discrete distribution with 4 states, where the states reflect the joint event (a, b):

True, True
True, False
...

Then you have 3 probabilities to choose any amount of dependence, and the marginal probabilities. (more complicated: correlated Probit) To generate random numbers, a recipe of Charles on the mailing list, or a new version of numpy, might be helpful.

Josef

> Thank you in advance.
>
> Andrea.
>
> "Imagination Is The Only Weapon In The War Against Reality."
> http://xoomer.alice.it/infinity77/

-------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Thu Aug 16 17:45:32 2012 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 16 Aug 2012 14:45:32 -0700 Subject: [Numpy-discussion] Slow divide of int64? In-Reply-To: References: Message-ID: Hi, On Mon, Aug 13, 2012 at 9:49 PM, Charles R Harris wrote: > > > On Mon, Aug 13, 2012 at 10:32 PM, Charles R Harris > wrote: >> >> On Sat, Aug 11, 2012 at 6:36 PM, Matthew Brett >> wrote: >>> >>> Hi, >>> >>> A friend of mine just pointed out that dividing by int64 is >>> considerably slower than multiplying in numpy: >>> >>> gives (64 bit Debian Intel system, numpy trunk): >>> >>> Mul32 2.71295905113 >>> Div32 6.61985301971 >>> Mul64 2.78101611137 >>> Div64 22.8217148781 >>> >>> with similar values for numpy 1.5.1. >>> >>> Crude testing with Matlab and Octave suggests they do not seem to have >>> this same difference: >>> >>> >> divtest >>> Mul32 4.300662 >>> Div32 5.638622 >>> Mul64 7.894490 >>> Div64 18.121182 >>> >>> octave:2> divtest >>> Mul32 3.960577 >>> Div32 6.553704 >>> Mul64 7.268324 >>> Div64 13.670760 >>> >>> (files attached) >>> >>> Is there something specific about division in numpy that would cause >>> this slowdown? >>> >> >> Numpy is doing an integer divide unless you are using Python 3.x. The >> np.true_divide ufunc will speed things up a bit. I'm not sure what >> Matlab/Octave are doing for division in this case. >> > > For int64: > > In [23]: timeit multiply(a, b) > 100000 loops, best of 3: 3.31 us per loop > > In [24]: timeit true_divide(a, b) > 100000 loops, best of 3: 9.35 us per loop

Thanks for looking into this. It does look like int64 division is particularly slow for the systems I'm testing on. Here's a cython c-pointer version compared to the numpy version:

Numpy versions as above:

Mul32 3.15036797523
Div32 6.68296504021
Mul64 4.50731801987
Div64 22.9649209976

Cython versions using pointers into contiguous array:

Mul32-cy 1.21214485168
Div32-cy 6.75360918045
Mul64-cy 3.98143696785
Div64-cy 31.3645660877

# Timing using double
Multf-cy 4.11406683922
Divf-cy 12.603869915

(code attached).

Matlab certainly returns integers from its int64 division, so I'm not sure why it does not have such an extreme slowdown for int64 division.

Cheers,

Matthew
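(The attached benchmark scripts are not reproduced in the archive; a rough sketch of this kind of multiply/divide comparison, with made-up array sizes, would be:)

import numpy as np
import timeit

setup = ("import numpy as np; "
         "a = np.arange(1, 1000001, dtype=np.int64); "
         "b = np.ones(1000000, dtype=np.int64)")
for label, stmt in [('Mul64', 'np.multiply(a, b)'),
                    ('Div64', 'np.divide(a, b)')]:
    # best-of-3 style timing of the whole-array ufunc call
    print("%s %f" % (label, min(timeit.repeat(stmt, setup, number=10, repeat=3))))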
From vanforeest at gmail.com Thu Aug 16 18:14:52 2012 From: vanforeest at gmail.com (nicky van foreest) Date: Fri, 17 Aug 2012 00:14:52 +0200 Subject: [Numpy-discussion] Correlated distributions (?) In-Reply-To: References: Message-ID: Hi, >> once again, my apologies for a (possibly) very ignorant question, >> my google-fu is failing me... also because I am not sure of what >> exactly I should look for. >> >> My problem is relatively simple. Let's assume I have two Python >> objects, A and B, and one of their attributes can assume a value of >> "True" or "False" depending on the results of a uniform random >> distribution sample, i.e.: >> >> probability_A = 0.95 >> probability_B = 0.86 >> >> A.has_failed = False >> B.has_failed = False >> >> if numpy.random.random() < probability_A: >> A.has_failed = True >> >> if numpy.random.random() < probability_B: >> B.has_failed = True >> >> Now, I know that there is a correlation factor between the failing/not >> failing of A and the failing/not failing of B. Specifically, if A >> fails, then B should have 80% more chance of failing, but I have been >> banging my head to find out how I should modify the "probability_B" >> number (or the extremes of the uniform distribution, if that makes >> sense) in order to reflect that correlation.

I don't think you actually can. You seem to want to simulate conditional events, and for that you have to take the conditioning events seriously. Hence, I am inclined to solve your problem like this:

if A.has_failed:
    if numpy.random.random() < probability_B_given_Ahasfailed:
        B.has_failed = True
    else:
        B.has_failed = False

You have to specify the threshold probability_B_given_Ahasfailed separately. Your problem seems to resemble a Bayesian network. (The wikipedia page on this topic is not particularly revealing in my opinion BTW.) HTH Nicky
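A sketch of the four-state approach suggested above (the joint probabilities below are made up, chosen only so that the marginals match the 0.95 and 0.86 used earlier; the dependence is controlled by how much mass sits on the (True, True) cell):

import numpy as np

outcomes = [(True, True), (True, False), (False, True), (False, False)]
p_joint = [0.83, 0.12, 0.03, 0.02]   # sums to 1; P(A fails)=0.95, P(B fails)=0.86

# draw one joint state; multinomial returns an indicator vector
idx = np.random.multinomial(1, p_joint).argmax()
A_failed, B_failed = outcomes[idx]

With this table P(B fails | A fails) = 0.83/0.95 and P(B fails | A ok) = 0.03/0.05, so the conditional boost is set by the table entries rather than by tweaking probability_B directly.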
From flebber.crue at gmail.com Thu Aug 16 23:51:58 2012 From: flebber.crue at gmail.com (Sayth Renshaw) Date: Fri, 17 Aug 2012 13:51:58 +1000 Subject: [Numpy-discussion] 64bit infrastructure Message-ID: Hi I was just wondering if the current absence of 64-bit builds was a result of an infrastructure or funding concern. I, like many, now have only 64-bit systems and am using the unofficial builds. If it was something we could raise funds to resolve, how much? Or is it more complex than that? Sayth -------------- next part -------------- An HTML attachment was scrubbed... URL: From jgomezdans at gmail.com Fri Aug 17 08:27:30 2012 From: jgomezdans at gmail.com (Jose Gomez-Dans) Date: Fri, 17 Aug 2012 13:27:30 +0100 Subject: [Numpy-discussion] Multidimensional neighbours In-Reply-To: <1345142280.10247.10.camel@sebastian-laptop> References: <1345142280.10247.10.camel@sebastian-laptop> Message-ID: Hi, On 16 August 2012 19:38, Sebastian Berg wrote: > Hello, > > Just to throw it in, if you do not mind using scipy, you can use its > multidimensional correlate method instead: > Well, I don't need to count the number of neighbours as in the linked example. What I wanted to have is easy access to the neighbours of one cell, so

neighbours = np.array([GRID[up, :], GRID[down, :], GRID[:, up], GRID[:, down],
                       GRID[ix_(up, up)], GRID[ix_(up, down)],
                       GRID[ix_(down, up)], GRID[ix_(down, down)]])

That gives me an array of shape (8,)+GRID.shape. neighbours.sum(axis=0) would (I think) give me the equivalent of scipy.ndimage.correlate, but I don't necessarily need a sum or a mean value of the neighbours. It's just that a 3D neighbourhood is 9 + 8 + 9 = 26 neighbours for each pixel, and I was wondering whether there was a simpler way of writing it than what I had in my message, which is likely to have errors. Thanks! Jose -------------- next part -------------- An HTML attachment was scrubbed... URL: From nouiz at nouiz.org Fri Aug 17 09:03:45 2012 From: nouiz at nouiz.org (Frédéric Bastien) Date: Fri, 17 Aug 2012 09:03:45 -0400 Subject: [Numpy-discussion] Slow divide of int64? In-Reply-To: References: Message-ID: Just to be sure everybody knows: hardware division is always slower than hardware multiplication. Doing division is much more complex, so it needs more circuitry and can't be pipelined, which means we can't reuse parts of the circuitry in parallel. So hardware division will always be slower than hardware multiplication. About Matlab, this could mean they generate code that is not optimized (code that is not bound by the hardware division/multiplication speed). That could explain what you saw.
Fred

On Thu, Aug 16, 2012 at 5:45 PM, Matthew Brett wrote: > Hi, > > On Mon, Aug 13, 2012 at 9:49 PM, Charles R Harris > wrote: >> >> On Mon, Aug 13, 2012 at 10:32 PM, Charles R Harris >> wrote: >>> >>> On Sat, Aug 11, 2012 at 6:36 PM, Matthew Brett >>> wrote: >>>> >>>> Hi, >>>> >>>> A friend of mine just pointed out that dividing by int64 is >>>> considerably slower than multiplying in numpy: >>>> >>>> gives (64 bit Debian Intel system, numpy trunk): >>>> >>>> Mul32 2.71295905113 >>>> Div32 6.61985301971 >>>> Mul64 2.78101611137 >>>> Div64 22.8217148781 >>>> >>>> with similar values for numpy 1.5.1. >>>> >>>> Crude testing with Matlab and Octave suggests they do not seem to have >>>> this same difference: >>>> >>>> >> divtest >>>> Mul32 4.300662 >>>> Div32 5.638622 >>>> Mul64 7.894490 >>>> Div64 18.121182 >>>> >>>> octave:2> divtest >>>> Mul32 3.960577 >>>> Div32 6.553704 >>>> Mul64 7.268324 >>>> Div64 13.670760 >>>> >>>> (files attached) >>>> >>>> Is there something specific about division in numpy that would cause >>>> this slowdown? >>>> >>> >>> Numpy is doing an integer divide unless you are using Python 3.x. The >>> np.true_divide ufunc will speed things up a bit. I'm not sure what >>> Matlab/Octave are doing for division in this case. >>> >> >> For int64: >> >> In [23]: timeit multiply(a, b) >> 100000 loops, best of 3: 3.31 us per loop >> >> In [24]: timeit true_divide(a, b) >> 100000 loops, best of 3: 9.35 us per loop > > Thanks for looking into this. It does look like int64 division is > particularly slow for the systems I'm testing on. Here's a cython > c-pointer version compared to the numpy version: > > Numpy versions as above: > > Mul32 3.15036797523 > Div32 6.68296504021 > Mul64 4.50731801987 > Div64 22.9649209976 > > Cython versions using pointers into contiguous array > > Mul32-cy 1.21214485168 > Div32-cy 6.75360918045 > Mul64-cy 3.98143696785 > Div64-cy 31.3645660877 > > # Timing using double > Multf-cy 4.11406683922 > Divf-cy 12.603869915 > > (code attached). > > Matlab certainly returns integers from its int64 division, so I'm not > sure why it does not have such an extreme slowdown for int64 division. > > Cheers, > > Matthew

From ondrej.certik at gmail.com Fri Aug 17 09:49:21 2012 From: ondrej.certik at gmail.com (Ondřej Čertík) Date: Fri, 17 Aug 2012 06:49:21 -0700 Subject: [Numpy-discussion] 64bit infrastructure In-Reply-To: References: Message-ID: Hi Sayth, On Thu, Aug 16, 2012 at 8:51 PM, Sayth Renshaw wrote: > Hi > > I was just wondering if the current absence of 64-bit builds was a result > of an infrastructure or funding concern. > > I, like many, now have only 64-bit systems and am using the unofficial builds. > > If it was something we could raise funds to resolve, how much? Or is it more > complex than that? Which platform, windows? Ondrej From lee.mcculler at gmail.com Fri Aug 17 11:07:33 2012 From: lee.mcculler at gmail.com (Lee McCuller) Date: Fri, 17 Aug 2012 10:07:33 -0500 Subject: [Numpy-discussion] Enhanced array destruction needed for C flexible numeric type Message-ID: I've been implementing a native C++ code with all of the functionality of the Python uncertainties package (http://packages.python.org/uncertainties/).
Automatic error propagation (and differentiation) is extremely useful, but native speed is needed to use it in things like function fitting. I have created the type and successfully bound it to numpy (though it is still missing a lot of UFuncs), but I have a hangup in that I have no clean way to call a destructor on the numeric type (it is variable length via a held pointer). I could keep it in the array as a python object and get xdecref to do it, but I would much rather A: not have the overhead and B: keep the GIL free. All of the other PyArray_ArrFuncs which manage the type appear to allow pairing a destructor when memory is overwritten, and constructors are all well supported (like copyswap, setitem and fillwithscalar). I'd like to propose adding a flag NPY_ITEM_USES_DESTRUCTOR for PyArray_Descr.flags and a function PyArray_ArrFuncs.destructor with the signature

void destruct(void *data, npy_intp dstride, npy_intp len, void *arr)

where data points into the data array to be affected, dstride is the data stride, len is the number of affected elements, and arr is the array object containing data. It may be possible to change the call structure of another element when this flag is set (to preserve ABI compatibility for existing types). This flag would be inherited when NPY_FROM_FIELDS is set. Subclassing PyArray_Descr may also allow preservation of the API. The current implementation of NPY_ITEM_REFCOUNT could potentially use this more-general destructor implementation to call xdecref. I could potentially implement this in a branch, but I'd like to know what others think of it, as well as how best to do it to minimize its effect on the rest of the code and external libraries. For anyone interested in the details of this uncertainties code - my error propagation class is a C++ numeric type, which carries around a list of partial derivatives between new values and all of the values used to compute them. This list is a C++ vector which needs its destructor called. My bindings are hand-written, but should provide a good template for binding to other C++ numeric types. For instance, it would not be hard to extend them to also add numpy support for arbitrary precision math types, which would also need their C++ destructors called. -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidmenhur at gmail.com Fri Aug 17 17:09:12 2012 From: davidmenhur at gmail.com (Daπid) Date: Fri, 17 Aug 2012 22:09:12 +0100 Subject: [Numpy-discussion] Correlated distributions (?) In-Reply-To: References: Message-ID: On Thu, Aug 16, 2012 at 11:14 PM, nicky van foreest wrote: > Your problem seems to resemble a Bayesian network. (The wikipedia page > on this topic is not particularly revealing in my opinion BTW.) > Sebastian Thrun's courses on Artificial Intelligence, both at Stanford and at Udacity, are freely available, and they both have a great deal of explanation about this topic, with examples. The ones at Udacity also include Python programming exercises. -------------- next part -------------- An HTML attachment was scrubbed... URL: From flebber.crue at gmail.com Fri Aug 17 17:25:54 2012 From: flebber.crue at gmail.com (Sayth Renshaw) Date: Sat, 18 Aug 2012 07:25:54 +1000 Subject: [Numpy-discussion] 64bit infrastructure Message-ID: > > Hi Sayth, > > On Thu, Aug 16, 2012 at 8:51 PM, Sayth Renshaw wrote: >> Hi >> >> I was just wondering if the current absence of 64-bit builds was a result >> of an infrastructure or funding concern.
>> >> I, like many, now have only 64-bit systems and am using the unofficial builds. >> >> If it was something we could raise funds to resolve, how much? Or is it more >> complex than that? > > Which platform, windows? > > Ondrej > > Yes, I have a Windows 7 64-bit install and an Ubuntu 64-bit install. Sayth From travis at continuum.io Fri Aug 17 17:44:21 2012 From: travis at continuum.io (Travis Oliphant) Date: Fri, 17 Aug 2012 16:44:21 -0500 Subject: [Numpy-discussion] 64bit infrastructure In-Reply-To: References: Message-ID: Donations to NumFOCUS would be helpful in raising money to fund the creation of 64-bit installers. It always comes down to funding, but mostly people's time is the scarce resource here. It is more difficult to build 64-bit binaries on Windows, which is the real issue (not the availability of 64-bit Windows machines -- although the right person having access to those machines can be a problem). NumFOCUS would gladly donate long-running EC2 instances running 64-bit Windows if there are people who just need access to a machine. -Travis On Aug 16, 2012, at 10:51 PM, Sayth Renshaw wrote: > Hi > > I was just wondering if the current absence of 64-bit builds was a result of an infrastructure or funding concern. > > I, like many, now have only 64-bit systems and am using the unofficial builds. > > If it was something we could raise funds to resolve, how much? Or is it more complex than that? > > Sayth From matthew.brett at gmail.com Fri Aug 17 18:25:21 2012 From: matthew.brett at gmail.com (Matthew Brett) Date: Fri, 17 Aug 2012 15:25:21 -0700 Subject: [Numpy-discussion] 64bit infrastructure In-Reply-To: References: Message-ID: Hi, On Fri, Aug 17, 2012 at 2:25 PM, Sayth Renshaw wrote: >>> >>> Hi Sayth, >>> >>> On Thu, Aug 16, 2012 at 8:51 PM, Sayth Renshaw wrote: >>>> Hi >>>> >>>> I was just wondering if the current absence of 64-bit builds was a result >>>> of an infrastructure or funding concern. >>>> >>>> I, like many, now have only 64-bit systems and am using the unofficial builds. >>>> >>>> If it was something we could raise funds to resolve, how much? Or is it more >>>> complex than that? >>> >>> Which platform, windows? >>> >>> Ondrej >>> >> >> Yes, I have a Windows 7 64-bit install and an Ubuntu 64-bit install. Isn't the numpy ubuntu install 64 bit by default? You may have seen that there are numpy and scipy 64-bit windows installers here: http://www.lfd.uci.edu/~gohlke/pythonlibs/ Do they work for you? Best, Matthew From cournape at gmail.com Sat Aug 18 09:28:38 2012 From: cournape at gmail.com (David Cournapeau) Date: Sat, 18 Aug 2012 14:28:38 +0100 Subject: [Numpy-discussion] Preventing lossy cast for new float dtypes ? Message-ID: Hi, I have started toying with implementing a quad precision dtype for numpy on supported platforms, using the __float128 + quadmath lib from gcc. I have noticed invalid (and unexpected) downcasts to long double in some cases, especially for ufuncs (e.g. when I don't define my own ufunc for a given operation). Looking down in numpy ufunc machinery, I can see that the issue is coming from the assumption that long double is the highest precision possible for a float type, and the only way I can 'fix' this is to define kind to a value != 'f' in my dtype definition (in which case I get an expected invalid cast exception). Is there a way to still avoid those casts while keeping the 'f' kind ?
thanks, David From flebber.crue at gmail.com Sun Aug 19 03:14:41 2012 From: flebber.crue at gmail.com (Sayth Renshaw) Date: Sun, 19 Aug 2012 17:14:41 +1000 Subject: [Numpy-discussion] 64bit infrastructure Message-ID: > Hi, > > On Fri, Aug 17, 2012 at 2:25 PM, Sayth Renshaw wrote: >> Yes, I have a Windows 7 64-bit install and an Ubuntu 64-bit install. > > Isn't the numpy ubuntu install 64 bit by default? > > You may have seen that there are numpy and scipy 64-bit windows installers here: > > http://www.lfd.uci.edu/~gohlke/pythonlibs/ > > Do they work for you? > > Best, > > Matthew Thanks Matthew, yeah, that's the one I am currently using and it is working OK, but I just thought that if there was a way to help 64-bit builds become official then I would ask. Sayth From ralf.gommers at gmail.com Sun Aug 19 14:39:28 2012 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 19 Aug 2012 20:39:28 +0200 Subject: [Numpy-discussion] 64bit infrastructure In-Reply-To: References: Message-ID: On Sun, Aug 19, 2012 at 9:14 AM, Sayth Renshaw wrote: > Thanks Matthew, yeah, that's the one I am currently using and it is > working OK, but I just thought that if there was a way to help 64-bit > builds become official then I would ask. The problem is that, unlike 32-bit builds, they can't be made with open source compilers on Windows. So unless we're okay with that, we stay with the current situation. Fortunately Christoph Gohlke's unofficial binaries are excellent. Ralf -------------- next part -------------- An HTML attachment was scrubbed...
URL: From sebastian at sipsolutions.net Sun Aug 19 18:43:08 2012 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Mon, 20 Aug 2012 00:43:08 +0200 Subject: [Numpy-discussion] Functions for stride manipulations Message-ID: <1345416188.21345.40.camel@sebastian-laptop> Hey, Inspired by an existing PR into numpy, I created two functions based on stride_tricks which I thought might be useful inside numpy. So if I get any feedback/pointers, I would add some tests and create a PR. The first function, rolling_window, creates for every point of the original array a view of its neighbourhood in arbitrary dimensions, with some options to make it easy to use. For example, for a 3-dimensional array:

rolling_window(a, (3,3,3), asteps=(3,3,3)).max((3,4,5))

gives the maximum for all non-overlapping 3x3x3 subarrays, and:

rolling_window(a, 3, axes=0).max(-1)

would create the rolling maximum over all 3 successive values along the 0th axis. Plus a function permute_axes which allows giving a tuple for switching axes and can also merge them. A (10, 4, 3) shaped array would give:

permute_axes(a, (2, 1, 0)).shape -> (3, 4, 10)
permute_axes(a, (1, (0, 2))).shape -> (4, 30)

A copy is created when necessary; this basically allows doing multiple swap_axes and a reshape in a single call. I hope this might be useful... Regards, Sebastian -------------- next part -------------- A non-text attachment was scrubbed... Name: stride_tricks.py Type: text/x-python Size: 10096 bytes Desc: not available URL: From andyfaff at gmail.com Mon Aug 20 06:55:21 2012 From: andyfaff at gmail.com (Andrew Nelson) Date: Mon, 20 Aug 2012 20:55:21 +1000 Subject: [Numpy-discussion] Difference between np.loadtxt depending on whether you supply a file object or a filename Message-ID: Dear list, I observe a difference when I try to load a 2D numpy array from a file object compared to if I supply a filename, viz:

>>> np.version.version
'1.5.1'
>>> f=open('fit_theoretical.txt')
>>> a=np.loadtxt(f)
>>> a.shape
(1000,)
>>> a=np.loadtxt('fit_theoretical.txt')
>>> a.shape
(500, 2)

This strikes me as unexpected; it's not a documented behaviour. Any ideas? cheers, Andrew. -- _____________________________________ Dr. Andrew Nelson _____________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From sebastian at sipsolutions.net Mon Aug 20 07:14:08 2012 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Mon, 20 Aug 2012 13:14:08 +0200 Subject: [Numpy-discussion] Difference between np.loadtxt depending on whether you supply a file object or a filename In-Reply-To: References: Message-ID: <1345461248.24996.2.camel@sebastian-laptop> Hello, On Mo, 2012-08-20 at 20:55 +1000, Andrew Nelson wrote: > Dear list, > I observe a difference when I try to load a 2D numpy array from a file > object compared to if I supply a filename, viz: > > >>> np.version.version > '1.5.1' > >>> f=open('fit_theoretical.txt') > >>> a=np.loadtxt(f) > >>> a.shape > (1000,) > >>> a=np.loadtxt('fit_theoretical.txt') > >>> a.shape > (500, 2) There is actually a difference between the two, because when given a filename np.loadtxt opens the file with open(..., 'U'), which means that newlines may be interpreted differently (because of the differences between Linux/Windows/Mac newlines: \n, \r, etc.). So the problem is that the newlines are interpreted wrongly for you if you open the file yourself with just the default mode. Regards, Sebastian > This strikes me as unexpected, it's not a documented behaviour. Any > ideas? > > cheers, > Andrew.
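(A minimal sketch of the workaround, assuming the file indeed has old-Mac-style '\r' line endings, which would explain the flattened (1000,) result:)

import numpy as np

# 'U' is Python 2's universal-newline mode: '\r' and '\r\n' get
# translated to '\n', so loadtxt sees 500 rows of 2 columns instead
# of one whitespace-separated run of 1000 values.
with open('fit_theoretical.txt', 'U') as f:
    a = np.loadtxt(f)
print(a.shape)   # expected: (500, 2)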
> > > > > -- > _____________________________________ > Dr. Andrew Nelson > > > _____________________________________ > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From nouiz at nouiz.org Mon Aug 20 09:42:54 2012 From: nouiz at nouiz.org (=?ISO-8859-1?Q?Fr=E9d=E9ric_Bastien?=) Date: Mon, 20 Aug 2012 09:42:54 -0400 Subject: [Numpy-discussion] Preventing lossy cast for new float dtypes ? In-Reply-To: References: Message-ID: On Sat, Aug 18, 2012 at 9:28 AM, David Cournapeau wrote: > Hi, > > I have started toying with implementing a quad precision dtype for > numpy on supported platforms, using the __float128 + quadmath lib from > gcc. I have noticed invalid (and unexpected) downcast to long double > in some cases, especially for ufuncs (e.g. when I don't define my own > ufunc for a given operation). > > Looking down in numpy ufunc machinery, I can see that the issue is > coming from the assumption that long double is the highest precision > possible for a float type, and the only way I can 'fix' this is to > define kind to a value != 'f' in my dtype definition (in which case I > get an expected invalid cast exception). Is there a way to still avoid > those casts while keeping the 'f' kind ? I never looked at that code, but why not change the ufunc to remove the current assumption? I suppose if you ask the questions is that this is not trivial to do? Fred From ondrej.certik at gmail.com Mon Aug 20 15:40:10 2012 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Mon, 20 Aug 2012 12:40:10 -0700 Subject: [Numpy-discussion] view of recarray issue In-Reply-To: References: Message-ID: On Wed, Jul 25, 2012 at 10:29 AM, Jay Bourque wrote: > I'm actively looking at this issue since it was my pull request that broke > this (https://github.com/numpy/numpy/pull/350). We definitely don't want to > break this functionality for 1.7. The problem is that even though indexing > with a subset of fields still returns a copy (for now), it now returns a > copy of a view of the original array. When you call copy() on a view, it > copies the entire original structured array with the view dtype. A short > term fix would be to "manually" create a proper copy to return similar to > what _index_fields() did before my change, but since the idea is to > eventually return the view instead of a copy, long term we need a way to do > a proper copy of a structured array view that doesn't copy the unwanted > fields. This should be fixed for 1.7.0. However, I am going to release beta now, and then see what we can do about this. Ondrej From ralf.gommers at gmail.com Mon Aug 20 16:04:13 2012 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Mon, 20 Aug 2012 22:04:13 +0200 Subject: [Numpy-discussion] A step toward merging odeint and ode In-Reply-To: <1345114002.3855.5.camel@amilo.coursju> References: <1344768110.26417.6.camel@amilo.coursju> <1345020138.19573.11.camel@amilo.coursju> <1345114002.3855.5.camel@amilo.coursju> Message-ID: On Thu, Aug 16, 2012 at 12:46 PM, Fabrice Silva wrote: > > > Le mercredi 15 ao?t 2012 ? 20:54 +0200, Ralf Gommers a ?crit : > > I was mixing it up a bit, but yes: the _odepack extension and the C > > source for it. Not necessary to do that at once I guess, but wrapping > > the same function twice is once too many. > > > > And forgot in my first email: nice PR, looks good to me. 
> > OK then, you can found two commits : > > the first one removes the _odepack extension (and the relative > multipack.h, __odepack.h and _odepackmodule.c), replacing it by Python > counterparts in the odeint function itself. > > https://github.com/FabricioS/scipy/commit/02e8a4856f29f4ad438fef2c86a41b266d6a9e6c > > Thanks. > the second one suggests reverting callback arguments convention: > ydot = f(y,t,..) > to ode's one: > ydot = f(t,y,..) > This ones would raise backward compatibility issues but align ordering > to the convention defined in the LLNL when designing the ODEPACK. > > https://github.com/FabricioS/scipy/commit/f867f2b8133d3f6ea47d449bd760a77a7c90394e > > This is probably not worth the cost for existing users imho. It is a backwards compatibility break that doesn't really add anything except for some consistency (right?). Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Mon Aug 20 18:28:00 2012 From: chris.barker at noaa.gov (Chris Barker) Date: Mon, 20 Aug 2012 15:28:00 -0700 Subject: [Numpy-discussion] 64bit infrastructure In-Reply-To: References: Message-ID: On Sun, Aug 19, 2012 at 11:39 AM, Ralf Gommers > The problem is that, unlike 32-bit builds, they can't be made with open > source compilers on Windows. So unless we're okay with that, Why does it have to be built with open-source compilers? we're building against the python.org python, yes? Which is built with the MS compiler -- so why the insistance on open-source? -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From travis at continuum.io Mon Aug 20 18:51:37 2012 From: travis at continuum.io (Travis Oliphant) Date: Mon, 20 Aug 2012 17:51:37 -0500 Subject: [Numpy-discussion] 64bit infrastructure In-Reply-To: References: Message-ID: <501415CA-C4CE-413A-828E-1D1472E03959@continuum.io> I'm actually not sure, why. I think the issue is making sure that the release manager can actually "build" NumPy without having to buy a particular compiler. But, I would rather have official builds of NumPy for all platforms with a compiler paid for by a NumPy-sponsor than not have them. -Travis On Aug 20, 2012, at 5:28 PM, Chris Barker wrote: > On Sun, Aug 19, 2012 at 11:39 AM, Ralf Gommers > The problem is that, > unlike 32-bit builds, they can't be made with open >> source compilers on Windows. So unless we're okay with that, > > Why does it have to be built with open-source compilers? we're > building against the python.org python, yes? Which is built with the > MS compiler -- so why the insistance on open-source? > > -Chris > > > > -- > > Christopher Barker, Ph.D. 
From chris.barker at noaa.gov Mon Aug 20 18:28:00 2012 From: chris.barker at noaa.gov (Chris Barker) Date: Mon, 20 Aug 2012 15:28:00 -0700 Subject: [Numpy-discussion] 64bit infrastructure In-Reply-To: References: Message-ID:

On Sun, Aug 19, 2012 at 11:39 AM, Ralf Gommers wrote:
> The problem is that, unlike 32-bit builds, they can't be made with open
> source compilers on Windows. So unless we're okay with that,

Why does it have to be built with open-source compilers? We're building against the python.org python, yes? Which is built with the MS compiler -- so why the insistence on open source?

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959 voice
7600 Sand Point Way NE   (206) 526-6329 fax
Seattle, WA 98115        (206) 526-6317 main reception

Chris.Barker at noaa.gov

From travis at continuum.io Mon Aug 20 18:51:37 2012 From: travis at continuum.io (Travis Oliphant) Date: Mon, 20 Aug 2012 17:51:37 -0500 Subject: [Numpy-discussion] 64bit infrastructure In-Reply-To: References: Message-ID: <501415CA-C4CE-413A-828E-1D1472E03959@continuum.io>

I'm actually not sure why. I think the issue is making sure that the release manager can actually "build" NumPy without having to buy a particular compiler. But, I would rather have official builds of NumPy for all platforms with a compiler paid for by a NumPy sponsor than not have them.

-Travis

On Aug 20, 2012, at 5:28 PM, Chris Barker wrote:
> Why does it have to be built with open-source compilers? We're
> building against the python.org python, yes? Which is built with the
> MS compiler -- so why the insistence on open source?
[...]

From chris.barker at noaa.gov Mon Aug 20 19:15:21 2012 From: chris.barker at noaa.gov (Chris Barker) Date: Mon, 20 Aug 2012 16:15:21 -0700 Subject: [Numpy-discussion] 64bit infrastructure In-Reply-To: <501415CA-C4CE-413A-828E-1D1472E03959@continuum.io> References: <501415CA-C4CE-413A-828E-1D1472E03959@continuum.io> Message-ID:

On Mon, Aug 20, 2012 at 3:51 PM, Travis Oliphant wrote:
> I'm actually not sure why. I think the issue is making sure that the
> release manager can actually "build" NumPy without having to buy a
> particular compiler.

The MS Express editions, while not open source, are free to use, and work fine. Not sure what to do about Fortran, though, but that's a scipy, not a numpy, issue, yes?

-Chris

From silva at lma.cnrs-mrs.fr Tue Aug 21 04:22:13 2012 From: silva at lma.cnrs-mrs.fr (Fabrice Silva) Date: Tue, 21 Aug 2012 10:22:13 +0200 Subject: [Numpy-discussion] Adding solvers to scipy.integrate [Was: A step toward merging odeint and ode] In-Reply-To: References: <1344768110.26417.6.camel@amilo.coursju> <1345020138.19573.11.camel@amilo.coursju> <1345114002.3855.5.camel@amilo.coursju> Message-ID: <1345537333.15902.7.camel@amilo.coursju>

Le lundi 20 août 2012 à 22:04 +0200, Ralf Gommers a écrit :
> https://github.com/FabricioS/scipy/commit/f867f2b8133d3f6ea47d449bd760a77a7c90394e
> This is probably not worth the cost for existing users imho. It is a
> backwards-compatibility break that doesn't really add anything except
> for some consistency (right?).

Hi Ralf,
OK concerning this point.
In addition, I have been looking to suggest additional solvers, essentially simpler schemes, that would allow easy switching between "complex" solvers (lsode, vode, cvode) and basic schemes (Euler, Crank-Nicolson, etc.). I came across some code from the Montana Univ. Computer Science dept:
http://wiki.cs.umt.edu/classes/cs477/index.php/Creating_ODE_Solver_Objects
and asked Jesse Johnson (the instructor responsible for that class) what the license for that code is. Here is his answer:

    Anything that you find on those pages, you may use. However, I'm not
    sure how to go about officially giving the code a particular license.
    Can I add a license to the wiki, stating that it applies to all the
    code therein?

    PS It is fantastic you're doing this. I've often thought that
    scipy.ode could use some improvements.

He is cc'ed on this mail; could anyone concerned with scipy license requirements, and more generally with code licensing, answer him?

Regards
-- Fabrice Silva

From ondrej.certik at gmail.com Tue Aug 21 12:24:21 2012 From: ondrej.certik at gmail.com (Ondřej Čertík) Date: Tue, 21 Aug 2012 09:24:21 -0700 Subject: [Numpy-discussion] ANN: NumPy 1.7.0b1 release Message-ID:

Hi,

I'm pleased to announce the availability of the first beta release of NumPy 1.7.0b1.

Sources and binary installers can be found at
https://sourceforge.net/projects/numpy/files/NumPy/1.7.0b1/

Please test this release and report any issues on the numpy-discussion mailing list. The following problems are known and we'll work on fixing them before the final release:

http://projects.scipy.org/numpy/ticket/2187
http://projects.scipy.org/numpy/ticket/2185
http://projects.scipy.org/numpy/ticket/2066
http://projects.scipy.org/numpy/ticket/1588
http://projects.scipy.org/numpy/ticket/2076
http://projects.scipy.org/numpy/ticket/2101
http://projects.scipy.org/numpy/ticket/2108
http://projects.scipy.org/numpy/ticket/2150
http://projects.scipy.org/numpy/ticket/2189

I would like to thank Ralf for a lot of help with creating binaries and other help for this release.

Cheers,
Ondrej

=========================
NumPy 1.7.0 Release Notes
=========================

This release includes several new features as well as numerous bug fixes and refactorings. It supports Python 2.4 - 2.7 and 3.1 - 3.2.

Highlights
==========

* ``where=`` parameter to ufuncs (allows the use of boolean arrays to choose where a computation should be done)
* ``vectorize`` improvements (added 'excluded' and 'cache' keyword, general cleanup and bug fixes)
* ``numpy.random.choice`` (random sample generating function)

Compatibility notes
===================

In a future version of numpy, the functions np.diag, np.diagonal, and the diagonal method of ndarrays will return a view onto the original array, instead of producing a copy as they do now. This makes a difference if you write to the array returned by any of these functions. To facilitate this transition, numpy 1.7 produces a FutureWarning if it detects that you may be attempting to write to such an array. See the documentation for np.diagonal for details.

Similar to np.diagonal above, in a future version of numpy, indexing a record array by a list of field names will return a view onto the original array, instead of producing a copy as it does now. As with np.diagonal, numpy 1.7 produces a FutureWarning if it detects that you may be attempting to write to such an array. See the documentation for array indexing for details.

The default casting rule for UFunc out= parameters has been changed from 'unsafe' to 'same_kind'. Most usages which violate the 'same_kind' rule are likely bugs, so this change may expose previously undetected errors in projects that depend on NumPy.
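For illustration, a minimal sketch of the effect of the new default (an integer output buffer for a float result is a cross-kind cast, so it now has to be requested explicitly):

    import numpy as np

    a = np.zeros(3, dtype=np.int32)

    # Under the 'same_kind' default this raises a casting error, because
    # the float64 result cannot be safely stored in an int32 output array:
    #   np.add(a, 1.5, out=a)

    # The old behavior remains available by opting in explicitly:
    np.add(a, 1.5, out=a, casting='unsafe')
    print(a)   # [1 1 1] -- the fractional part is silently discarded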
Full-array boolean indexing has been optimized to use a different code path. This code path should produce the same results, but any feedback about changes to your code would be appreciated.

Attempting to write to a read-only array (one with ``arr.flags.writeable`` set to ``False``) used to raise either a RuntimeError, ValueError, or TypeError inconsistently, depending on which code path was taken. It now consistently raises a ValueError.

The .reduce functions evaluate some reductions in a different order than in previous versions of NumPy, generally providing higher performance. Because of the nature of floating-point arithmetic, this may subtly change some results, just as linking NumPy to a different BLAS implementation such as MKL can.

If upgrading from 1.5, then generally in 1.6 and 1.7 substantial code has been added and some code paths have been altered, particularly in the areas of type resolution and buffered iteration over universal functions. This might have an impact on your code, particularly if you relied on accidental behavior in the past.

New features
============

Reduction UFuncs Generalize axis= Parameter
-------------------------------------------

Any ufunc.reduce function call, as well as other reductions like sum, prod, any, all, max and min, supports the ability to choose a subset of the axes to reduce over. Previously, one could say axis=None to mean all the axes or axis=# to pick a single axis. Now, one can also say axis=(#,#) to pick a list of axes for reduction.

Reduction UFuncs New keepdims= Parameter
----------------------------------------

There is a new keepdims= parameter, which if set to True, doesn't throw away the reduction axes but instead sets them to have size one. When this option is set, the reduction result will broadcast correctly to the original operand which was reduced.

Datetime support
----------------

.. note:: The datetime API is *experimental* in 1.7.0, and may undergo changes in future versions of NumPy.

There have been a lot of fixes and enhancements to datetime64 compared to NumPy 1.6:

* the parser is quite strict about only accepting ISO 8601 dates, with a few convenience extensions
* converts between units correctly
* datetime arithmetic works correctly
* business day functionality (allows the datetime to be used in contexts where only certain days of the week are valid)

The notes in doc/source/reference/arrays.datetime.rst (also available in the online docs at arrays.datetime.html) should be consulted for more details.

Custom formatter for printing arrays
------------------------------------

See the new ``formatter`` parameter of the ``numpy.set_printoptions`` function.

New function numpy.random.choice
--------------------------------

A generic sampling function has been added which will generate samples from a given array-like. The samples can be with or without replacement, and with uniform or given non-uniform probabilities.

New function isclose
--------------------

Returns a boolean array where two arrays are element-wise equal within a tolerance. Both relative and absolute tolerance can be specified. The function is NA aware.
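A short usage sketch of the new function (arbitrary sample values; rtol and atol default to 1e-05 and 1e-08 and can both be overridden):

    import numpy as np

    print(np.isclose(0.1 + 0.2, 0.3))                  # True: equal within tolerance
    print(np.isclose([1.0, 2.0], [1.0 + 1e-9, 2.1]))   # [ True False]
    print(np.isclose(1.0, 1.001, rtol=1e-2))           # True with a looser tolerance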
Preliminary multi-dimensional support in the polynomial package
---------------------------------------------------------------

Axis keywords have been added to the integration and differentiation functions, and a tensor keyword was added to the evaluation functions. These additions allow multi-dimensional coefficient arrays to be used in those functions. New functions for evaluating 2-D and 3-D coefficient arrays on grids or sets of points were added, together with 2-D and 3-D pseudo-Vandermonde matrices that can be used for fitting.

Ability to pad rank-n arrays
----------------------------

A pad module containing functions for padding n-dimensional arrays has been added. The various private padding functions are exposed as options to a public 'pad' function. Example::

    pad(a, 5, mode='mean')

Current modes are ``constant``, ``edge``, ``linear_ramp``, ``maximum``, ``mean``, ``median``, ``minimum``, ``reflect``, ``symmetric``, ``wrap``, and ``<function>``.

New argument to searchsorted
----------------------------

The function searchsorted now accepts a 'sorter' argument that is a permutation array that sorts the array to search.
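A small sketch of the new argument (arbitrary data): instead of physically sorting the array first, you pass the permutation that would sort it:

    import numpy as np

    a = np.array([30, 10, 20, 40])
    order = np.argsort(a)                  # permutation that sorts a: [1 2 0 3]

    # Index at which 25 would be inserted into sorted(a), i.e. [10 20 30 40]:
    print(np.searchsorted(a, 25, sorter=order))   # 2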
C API
-----

New function ``PyArray_RequireWriteable`` provides a consistent interface for checking array writeability -- any C code which works with arrays whose WRITEABLE flag is not known to be True a priori should make sure to call this function before writing.

NumPy C Style Guide added (``doc/C_STYLE_GUIDE.rst.txt``).

Changes
=======

General
-------

The function np.concatenate tries to match the layout of its input arrays. Previously, the layout did not follow any particular rule, and depended in an undesirable way on the particular axis chosen for concatenation. A bug was also fixed which silently allowed out-of-bounds axis arguments.

The ufuncs logical_or, logical_and, and logical_not now follow Python's behavior with object arrays, instead of trying to call methods on the objects. For example the expression (3 and 'test') produces the string 'test', and now np.logical_and(np.array(3, 'O'), np.array('test', 'O')) produces 'test' as well.

Deprecations
============

General
-------

Specifying a custom string formatter with a `_format` array attribute is deprecated. The new `formatter` keyword in ``numpy.set_printoptions`` or ``numpy.array2string`` can be used instead.

The deprecated imports in the polynomial package have been removed.

C-API
-----

Direct access to the fields of PyArrayObject* has been deprecated. Direct access has been recommended against for many releases. Expect similar deprecations for PyArray_Descr* and other core objects in the future as preparation for NumPy 2.0.

The macros in old_defines.h are deprecated and will be removed in the next major release (>= 2.0). The sed script tools/replace_old_macros.sed can be used to replace these macros with the newer versions.

You can test your code against the deprecated C API by #defining NPY_NO_DEPRECATED_API to the target version number, for example NPY_1_7_API_VERSION, before including any NumPy headers.

From francesc at continuum.io Tue Aug 21 14:32:16 2012 From: francesc at continuum.io (Francesc Alted) Date: Tue, 21 Aug 2012 20:32:16 +0200 Subject: [Numpy-discussion] [ANN] carray 0.5 released Message-ID: <5033D430.80107@continuum.io>

Announcing carray 0.5
=====================

What's new
----------

carray 0.5 supports completely transparent storage on disk in addition to memory. That means that everything that can be done with an in-memory container can be done using the disk instead.

The advantage of a disk-based container is that your addressable space is much larger than just your available memory. Also, as carray is based on a chunked and compressed data layout built on the super-fast Blosc compression library, plus the different cache levels existing in both modern operating systems and the internal carray machinery, the data access speed is very good.

The format chosen for the persistence layer is based on the 'bloscpack' library (thanks to Valentin Haenel for his inspiration) and described in 'persistence.rst', although not everything has been implemented yet. You may want to contribute by proposing enhancements to it. See:
https://github.com/FrancescAlted/carray/wiki/PersistenceProposal

CAVEAT: The bloscpack format is still evolving, so don't rely on forward compatibility of the format, at least until 1.0, where the internal format will be declared frozen.

For more detailed info, see the release notes in:
https://github.com/FrancescAlted/carray/wiki/Release-0.5

What it is
----------

carray is a chunked container for numerical data. Chunking allows for efficient enlarging/shrinking of the data container. In addition, it can also be compressed to reduce memory/disk needs. The compression process is carried out internally by Blosc, a high-performance compressor that is optimized for binary data.

carray can use numexpr internally to accelerate many vector and query operations (although it can use pure NumPy for doing so too). numexpr can optimize memory usage and use several cores for doing the computations, so it is blazing fast. Moreover, with the introduction of a carray/ctable disk-based container (in version 0.5), it can be used for seamlessly performing out-of-core computations.

carray comes with an exhaustive test suite and fully supports both 32-bit and 64-bit platforms. Also, it is typically tested on both UNIX and Windows operating systems.

Resources
---------

Visit the main carray site repository at:
http://github.com/FrancescAlted/carray

You can download a source package from:
http://carray.pytables.org/download

Manual:
http://carray.pytables.org/docs/manual

Home of Blosc compressor:
http://blosc.pytables.org

User's mail list:
carray at googlegroups.com
http://groups.google.com/group/carray

----

Enjoy!

-- Francesc Alted
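A rough sketch of what the new disk-backed mode might look like in practice, assuming the rootdir keyword from the persistence proposal linked above (check the 0.5 manual for the exact API):

    import numpy as np
    import carray as ca

    # In-memory: a chunked, Blosc-compressed container.
    b = ca.carray(np.arange(1e7))
    print(b.sum())

    # Hypothetical on-disk variant: the same container, persisted under 'mydata/'.
    d = ca.carray(np.arange(1e7), rootdir='mydata')
    print(d.sum())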
From cgohlke at uci.edu Tue Aug 21 15:59:02 2012 From: cgohlke at uci.edu (Christoph Gohlke) Date: Tue, 21 Aug 2012 12:59:02 -0700 Subject: [Numpy-discussion] ANN: NumPy 1.7.0b1 release In-Reply-To: References: Message-ID: <5033E886.2050400@uci.edu>

On 8/21/2012 9:24 AM, Ondřej Čertík wrote:
> I'm pleased to announce the availability of the first beta release of
> NumPy 1.7.0b1.
>
> Please test this release and report any issues on the numpy-discussion
> mailing list.
[...]

Hi Ondrej,

will numpy 1.7.0 final support Python 3.3? The recent patch in the master branch seems to work well.

I tested a win-amd64-py2.7\msvc9\MKL build of the numpy maintenance/1.7.x branch against a number of package binaries from <http://www.lfd.uci.edu/~gohlke/pythonlibs/>.

The test results are at <http://www.lfd.uci.edu/~gohlke/pythonlibs/tests/20120821-win-amd64-py2.7-numpy-MKL-1.7.0rc1.dev-28ffac7/>. For comparison, the tests against numpy-MKL-1.6.2 are at <...>.

Besides some numpy 1.7.x test errors due to RuntimeWarning and DeprecationWarning, there are hundreds of "RuntimeWarning (numpy.dtype size changed, may indicate binary incompatibility)" when loading Cython extensions.

There are additional test failures in scipy, statsmodels, bottleneck, skimage, vigra, and mahotas. I did not check in detail or against existing tickets (http://projects.scipy.org/ is timing out or responding with HTTP 500 status).

Other packages test OK against numpy 1.7.x, e.g. PIL, PyGame, matplotlib, Pandas, tables, and numexpr.

Hope it helps.

Christoph

From jsseabold at gmail.com Tue Aug 21 16:06:13 2012 From: jsseabold at gmail.com (Skipper Seabold) Date: Tue, 21 Aug 2012 16:06:13 -0400 Subject: [Numpy-discussion] ANN: NumPy 1.7.0b1 release In-Reply-To: <5033E886.2050400@uci.edu> References: <5033E886.2050400@uci.edu> Message-ID:

On Tue, Aug 21, 2012 at 3:59 PM, Christoph Gohlke wrote:
> There are additional test failures in scipy, statsmodels, bottleneck,
> skimage, vigra, and mahotas. I did not check in detail or against
> existing tickets (http://projects.scipy.org/ is timing out or responding
> with HTTP 500 status).

Most (all?) of the statsmodels issues are due to the structured/record array view changes discussed in the thread "view of recarray issue." I.e., rec_array.view((float, 3)) no longer works, though I thought this was fixed.
From josef.pktd at gmail.com Tue Aug 21 16:06:57 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 21 Aug 2012 16:06:57 -0400 Subject: [Numpy-discussion] ANN: NumPy 1.7.0b1 release In-Reply-To: <5033E886.2050400@uci.edu> References: <5033E886.2050400@uci.edu> Message-ID:

On Tue, Aug 21, 2012 at 3:59 PM, Christoph Gohlke wrote:
> There are additional test failures in scipy, statsmodels, bottleneck,
> skimage, vigra, and mahotas. I did not check in detail or against
> existing tickets (http://projects.scipy.org/ is timing out or responding
> with HTTP 500 status).

Thanks Christoph,

All the statsmodels errors (14) look like http://projects.scipy.org/numpy/ticket/2187 -- the recarray/structured dtype view.

Josef
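A minimal sketch of the idiom these reports point at (hypothetical field names; on an affected 1.7 beta this view fails instead of returning a 2-D array):

    import numpy as np

    # A structured array whose fields all share one type...
    ra = np.zeros(4, dtype=[('a', float), ('b', float), ('c', float)])

    # ...reinterpreted as a plain 2-D array, one column per field.
    plain = ra.view((float, 3))
    print(plain.shape)   # (4, 3) where the view works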
From ralf.gommers at gmail.com Wed Aug 22 04:10:30 2012 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 22 Aug 2012 10:10:30 +0200 Subject: [Numpy-discussion] ANN: NumPy 1.7.0b1 release In-Reply-To: <5033E886.2050400@uci.edu> References: <5033E886.2050400@uci.edu> Message-ID:

On Tue, Aug 21, 2012 at 9:59 PM, Christoph Gohlke wrote:
> Besides some numpy 1.7.x test errors due to RuntimeWarning and
> DeprecationWarning,

Flipped the release switch on those in commit ea23de8, errors should be gone now.

> there are hundreds of "RuntimeWarning (numpy.dtype
> size changed, may indicate binary incompatibility)" when loading Cython
> extensions.

This looks really bad. The last discussion I could find on this topic on the Cython list:
http://grokbase.com/t/python/cython-devel/121ygkvfhc/cython-upcoming-issues-with-numpy-deprecated-apis-and-cythons-sizeof-checks

At this point I think it would be a good idea to install a warning filter for this on import of numpy and leave it in place, and also in the test runner. Even if Cython fixed it now, that wouldn't help for the binaries of existing releases of projects that depend on numpy.

Ralf
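A minimal sketch of the kind of filter being proposed (the message pattern is an assumption based on the warning text quoted above):

    import warnings

    # Silence the spurious ABI-size warnings raised when importing Cython
    # extensions that were compiled against an older numpy. The filter has
    # to be installed before those extension modules are imported.
    warnings.filterwarnings('ignore', message='numpy.dtype size changed')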
From ralf.gommers at gmail.com Wed Aug 22 04:20:09 2012 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 22 Aug 2012 10:20:09 +0200 Subject: [Numpy-discussion] ANN: NumPy 1.7.0b1 release In-Reply-To: <5033E886.2050400@uci.edu> References: <5033E886.2050400@uci.edu> Message-ID:

On Tue, Aug 21, 2012 at 9:59 PM, Christoph Gohlke wrote:
> There are additional test failures in scipy, statsmodels, bottleneck,
> skimage, vigra, and mahotas.
[...]

None of the scipy ones are serious or due to changes in numpy, as far as I can tell. The one thing to do in numpy is to silence this warning:

    DeprecationWarning: The compiler package is deprecated and removed in Python 3.x.

Thanks for all those tests Christoph, extremely useful!

Ralf

From ralf.gommers at gmail.com Wed Aug 22 04:59:42 2012 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 22 Aug 2012 10:59:42 +0200 Subject: [Numpy-discussion] 64bit infrastructure In-Reply-To: <501415CA-C4CE-413A-828E-1D1472E03959@continuum.io> References: <501415CA-C4CE-413A-828E-1D1472E03959@continuum.io> Message-ID:

On Tue, Aug 21, 2012 at 12:51 AM, Travis Oliphant wrote:
> I'm actually not sure why. I think the issue is making sure that the
> release manager can actually "build" NumPy without having to buy a
> particular compiler.

That would help, yes. MS Express doesn't work under Wine last time I checked, by the way.

However, the issue is more than just one license. There's a large number of packages that depend on numpy and provide binaries. If they can't make those compatible with numpy ones, that's a problem. Users will first install numpy 64-bit, and then later find out that part of the scientific Python stack isn't available to them anymore.

> But, I would rather have official builds of NumPy for all platforms with a
> compiler paid for by a NumPy sponsor than not have them.

I would only want to have those if I can have the whole stack. If that's possible, then (if Christoph is okay with it) why not upload Christoph's binaries also to SF and call them "official"? I don't see the point in duplicating his efforts.

Ralf
From cournape at gmail.com Wed Aug 22 06:40:50 2012 From: cournape at gmail.com (David Cournapeau) Date: Wed, 22 Aug 2012 11:40:50 +0100 Subject: [Numpy-discussion] 64bit infrastructure In-Reply-To: References: <501415CA-C4CE-413A-828E-1D1472E03959@continuum.io> Message-ID:

On Tue, Aug 21, 2012 at 12:15 AM, Chris Barker wrote:
> The MS Express editions, while not open source, are free to use, and
> work fine.
>
> Not sure what to do about Fortran, though, but that's a scipy, not a
> numpy, issue, yes?

Fortran is the issue. Having one or two licenses of, say, the Intel Fortran compiler is not enough, because it makes it difficult for people to build on top of scipy.

David

From numpy-discussion at maubp.freeserve.co.uk Wed Aug 22 07:17:36 2012 From: numpy-discussion at maubp.freeserve.co.uk (Peter) Date: Wed, 22 Aug 2012 12:17:36 +0100 Subject: [Numpy-discussion] 64bit infrastructure In-Reply-To: References: <501415CA-C4CE-413A-828E-1D1472E03959@continuum.io> Message-ID:

On Wed, Aug 22, 2012 at 11:40 AM, David Cournapeau wrote:
> Fortran is the issue. Having one or two licenses of, say, the Intel
> Fortran compiler is not enough, because it makes it difficult for
> people to build on top of scipy.

For those users/developers/packages using NumPy but not SciPy, does this matter? Having just official NumPy 64-bit Windows packages would still be very welcome.

Is the problem that whatever route NumPy goes down will have potential implications/restrictions for how SciPy could proceed?

Peter

From cournape at gmail.com Wed Aug 22 07:47:52 2012 From: cournape at gmail.com (David Cournapeau) Date: Wed, 22 Aug 2012 12:47:52 +0100 Subject: [Numpy-discussion] 64bit infrastructure In-Reply-To: References: <501415CA-C4CE-413A-828E-1D1472E03959@continuum.io> Message-ID:

On Wed, Aug 22, 2012 at 12:17 PM, Peter wrote:
> Is the problem that whatever route NumPy goes down will have
> potential implications/restrictions for how SciPy could proceed?
Yes.

David

From charlesr.harris at gmail.com Wed Aug 22 09:26:48 2012 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 22 Aug 2012 07:26:48 -0600 Subject: [Numpy-discussion] ANN: NumPy 1.7.0b1 release In-Reply-To: <5033E886.2050400@uci.edu> References: <5033E886.2050400@uci.edu> Message-ID:

On Tue, Aug 21, 2012 at 1:59 PM, Christoph Gohlke wrote:
> I tested a win-amd64-py2.7\msvc9\MKL build of the numpy
> maintenance/1.7.x branch against a number of package binaries from
> <http://www.lfd.uci.edu/~gohlke/pythonlibs/>.
>
> The test results are at
> <http://www.lfd.uci.edu/~gohlke/pythonlibs/tests/20120821-win-amd64-py2.7-numpy-MKL-1.7.0rc1.dev-28ffac7/>.

What version of Numpy were the test packages compiled against?

Chuck
From njs at pobox.com Wed Aug 22 10:14:30 2012 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 22 Aug 2012 15:14:30 +0100 Subject: [Numpy-discussion] 64bit infrastructure In-Reply-To: References: <501415CA-C4CE-413A-828E-1D1472E03959@continuum.io> Message-ID:

On Wed, Aug 22, 2012 at 11:40 AM, David Cournapeau wrote:
> Fortran is the issue. Having one or two licenses of, say, the Intel
> Fortran compiler is not enough, because it makes it difficult for
> people to build on top of scipy.

There seems to be a 64-bit version of mingw -- I assume this runs into the same issue that numpy has with new versions of 32-bit mingw? Would it work if that issue were fixed?

-n

From aronne.merrelli at gmail.com Wed Aug 22 10:17:46 2012 From: aronne.merrelli at gmail.com (Aronne Merrelli) Date: Wed, 22 Aug 2012 09:17:46 -0500 Subject: [Numpy-discussion] __array_wrap__ and dot Message-ID:

Hello list,

I posted an issue many months ago [1] about confusing __array_wrap__ behavior in a subclass of np.matrix. Since that time I wasn't really using the code that used the matrix subclass, but lately I have started using the code again, so I've run into the issue again.

Looking at it in a little more detail, I think the root of the problem is how __array_wrap__ interacts with the dot functions. Operations like M*2 will be processed differently depending on whether M is an ndarray or matrix subclass; it appears that a matrix will always call np.dot for that operation, while an ndarray will go through np.multiply. Since dot acts differently than multiply here, you get different results between the two subclasses. I updated the gist to show the problem [2]. The output I get on my system is pasted at the bottom.

It looks like __array_wrap__ gets called sometimes, depending on the type of one of the arguments. I'm not seeing any logical pattern to it, which makes it confusing. My expectation (perhaps incorrect) is that __array_wrap__ should be invoked any time dot is called. I guess that depends on whether dot should be considered a ufunc - I don't know the answer to that question.

If someone with more knowledge of the internals could take a look at this, that would be great - I think this is a bug, but I'm not really sure what the expected behavior is here.

Thanks,
Aronne

[1] http://mail.scipy.org/pipermail/numpy-discussion/2011-December/059665.html
[2] https://gist.github.com/1511354

Output from run_test (see the gist code) using ipython:

In [1]: import matrix_array_wrap_test as mtest; import numpy as np

In [2]: np.__version__
Out[2]: '1.6.1'

In [3]: np.dot
Out[3]: <built-in function dot>

In [4]: mtest.run_test()
    aw called? | creation                  | use
    Yes        | o=ArrSubClass(2)          | o2 = o + 2
    Yes        | o=ArrSubClass(2)          | o2 = o + 2.0
    Yes        | o=ArrSubClass(2)          | o2 = o * 2
    Yes        | o=ArrSubClass(2)          | o2 = o * 2.0
    Yes        | o=ArrSubClass(2)          | o2 = o * o.T
    No         | o=ArrSubClass(2)          | o2 = np.dot(o,2)
    No         | o=ArrSubClass(2)          | o2 = np.dot(o,2.0)
    No         | o=ArrSubClass(2)          | o2 = np.dot(o,o.T)
    Yes        | o=ArrSubClass(2)          | o2 = np.core.multiarray.dot(o,2)
    Yes        | o=ArrSubClass(2)          | o2 = np.core.multiarray.dot(o,2.0)
    No         | o=ArrSubClass(2)          | o2 = np.core.multiarray.dot(o,o.T)
    Yes        | o=MatSubClass([1, 1])     | o2 = o + 2
    Yes        | o=MatSubClass([1, 1])     | o2 = o + 2.0
    Yes        | o=MatSubClass([1, 1])     | o2 = o * 2
    No         | o=MatSubClass([1, 1])     | o2 = o * 2.0
    No         | o=MatSubClass([1, 1])     | o2 = o * o.T
    Yes        | o=MatSubClass([1, 1])     | o2 = np.dot(o,2)
    No         | o=MatSubClass([1, 1])     | o2 = np.dot(o,2.0)
    No         | o=MatSubClass([1, 1])     | o2 = np.dot(o,o.T)
    Yes        | o=MatSubClass([1, 1])     | o2 = np.core.multiarray.dot(o,2)
    Yes        | o=MatSubClass([1, 1])     | o2 = np.core.multiarray.dot(o,2.0)
    No         | o=MatSubClass([1, 1])     | o2 = np.core.multiarray.dot(o,o.T)
    Yes        | o=MatSubClass([1.0, 1.0]) | o2 = o + 2
    Yes        | o=MatSubClass([1.0, 1.0]) | o2 = o + 2.0
    No         | o=MatSubClass([1.0, 1.0]) | o2 = o * 2
    No         | o=MatSubClass([1.0, 1.0]) | o2 = o * 2.0
    No         | o=MatSubClass([1.0, 1.0]) | o2 = o * o.T
    No         | o=MatSubClass([1.0, 1.0]) | o2 = np.dot(o,2)
    No         | o=MatSubClass([1.0, 1.0]) | o2 = np.dot(o,2.0)
    No         | o=MatSubClass([1.0, 1.0]) | o2 = np.dot(o,o.T)
    Yes        | o=MatSubClass([1.0, 1.0]) | o2 = np.core.multiarray.dot(o,2)
    Yes        | o=MatSubClass([1.0, 1.0]) | o2 = np.core.multiarray.dot(o,2.0)
    No         | o=MatSubClass([1.0, 1.0]) | o2 = np.core.multiarray.dot(o,o.T)
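For readers unfamiliar with the hook, a minimal sketch of an ndarray subclass that reports __array_wrap__ calls (illustrative names, not the gist's code):

    import numpy as np

    class TrackedArray(np.ndarray):
        """Minimal ndarray subclass that reports __array_wrap__ invocations."""
        def __new__(cls, data):
            return np.asarray(data).view(cls)

        def __array_wrap__(self, out_arr, context=None):
            print('__array_wrap__ called')
            return np.ndarray.__array_wrap__(self, out_arr, context)

    a = TrackedArray([1.0, 2.0])
    b = a + 2          # goes through a ufunc (np.add): the hook is called
    c = np.dot(a, a)   # dot is not a ufunc, so the hook may be bypassed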
From cournape at gmail.com Wed Aug 22 10:22:52 2012 From: cournape at gmail.com (David Cournapeau) Date: Wed, 22 Aug 2012 15:22:52 +0100 Subject: [Numpy-discussion] 64bit infrastructure In-Reply-To: References: <501415CA-C4CE-413A-828E-1D1472E03959@continuum.io> Message-ID:

On Wed, Aug 22, 2012 at 3:14 PM, Nathaniel Smith wrote:
> There seems to be a 64-bit version of mingw -- I assume this runs into
> the same issue that numpy has with new versions of 32-bit mingw? Would
> it work if that issue were fixed?

It does have the same issue IIRC, but there was also the issue that scipy built with mingw-w64 did not work at all at that time. Could be different today, I have not looked into it for quite a while now.

David

From travis at continuum.io Wed Aug 22 10:25:09 2012 From: travis at continuum.io (Travis Oliphant) Date: Wed, 22 Aug 2012 09:25:09 -0500 Subject: [Numpy-discussion] 64bit infrastructure In-Reply-To: References: <501415CA-C4CE-413A-828E-1D1472E03959@continuum.io> Message-ID:

On Aug 22, 2012, at 3:59 AM, Ralf Gommers wrote:
> However, the issue is more than just one license. There's a large number
> of packages that depend on numpy and provide binaries. If they can't make
> those compatible with numpy ones, that's a problem. Users will first
> install numpy 64-bit, and then later find out that part of the scientific
> Python stack isn't available to them anymore.

As far as I understand, you don't *have* to build all downstream dependencies with the same compiler that NumPy was built with, unless your extension relies on the way C functions pass structures on the stack (not pointers to them, but structures as a whole) or on the representation of FILE*. At one time all structures were passed as pointers specifically for this reason. The FILE* situation is a problem, but most extensions don't use NumPy C-API calls that have a FILE* argument.

> I would only want to have those if I can have the whole stack. If that's
> possible, then (if Christoph is okay with it) why not upload Christoph's
> binaries also to SF and call them "official"? I don't see the point in
> duplicating his efforts.

Yes, I agree. I would really like there to be several build machines, maintained by the community (NumFOCUS could sponsor this), which can be used to build Windows binaries from a build specification. I think bento is a good tool. We could accelerate its use by offering machines that automatically build binaries for packages with bento scripts. If someone is interested in doing a project like this for the community, let me know, as there are funds available to sponsor it --- just the lack of someone to see it through.

-Travis

From cournape at gmail.com Wed Aug 22 10:28:38 2012 From: cournape at gmail.com (David Cournapeau) Date: Wed, 22 Aug 2012 15:28:38 +0100 Subject: [Numpy-discussion] 64bit infrastructure In-Reply-To: References: <501415CA-C4CE-413A-828E-1D1472E03959@continuum.io> Message-ID:

On Wed, Aug 22, 2012 at 3:25 PM, Travis Oliphant wrote:
> As far as I understand, you don't *have* to build all downstream
> dependencies with the same compiler that NumPy was built with, unless
> your extension relies on the way C functions pass structures on the
> stack (not pointers to them, but structures as a whole) or on the
> representation of FILE*. At one time all structures were passed as
> pointers specifically for this reason.
> The FILE* situation is a problem, but most extensions don't use NumPy
> C-API calls that have a FILE* argument.

It is much more pervasive than that, unfortunately. And for fortran, it is much worse, because if we build scipy or numpy with Intel Fortran, I think we pretty much force everyone to use Intel Fortran for *any* binary on top of them.

David

From travis at continuum.io Wed Aug 22 10:35:29 2012 From: travis at continuum.io (Travis Oliphant) Date: Wed, 22 Aug 2012 09:35:29 -0500 Subject: [Numpy-discussion] 64bit infrastructure In-Reply-To: References: <501415CA-C4CE-413A-828E-1D1472E03959@continuum.io> Message-ID: <69B063D5-16C0-43EC-9E0B-BC295545373C@continuum.io>

On Aug 22, 2012, at 9:28 AM, David Cournapeau wrote:
> It is much more pervasive than that, unfortunately. And for fortran,
> it is much worse, because if we build scipy or numpy with Intel
> Fortran, I think we pretty much force everyone to use Intel Fortran
> for *any* binary on top of them.

Can you be more specific? Does the calling convention for C routines created with Intel Fortran differ so much?

-Travis

From cournape at gmail.com Wed Aug 22 11:06:13 2012 From: cournape at gmail.com (David Cournapeau) Date: Wed, 22 Aug 2012 16:06:13 +0100 Subject: [Numpy-discussion] 64bit infrastructure In-Reply-To: <69B063D5-16C0-43EC-9E0B-BC295545373C@continuum.io> References: <501415CA-C4CE-413A-828E-1D1472E03959@continuum.io> <69B063D5-16C0-43EC-9E0B-BC295545373C@continuum.io> Message-ID:

On Wed, Aug 22, 2012 at 3:35 PM, Travis Oliphant wrote:
> Can you be more specific? Does the calling convention for C routines
> created with Intel Fortran differ so much?
MS Express doesn't work under Wine last time I checked >>> by the way. >>> >>> However, the issue is more than just one license. There's a large number of >>> packages that depend on numpy and provide binaries. If they can't make those >>> compatible with numpy ones, that's a problem. Users will first install numpy >>> 64-bit, and then later find out that part of the scientific Python stack >>> isn't available to them anymore. >>> >>> >>> >>> As far as I understand, you don't *have* to build all downstream >>> dependencies with the same compiler that NumPy was built with unless your >>> extension relies on the way C-functions pass structures on the stack (not >>> pointers to them, but structures as a whole) or if it relies on the >>> representation of FILE*. At one time all structures were passed as >>> pointers specifically for this reason. The FILE* situation is a problem, >>> but most extensions don't use NumPy C-API calls that have a FILE* argument. >> >> It is much more pervasive than that, unfortunately. And for fortran, >> it is much worse, because if we build scipy or numpy with Intel >> Fortran, I think we pretty much force everyone to use intel fortran >> for *any* binary on top of them. > > Can you be more specific? Does the calling convention for C-routines created with Intel Fortran differ so much? If we were to use intel, it would be with MS compilers, and I have never been able to link a gfortran program with visual studio. I will try to take a look at it again during euroscipy, David From ondrej.certik at gmail.com Wed Aug 22 11:12:29 2012 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Wed, 22 Aug 2012 08:12:29 -0700 Subject: [Numpy-discussion] 64bit infrastructure In-Reply-To: <69B063D5-16C0-43EC-9E0B-BC295545373C@continuum.io> References: <501415CA-C4CE-413A-828E-1D1472E03959@continuum.io> <69B063D5-16C0-43EC-9E0B-BC295545373C@continuum.io> Message-ID: On Wed, Aug 22, 2012 at 7:35 AM, Travis Oliphant wrote: > On Aug 22, 2012, at 9:28 AM, David Cournapeau wrote: > >> On Wed, Aug 22, 2012 at 3:25 PM, Travis Oliphant wrote: >>> >>> On Aug 22, 2012, at 3:59 AM, Ralf Gommers wrote: >>> >>> >>> >>> On Tue, Aug 21, 2012 at 12:51 AM, Travis Oliphant >>> wrote: >>>> >>>> I'm actually not sure, why. I think the issue is making sure that the >>>> release manager can actually "build" NumPy without having to buy a >>>> particular compiler. >>> >>> >>> That would help, yes. MS Express doesn't work under Wine last time I checked >>> by the way. >>> >>> However, the issue is more than just one license. There's a large number of >>> packages that depend on numpy and provide binaries. If they can't make those >>> compatible with numpy ones, that's a problem. Users will first install numpy >>> 64-bit, and then later find out that part of the scientific Python stack >>> isn't available to them anymore. >>> >>> >>> >>> As far as I understand, you don't *have* to build all downstream >>> dependencies with the same compiler that NumPy was built with unless your >>> extension relies on the way C-functions pass structures on the stack (not >>> pointers to them, but structures as a whole) or if it relies on the >>> representation of FILE*. At one time all structures were passed as >>> pointers specifically for this reason. The FILE* situation is a problem, >>> but most extensions don't use NumPy C-API calls that have a FILE* argument. >> >> It is much more pervasive than that, unfortunately. 
I have the same question as Travis. If you are interested in the ABI question for Fortran, I have created this FAQ:

http://fortran90.org/src/faq.html#are-fortran-compilers-abi-compatible

Since NumPy only calls the Fortran routines, but does not expose them, the only issue is how to build NumPy with (let's say) Intel Fortran. That's a separate issue. Once NumPy is built, nobody cares, because they only need to interface the C routines, if anything at all.

As far as the Fortran runtime library goes (which of course is different for Intel and gfortran), I am currently not sure whether it is possible to mix them, but I think you probably can, if the numpy .so is using Intel and my own .so is using gfortran.

Finally, if NumPy is built using MSVC, does this force everybody to use the same C compiler? I thought that C compilers are ABI compatible; at least Intel C and gfortran C are ABI compatible. Is MSVC different?

Btw, to correctly call Fortran from C, one should always use the iso_c_binding module, as explained here:

http://fortran90.org/src/best-practices.html#interfacing-with-c

Then the Fortran code becomes just like any other C library.

Ondrej

From cgohlke at uci.edu Wed Aug 22 11:42:09 2012 From: cgohlke at uci.edu (Christoph Gohlke) Date: Wed, 22 Aug 2012 08:42:09 -0700 Subject: [Numpy-discussion] ANN: NumPy 1.7.0b1 release In-Reply-To: References: <5033E886.2050400@uci.edu> Message-ID: <5034FDD1.1090708@uci.edu>

On 8/22/2012 6:26 AM, Charles R Harris wrote:
> What version of Numpy were the test packages compiled against?

numpy-MKL-1.6.x

Christoph
> > Besides some numpy 1.7.x test errors due to RuntimeWarning and > DeprecationWarning, there are hundreds of "RuntimeWarning (numpy.dtype > size changed, may indicate binary incompatibility)" when loading Cython > extensions. > > There are additional test failures in scipy, statsmodels, bottleneck, > skimage, vigra, and mahotas. I did not check in detail or with existing > tickets (http://projects.scipy.org/ is timing out or responding with > HTTP 500 status). > > Other packages test OK against numpy 1.7.x, e.g. PIL, PyGame, > matplotlib, Pandas, tables, and numexpr. > > > Chuck > From cournape at gmail.com Wed Aug 22 11:50:43 2012 From: cournape at gmail.com (David Cournapeau) Date: Wed, 22 Aug 2012 16:50:43 +0100 Subject: [Numpy-discussion] 64bit infrastructure In-Reply-To: References: <501415CA-C4CE-413A-828E-1D1472E03959@continuum.io> <69B063D5-16C0-43EC-9E0B-BC295545373C@continuum.io> Message-ID: On Wed, Aug 22, 2012 at 4:12 PM, Ond?ej ?ert?k wrote: > On Wed, Aug 22, 2012 at 7:35 AM, Travis Oliphant wrote: >> On Aug 22, 2012, at 9:28 AM, David Cournapeau wrote: >> >>> On Wed, Aug 22, 2012 at 3:25 PM, Travis Oliphant wrote: >>>> >>>> On Aug 22, 2012, at 3:59 AM, Ralf Gommers wrote: >>>> >>>> >>>> >>>> On Tue, Aug 21, 2012 at 12:51 AM, Travis Oliphant >>>> wrote: >>>>> >>>>> I'm actually not sure, why. I think the issue is making sure that the >>>>> release manager can actually "build" NumPy without having to buy a >>>>> particular compiler. >>>> >>>> >>>> That would help, yes. MS Express doesn't work under Wine last time I checked >>>> by the way. >>>> >>>> However, the issue is more than just one license. There's a large number of >>>> packages that depend on numpy and provide binaries. If they can't make those >>>> compatible with numpy ones, that's a problem. Users will first install numpy >>>> 64-bit, and then later find out that part of the scientific Python stack >>>> isn't available to them anymore. >>>> >>>> >>>> >>>> As far as I understand, you don't *have* to build all downstream >>>> dependencies with the same compiler that NumPy was built with unless your >>>> extension relies on the way C-functions pass structures on the stack (not >>>> pointers to them, but structures as a whole) or if it relies on the >>>> representation of FILE*. At one time all structures were passed as >>>> pointers specifically for this reason. The FILE* situation is a problem, >>>> but most extensions don't use NumPy C-API calls that have a FILE* argument. >>> >>> It is much more pervasive than that, unfortunately. And for fortran, >>> it is much worse, because if we build scipy or numpy with Intel >>> Fortran, I think we pretty much force everyone to use intel fortran >>> for *any* binary on top of them. >> >> Can you be more specific? Does the calling convention for C-routines created with Intel Fortran differ so much? > > > I have the same question as Travis. If you are interested about ABI > for Fortran, I have created this FAQ: > > http://fortran90.org/src/faq.html#are-fortran-compilers-abi-compatible > > Since NumPy only calls the Fortran routines, but does not expose them, > then the only issue is how to build NumPy with (let's say) Intel > Fortran. That's a separate issue. > Once NumPy is built, then nobody cares, because they only need to > interface the C routines, if anything at all. 
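To make the quoted point concrete: below is a minimal ctypes sketch of calling a bind(c) Fortran routine from Python. The routine, its interface, and the library name are all hypothetical -- the only point is that a bind(c) routine looks like any ordinary C symbol to the caller, whichever Fortran compiler produced it.

import ctypes
import numpy as np

# Hypothetical Fortran routine, compiled into libdemo.so, with an
# interface along the lines of:
#   subroutine dscal_c(n, a, x) bind(c, name="dscal_c")
#     integer(c_int), value :: n
#     real(c_double), value :: a
#     real(c_double) :: x(n)
lib = ctypes.CDLL("./libdemo.so")
x = np.arange(5.0)                      # contiguous float64 array
lib.dscal_c(ctypes.c_int(x.size),
            ctypes.c_double(2.0),
            x.ctypes.data_as(ctypes.POINTER(ctypes.c_double)))
print(x)    # [0. 2. 4. 6. 8.] if dscal_c scales x by a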
> > As far as Fortran runtime library goes (which of course is different > for Intel and gfortran), I am currently not sure whether it is > possible to mix them, but I think you probably can, if numpy .so is > using Intel, and my own .so is using gfortran. > > > Finally, if NumPy is build using MSVC, does this force everybody to > use the same C compiler? I thought that C compilers are ABI > compatible, at least Intel C and gfortran C are ABI compatible. Is > MSVC different? > > Btw, to correctly call Fortran from C, one should always be using the > iso_c_binding module, as explained here: > > http://fortran90.org/src/best-practices.html#interfacing-with-c > > Then the Fortran code becomes just like any other C library. It is unfortunately more complicated than that. 1 regarding fortran runtimes: I have never been able to link a gfortran object file with Visual Studio linker (link.exe). 2 mixing runtimes is never a good idea, because it becomes difficult to avoid passing a pointer from one runtime to the other. Intel fortran compiler obviously knows how to deal with the C runtime of Visual Studio, but gfortran doesn't. 3 gcc and visual studio are ABI compatible in a (very) restricted sense: they share the same calling convention at least in C, but that's pretty much it. Because having multiple copies of a runtime is so common on windows, you cannot easily pass objects between dlls. Travis mentioned FILE*, but that's also true for pointers returned by malloc, file descriptors, etc... See this for example: http://stackoverflow.com/questions/1052491/c-runtime-objects-dll-boundaries Because of 1, if we have a binary with intel + visual studio, we are effectively forcing everyone on windows to use intel fortran compilers. I would rather have the official binaries using open source toolchains. cheers, David From orion at cora.nwra.com Wed Aug 22 11:55:59 2012 From: orion at cora.nwra.com (Orion Poplawski) Date: Wed, 22 Aug 2012 09:55:59 -0600 Subject: [Numpy-discussion] ANN: NumPy 1.7.0b1 release In-Reply-To: References: Message-ID: <5035010F.9090507@cora.nwra.com> On 08/21/2012 10:24 AM, Ond?ej ?ert?k wrote: > Hi, > > I'm pleased to announce the availability of the first beta release of > NumPy 1.7.0b1. Currently in trying to support python 3.3 in Fedora Rawhide (F19) and Fedora 18 we are doing: # Regenerate Cython c sources # This is needed with numpy-1.6.2.tar.gz with python 3.3 to avoid an exception # with an import call in the generated .c file in the tarball that uses the # old default of -1: # File "mtrand.pyx", line 126, in init mtrand (numpy/random/mtrand/mtrand.c:20679) # ValueError: level must be >= 0 # due to the changes in import in 3.3 # Regenerating with a newer Cython fixes it: pushd numpy/random/mtrand/ rm -v mtrand.c cython mtrand.pyx popd However with 1.7.0b1 and Cython 0.16 we get: + cython mtrand.pyx Error compiling Cython file: ------------------------------------------------------------ ... PyArray_DIMS(oa) , NPY_DOUBLE) length = PyArray_SIZE(array) array_data = PyArray_DATA(array) itera = PyArray_IterNew(oa) for i from 0 <= i < length: array_data[i] = func(state, ((itera.dataptr))[0]) ^ ------------------------------------------------------------ mtrand.pyx:177:41: Python objects cannot be cast to pointers of primitive types Error compiling Cython file: ------------------------------------------------------------ ... 
cdef npy_intp i cdef broadcast multi if size is None: multi = PyArray_MultiIterNew(2, oa, ob) array = PyArray_SimpleNew(multi.nd, multi.dimensions, NPY_DOUBLE) ^ ------------------------------------------------------------ mtrand.pyx:222:59: Cannot convert Python object to 'npy_intp *' Error compiling Cython file: ------------------------------------------------------------ ... cdef npy_intp i cdef broadcast multi if size is None: multi = PyArray_MultiIterNew(3, oa, ob, oc) array = PyArray_SimpleNew(multi.nd, multi.dimensions, NPY_DOUBLE) ^ ------------------------------------------------------------ mtrand.pyx:275:59: Cannot convert Python object to 'npy_intp *' Error compiling Cython file: ------------------------------------------------------------ ... cdef long *on_data cdef broadcast multi if size is None: multi = PyArray_MultiIterNew(2, on, op) array = PyArray_SimpleNew(multi.nd, multi.dimensions, NPY_LONG) ^ ------------------------------------------------------------ mtrand.pyx:341:59: Cannot convert Python object to 'npy_intp *' Error compiling Cython file: ------------------------------------------------------------ ... cdef double *on_data cdef broadcast multi if size is None: multi = PyArray_MultiIterNew(2, on, op) array = PyArray_SimpleNew(multi.nd, multi.dimensions, NPY_LONG) ^ ------------------------------------------------------------ mtrand.pyx:390:59: Cannot convert Python object to 'npy_intp *' Error compiling Cython file: ------------------------------------------------------------ ... cdef npy_intp i cdef broadcast multi if size is None: multi = PyArray_MultiIterNew(3, on, om, oN) array = PyArray_SimpleNew(multi.nd, multi.dimensions, NPY_LONG) ^ ------------------------------------------------------------ mtrand.pyx:442:59: Cannot convert Python object to 'npy_intp *' Error compiling Cython file: ------------------------------------------------------------ ... PyArray_DIMS(oa), NPY_LONG) length = PyArray_SIZE(array) array_data = PyArray_DATA(array) itera = PyArray_IterNew(oa) for i from 0 <= i < length: array_data[i] = func(state, ((itera.dataptr))[0]) ^ ------------------------------------------------------------ mtrand.pyx:498:41: Python objects cannot be cast to pointers of primitive types Error compiling Cython file: ------------------------------------------------------------ ... flow = PyFloat_AsDouble(low) fhigh = PyFloat_AsDouble(high) if not PyErr_Occurred(): return cont2_array_sc(self.internal_state, rk_uniform, size, flow, fhigh-flow) PyErr_Clear() olow = PyArray_FROM_OTF(low, NPY_DOUBLE, NPY_ARRAY_ALIGNED) ^ ------------------------------------------------------------ mtrand.pyx:1140:75: undeclared name not builtin: NPY_ARRAY_ALIGNED I'm afraid I know nothing about Cython. Looks like there were a lot of changes between 1.6.2 and 1.7.0b1 in mtrand.pyx. I'll try not rebuilding and see if we see the original problem. 
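For what it's worth, once a rebuilt mtrand imports at all, a minimal smoke test along these lines (using only the public numpy API, nothing Fedora-specific) exercises the code paths that the Cython errors above point at:

import numpy as np

print(np.__version__)
np.random.seed(12345)
# uniform() runs through rk_uniform in mtrand (one of the failing spots above)
print(np.random.uniform(0.0, 1.0, size=3))
# binomial() with an array argument hits one of the PyArray_MultiIterNew
# broadcast paths that failed to compile above
print(np.random.binomial(np.array([10, 20, 30]), 0.5))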
-- Orion Poplawski Technical Manager 303-415-9701 x222 NWRA, Boulder Office FAX: 303-415-9702 3380 Mitchell Lane orion at nwra.com Boulder, CO 80301 http://www.nwra.com From orion at cora.nwra.com Wed Aug 22 12:26:24 2012 From: orion at cora.nwra.com (Orion Poplawski) Date: Wed, 22 Aug 2012 10:26:24 -0600 Subject: [Numpy-discussion] ANN: NumPy 1.7.0b1 release In-Reply-To: <5035010F.9090507@cora.nwra.com> References: <5035010F.9090507@cora.nwra.com> Message-ID: <50350830.50909@cora.nwra.com> On 08/22/2012 09:55 AM, Orion Poplawski wrote: > On 08/21/2012 10:24 AM, Ond?ej ?ert?k wrote: >> Hi, >> >> I'm pleased to announce the availability of the first beta release of >> NumPy 1.7.0b1. > > Currently in trying to support python 3.3 in Fedora Rawhide (F19) and Fedora > 18 we are doing: > > # Regenerate Cython c sources > # This is needed with numpy-1.6.2.tar.gz with python 3.3 to avoid an exception > # with an import call in the generated .c file in the tarball that uses the > # old default of -1: > # File "mtrand.pyx", line 126, in init mtrand > (numpy/random/mtrand/mtrand.c:20679) > # ValueError: level must be >= 0 > # due to the changes in import in 3.3 > # Regenerating with a newer Cython fixes it: > pushd numpy/random/mtrand/ > rm -v mtrand.c > cython mtrand.pyx > popd > If I drop the cython generation it builds, but the python 3 test failure I get now is: + /usr/bin/python3 -c 'import pkg_resources, numpy ; numpy.test()' /usr/lib/python3.3/site-packages/nose/core.py:247: ResourceWarning: unclosed file <_io.TextIOWrapper name='/usr/lib/python3.3/site-packages/nose/usage.txt' mode='r' encoding='ANSI_X3.4-1968'> os.path.dirname(__file__), 'usage.txt'), 'r').read() ........................S....................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................! .......... 
[long run of nose progress dots elided] /usr/lib64/python3.3/zipfile.py:1513: ResourceWarning: unclosed file <_io.BufferedReader name='/tmp/tmpemcede.npz'> self.fp = None [progress dots continue]
.....................................................................................................................................................................................................................................K.................................................... ====================================================================== ERROR: Ticket #16 ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILDROOT/numpy-1.7.0-0.2.b1.fc19.x86_64/usr/lib64/python3.3/site-packages/numpy/core/tests/test_regression.py", line 41, in test_pickle_transposed b = pickle.load(f) EOFError ====================================================================== ERROR: Failure: ValueError (can't handle version 187 of numpy.ndarray pickle) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python3.3/site-packages/nose/failure.py", line 37, in runTest raise self.exc_class(self.exc_val).with_traceback(self.tb) File "/usr/lib/python3.3/site-packages/nose/loader.py", line 232, in generate for test in g(): File "/builddir/build/BUILDROOT/numpy-1.7.0-0.2.b1.fc19.x86_64/usr/lib64/python3.3/site-packages/numpy/lib/tests/test_format.py", line 429, in test_roundtrip arr2 = roundtrip(arr) File "/builddir/build/BUILDROOT/numpy-1.7.0-0.2.b1.fc19.x86_64/usr/lib64/python3.3/site-packages/numpy/lib/tests/test_format.py", line 420, in roundtrip arr2 = format.read_array(f2) File "/builddir/build/BUILDROOT/numpy-1.7.0-0.2.b1.fc19.x86_64/usr/lib64/python3.3/site-packages/numpy/lib/format.py", line 449, in read_array array = pickle.load(fp) ValueError: can't handle version 187 of numpy.ndarray pickle ---------------------------------------------------------------------- Ran 4418 tests in 31.180s FAILED (KNOWNFAIL=6, SKIP=2, errors=2) -- Orion Poplawski Technical Manager 303-415-9701 x222 NWRA, Boulder Office FAX: 303-415-9702 3380 Mitchell Lane orion at nwra.com Boulder, CO 80301 http://www.nwra.com From ondrej.certik at gmail.com Wed Aug 22 12:46:03 2012 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Wed, 22 Aug 2012 09:46:03 -0700 Subject: [Numpy-discussion] 64bit infrastructure In-Reply-To: References: <501415CA-C4CE-413A-828E-1D1472E03959@continuum.io> <69B063D5-16C0-43EC-9E0B-BC295545373C@continuum.io> Message-ID: On Wed, Aug 22, 2012 at 8:50 AM, David Cournapeau wrote: > On Wed, Aug 22, 2012 at 4:12 PM, Ond?ej ?ert?k wrote: >> On Wed, Aug 22, 2012 at 7:35 AM, Travis Oliphant wrote: >>> On Aug 22, 2012, at 9:28 AM, David Cournapeau wrote: >>> >>>> On Wed, Aug 22, 2012 at 3:25 PM, Travis Oliphant wrote: >>>>> >>>>> On Aug 22, 2012, at 3:59 AM, Ralf Gommers wrote: >>>>> >>>>> >>>>> >>>>> On Tue, Aug 21, 2012 at 12:51 AM, Travis Oliphant >>>>> wrote: >>>>>> >>>>>> I'm actually not sure, why. I think the issue is making sure that the >>>>>> release manager can actually "build" NumPy without having to buy a >>>>>> particular compiler. >>>>> >>>>> >>>>> That would help, yes. MS Express doesn't work under Wine last time I checked >>>>> by the way. >>>>> >>>>> However, the issue is more than just one license. There's a large number of >>>>> packages that depend on numpy and provide binaries. If they can't make those >>>>> compatible with numpy ones, that's a problem. Users will first install numpy >>>>> 64-bit, and then later find out that part of the scientific Python stack >>>>> isn't available to them anymore. 
>>>>> >>>>> >>>>> >>>>> As far as I understand, you don't *have* to build all downstream >>>>> dependencies with the same compiler that NumPy was built with unless your >>>>> extension relies on the way C-functions pass structures on the stack (not >>>>> pointers to them, but structures as a whole) or if it relies on the >>>>> representation of FILE*. At one time all structures were passed as >>>>> pointers specifically for this reason. The FILE* situation is a problem, >>>>> but most extensions don't use NumPy C-API calls that have a FILE* argument. >>>> >>>> It is much more pervasive than that, unfortunately. And for fortran, >>>> it is much worse, because if we build scipy or numpy with Intel >>>> Fortran, I think we pretty much force everyone to use intel fortran >>>> for *any* binary on top of them. >>> >>> Can you be more specific? Does the calling convention for C-routines created with Intel Fortran differ so much? >> >> >> I have the same question as Travis. If you are interested about ABI >> for Fortran, I have created this FAQ: >> >> http://fortran90.org/src/faq.html#are-fortran-compilers-abi-compatible >> >> Since NumPy only calls the Fortran routines, but does not expose them, >> then the only issue is how to build NumPy with (let's say) Intel >> Fortran. That's a separate issue. >> Once NumPy is built, then nobody cares, because they only need to >> interface the C routines, if anything at all. >> >> As far as Fortran runtime library goes (which of course is different >> for Intel and gfortran), I am currently not sure whether it is >> possible to mix them, but I think you probably can, if numpy .so is >> using Intel, and my own .so is using gfortran. >> >> >> Finally, if NumPy is build using MSVC, does this force everybody to >> use the same C compiler? I thought that C compilers are ABI >> compatible, at least Intel C and gfortran C are ABI compatible. Is >> MSVC different? >> >> Btw, to correctly call Fortran from C, one should always be using the >> iso_c_binding module, as explained here: >> >> http://fortran90.org/src/best-practices.html#interfacing-with-c >> >> Then the Fortran code becomes just like any other C library. > > It is unfortunately more complicated than that. > > 1 regarding fortran runtimes: I have never been able to link a > gfortran object file with Visual Studio linker (link.exe). You cannot mix the Fortran object .o files between compilers. That will never work, because Fortran compilers are not ABI compatible, see the FAQ. The only way this could work is if you can mix gcc .o file with MSVC linker (I don't know if this is possible or not). If that works, then you should be able to use iso_c_binding in Fortran to produce gcc's compatible .o file and then link it. Which errors were you getting when linking it? Was it a problem with libgfortran runtime (that you of course need to link as well)? This runtime can be a problem. > 2 mixing runtimes is never a good idea, because it becomes difficult > to avoid passing a pointer from one runtime to the other. Intel > fortran compiler obviously knows how to deal with the C runtime of > Visual Studio, but gfortran doesn't. 
By Fortran runtime I mean this library in my Ubuntu: libgfortran.so.3 => /usr/lib/x86_64-linux-gnu/libgfortran.so.3 (0x00007ffaa9ce9000) Here are all the libraries in my typical gfortran program: linux-vdso.so.1 => (0x00007ffffe9ff000) libgfortran.so.3 => /usr/lib/x86_64-linux-gnu/libgfortran.so.3 (0x00007ffaa9ce9000) libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007ffaa99ef000) libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007ffaa97d8000) libquadmath.so.0 => /usr/lib/x86_64-linux-gnu/libquadmath.so.0 (0x00007ffaa95a2000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007ffaa91e5000) /lib64/ld-linux-x86-64.so.2 (0x00007ffaaa022000) What exactly is the C runtime? libc? The only way to make Fortran talk to C, in a robust and supported way, is to use the iso_c_binding, which makes gcc talk to gfortran and Intel Fortran to Intel C/C++. So if gcc can talk to MSVC, so should gfortran. Can you be more specific what does not work (if you know)? That would be a good thing to put in my FAQ. > 3 gcc and visual studio are ABI compatible in a (very) restricted > sense: they share the same calling convention at least in C, but > that's pretty much it. Because having multiple copies of a runtime is > so common on windows, you cannot easily pass objects between dlls. > Travis mentioned FILE*, but that's also true for pointers returned by > malloc, file descriptors, etc... See this for example: > http://stackoverflow.com/questions/1052491/c-runtime-objects-dll-boundaries That seems to be a C issue. So if that can be resolved, I *think* that Fortran should work too. > > Because of 1, if we have a binary with intel + visual studio, we are > effectively forcing everyone on windows to use intel fortran > compilers. I would rather have the official binaries using open source > toolchains. I don't want to force other people either to use Intel Fortran. But I still don't understand why I could not use gfortran and gcc for my program, that links against numpy (that uses let's say Intel Fortran internally). I don't have access to Intel Fortran on windows, but I am available to help with this issue. Ondrej From orion at cora.nwra.com Wed Aug 22 12:59:28 2012 From: orion at cora.nwra.com (Orion Poplawski) Date: Wed, 22 Aug 2012 10:59:28 -0600 Subject: [Numpy-discussion] ANN: NumPy 1.7.0b1 release In-Reply-To: <50350830.50909@cora.nwra.com> References: <5035010F.9090507@cora.nwra.com> <50350830.50909@cora.nwra.com> Message-ID: <50350FF0.7020408@cora.nwra.com> On 08/22/2012 10:26 AM, Orion Poplawski wrote: > On 08/22/2012 09:55 AM, Orion Poplawski wrote: >> On 08/21/2012 10:24 AM, Ond?ej ?ert?k wrote: >>> Hi, >>> >>> I'm pleased to announce the availability of the first beta release of >>> NumPy 1.7.0b1. >> >> Currently in trying to support python 3.3 in Fedora Rawhide (F19) and Fedora >> 18 we are doing: >> >> # Regenerate Cython c sources >> # This is needed with numpy-1.6.2.tar.gz with python 3.3 to avoid an exception >> # with an import call in the generated .c file in the tarball that uses the >> # old default of -1: >> # File "mtrand.pyx", line 126, in init mtrand >> (numpy/random/mtrand/mtrand.c:20679) >> # ValueError: level must be >= 0 >> # due to the changes in import in 3.3 >> # Regenerating with a newer Cython fixes it: >> pushd numpy/random/mtrand/ >> rm -v mtrand.c >> cython mtrand.pyx >> popd >> > > If I drop the cython generation it builds, but the python 3 test failure I get > now is: .. 
> ERROR: Ticket #16 > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/builddir/build/BUILDROOT/numpy-1.7.0-0.2.b1.fc19.x86_64/usr/lib64/python3.3/site-packages/numpy/core/tests/test_regression.py", > line 41, in test_pickle_transposed > b = pickle.load(f) > EOFError > ====================================================================== > ERROR: Failure: ValueError (can't handle version 187 of numpy.ndarray pickle) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python3.3/site-packages/nose/failure.py", line 37, in runTest > raise self.exc_class(self.exc_val).with_traceback(self.tb) > File "/usr/lib/python3.3/site-packages/nose/loader.py", line 232, in generate > for test in g(): > File > "/builddir/build/BUILDROOT/numpy-1.7.0-0.2.b1.fc19.x86_64/usr/lib64/python3.3/site-packages/numpy/lib/tests/test_format.py", > line 429, in test_roundtrip > arr2 = roundtrip(arr) > File > "/builddir/build/BUILDROOT/numpy-1.7.0-0.2.b1.fc19.x86_64/usr/lib64/python3.3/site-packages/numpy/lib/tests/test_format.py", > line 420, in roundtrip > arr2 = format.read_array(f2) > File > "/builddir/build/BUILDROOT/numpy-1.7.0-0.2.b1.fc19.x86_64/usr/lib64/python3.3/site-packages/numpy/lib/format.py", > line 449, in read_array > array = pickle.load(fp) > ValueError: can't handle version 187 of numpy.ndarray pickle I should note that I'm taking numpy/core/src/multiarray/scalarapi.c and numpy/core/src/multiarray/scalartypes.c.src from git master, which I thought had the fix for this. -- Orion Poplawski Technical Manager 303-415-9701 x222 NWRA, Boulder Office FAX: 303-415-9702 3380 Mitchell Lane orion at nwra.com Boulder, CO 80301 http://www.nwra.com From cournape at gmail.com Wed Aug 22 13:55:55 2012 From: cournape at gmail.com (David Cournapeau) Date: Wed, 22 Aug 2012 18:55:55 +0100 Subject: [Numpy-discussion] 64bit infrastructure In-Reply-To: References: <501415CA-C4CE-413A-828E-1D1472E03959@continuum.io> <69B063D5-16C0-43EC-9E0B-BC295545373C@continuum.io> Message-ID: On Wed, Aug 22, 2012 at 5:46 PM, Ond?ej ?ert?k wrote: > On Wed, Aug 22, 2012 at 8:50 AM, David Cournapeau wrote: >> On Wed, Aug 22, 2012 at 4:12 PM, Ond?ej ?ert?k wrote: >>> On Wed, Aug 22, 2012 at 7:35 AM, Travis Oliphant wrote: >>>> On Aug 22, 2012, at 9:28 AM, David Cournapeau wrote: >>>> >>>>> On Wed, Aug 22, 2012 at 3:25 PM, Travis Oliphant wrote: >>>>>> >>>>>> On Aug 22, 2012, at 3:59 AM, Ralf Gommers wrote: >>>>>> >>>>>> >>>>>> >>>>>> On Tue, Aug 21, 2012 at 12:51 AM, Travis Oliphant >>>>>> wrote: >>>>>>> >>>>>>> I'm actually not sure, why. I think the issue is making sure that the >>>>>>> release manager can actually "build" NumPy without having to buy a >>>>>>> particular compiler. >>>>>> >>>>>> >>>>>> That would help, yes. MS Express doesn't work under Wine last time I checked >>>>>> by the way. >>>>>> >>>>>> However, the issue is more than just one license. There's a large number of >>>>>> packages that depend on numpy and provide binaries. If they can't make those >>>>>> compatible with numpy ones, that's a problem. Users will first install numpy >>>>>> 64-bit, and then later find out that part of the scientific Python stack >>>>>> isn't available to them anymore. 
>>>>>> >>>>>> >>>>>> >>>>>> As far as I understand, you don't *have* to build all downstream >>>>>> dependencies with the same compiler that NumPy was built with unless your >>>>>> extension relies on the way C-functions pass structures on the stack (not >>>>>> pointers to them, but structures as a whole) or if it relies on the >>>>>> representation of FILE*. At one time all structures were passed as >>>>>> pointers specifically for this reason. The FILE* situation is a problem, >>>>>> but most extensions don't use NumPy C-API calls that have a FILE* argument. >>>>> >>>>> It is much more pervasive than that, unfortunately. And for fortran, >>>>> it is much worse, because if we build scipy or numpy with Intel >>>>> Fortran, I think we pretty much force everyone to use intel fortran >>>>> for *any* binary on top of them. >>>> >>>> Can you be more specific? Does the calling convention for C-routines created with Intel Fortran differ so much? >>> >>> >>> I have the same question as Travis. If you are interested about ABI >>> for Fortran, I have created this FAQ: >>> >>> http://fortran90.org/src/faq.html#are-fortran-compilers-abi-compatible >>> >>> Since NumPy only calls the Fortran routines, but does not expose them, >>> then the only issue is how to build NumPy with (let's say) Intel >>> Fortran. That's a separate issue. >>> Once NumPy is built, then nobody cares, because they only need to >>> interface the C routines, if anything at all. >>> >>> As far as Fortran runtime library goes (which of course is different >>> for Intel and gfortran), I am currently not sure whether it is >>> possible to mix them, but I think you probably can, if numpy .so is >>> using Intel, and my own .so is using gfortran. >>> >>> >>> Finally, if NumPy is build using MSVC, does this force everybody to >>> use the same C compiler? I thought that C compilers are ABI >>> compatible, at least Intel C and gfortran C are ABI compatible. Is >>> MSVC different? >>> >>> Btw, to correctly call Fortran from C, one should always be using the >>> iso_c_binding module, as explained here: >>> >>> http://fortran90.org/src/best-practices.html#interfacing-with-c >>> >>> Then the Fortran code becomes just like any other C library. >> >> It is unfortunately more complicated than that. >> >> 1 regarding fortran runtimes: I have never been able to link a >> gfortran object file with Visual Studio linker (link.exe). > > You cannot mix the Fortran object .o files between compilers. > That will never work, because Fortran compilers are not ABI > compatible, see the FAQ. I mean linking .o from the *same* compiler into a library through the MS linker (link.exe instead of gcc one). That's how it works in numpy/scipy so far, where we can link g77-produced .o files with the MS compiler. Most platforms have one linker, e.g. I think intel fortran compiler on linux uses ld underneath. > > The only way this could work is if you can mix gcc .o file with MSVC > linker (I don't > know if this is possible or not). If that works, > then you should be able to use iso_c_binding in Fortran to produce > gcc's compatible .o > file and then link it. The problem is NOT communicating between C and fortran. I could produce a simple fortran dll used inside a MSVC program, as long as this fortran dll did not use anything from the fortran runtime. See http://cournape.wordpress.com/2009/03/09/gfortran-visual-studio/ for an actual example. The problem is when you need the fortran runtime (which you do for scipy). > > Which errors were you getting when linking it? 
Was it a problem with > libgfortran runtime > (that you of course need to link as well)? This runtime can be a problem. yes, the problem is gfortran runtime. libgfortran expects some mingw32 stuff, that cannot be linked into something through MS linker. IIRC, you could use enough violence to make it link, but you would get nasty runtime segfaults before anything gets run. > >> 2 mixing runtimes is never a good idea, because it becomes difficult >> to avoid passing a pointer from one runtime to the other. Intel >> fortran compiler obviously knows how to deal with the C runtime of >> Visual Studio, but gfortran doesn't. > > By Fortran runtime I mean this library in my Ubuntu: > > libgfortran.so.3 => /usr/lib/x86_64-linux-gnu/libgfortran.so.3 > (0x00007ffaa9ce9000) > > Here > are all the libraries in my typical gfortran program: > > linux-vdso.so.1 => (0x00007ffffe9ff000) > libgfortran.so.3 => /usr/lib/x86_64-linux-gnu/libgfortran.so.3 > (0x00007ffaa9ce9000) > libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007ffaa99ef000) > libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007ffaa97d8000) > libquadmath.so.0 => /usr/lib/x86_64-linux-gnu/libquadmath.so.0 > (0x00007ffaa95a2000) > libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007ffaa91e5000) > /lib64/ld-linux-x86-64.so.2 (0x00007ffaaa022000) > > What exactly is the C runtime? libc? It is kind of the equivalent of the C runtime, except it contains more stuff than just the C std library. > I don't want to force other people either to use Intel Fortran. > > But I still don't understand why I could not use gfortran and gcc for > my program, > that links against numpy (that uses let's say Intel Fortran internally). > > I don't have access to Intel Fortran on windows, but I am available to > help with this issue. I don't see many solutions to this problem: - one is rebuilding libgfortran with visual studio to use the MS C runtime instead of mingw. That sounds painful - removing any dependency of the fortran runtime in scipy. David From ronan.lamy at gmail.com Wed Aug 22 14:36:49 2012 From: ronan.lamy at gmail.com (Ronan Lamy) Date: Wed, 22 Aug 2012 19:36:49 +0100 Subject: [Numpy-discussion] ANN: NumPy 1.7.0b1 release In-Reply-To: <50350FF0.7020408@cora.nwra.com> References: <5035010F.9090507@cora.nwra.com> <50350830.50909@cora.nwra.com> <50350FF0.7020408@cora.nwra.com> Message-ID: <1345660609.13868.1.camel@ronan-desktop> Le mercredi 22 ao?t 2012 ? 10:59 -0600, Orion Poplawski a ?crit : > On 08/22/2012 10:26 AM, Orion Poplawski wrote: > > On 08/22/2012 09:55 AM, Orion Poplawski wrote: > >> On 08/21/2012 10:24 AM, Ond?ej ?ert?k wrote: > >>> Hi, > >>> > >>> I'm pleased to announce the availability of the first beta release of > >>> NumPy 1.7.0b1. > >> > >> Currently in trying to support python 3.3 in Fedora Rawhide (F19) and Fedora > >> 18 we are doing: > >> > >> # Regenerate Cython c sources > >> # This is needed with numpy-1.6.2.tar.gz with python 3.3 to avoid an exception > >> # with an import call in the generated .c file in the tarball that uses the > >> # old default of -1: > >> # File "mtrand.pyx", line 126, in init mtrand > >> (numpy/random/mtrand/mtrand.c:20679) > >> # ValueError: level must be >= 0 > >> # due to the changes in import in 3.3 > >> # Regenerating with a newer Cython fixes it: > >> pushd numpy/random/mtrand/ > >> rm -v mtrand.c > >> cython mtrand.pyx > >> popd > >> > > > > If I drop the cython generation it builds, but the python 3 test failure I get > > now is: > .. 
> > ERROR: Ticket #16 > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File > > "/builddir/build/BUILDROOT/numpy-1.7.0-0.2.b1.fc19.x86_64/usr/lib64/python3.3/site-packages/numpy/core/tests/test_regression.py", > > line 41, in test_pickle_transposed > > b = pickle.load(f) > > EOFError > > ====================================================================== > > ERROR: Failure: ValueError (can't handle version 187 of numpy.ndarray pickle) > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "/usr/lib/python3.3/site-packages/nose/failure.py", line 37, in runTest > > raise self.exc_class(self.exc_val).with_traceback(self.tb) > > File "/usr/lib/python3.3/site-packages/nose/loader.py", line 232, in generate > > for test in g(): > > File > > "/builddir/build/BUILDROOT/numpy-1.7.0-0.2.b1.fc19.x86_64/usr/lib64/python3.3/site-packages/numpy/lib/tests/test_format.py", > > line 429, in test_roundtrip > > arr2 = roundtrip(arr) > > File > > "/builddir/build/BUILDROOT/numpy-1.7.0-0.2.b1.fc19.x86_64/usr/lib64/python3.3/site-packages/numpy/lib/tests/test_format.py", > > line 420, in roundtrip > > arr2 = format.read_array(f2) > > File > > "/builddir/build/BUILDROOT/numpy-1.7.0-0.2.b1.fc19.x86_64/usr/lib64/python3.3/site-packages/numpy/lib/format.py", > > line 449, in read_array > > array = pickle.load(fp) > > ValueError: can't handle version 187 of numpy.ndarray pickle > > > I should note that I'm taking numpy/core/src/multiarray/scalarapi.c and > numpy/core/src/multiarray/scalartypes.c.src from git master, which I thought > had the fix for this. Note that the fix is actually in numpy/core/src/multiarray/methods.c, see https://github.com/numpy/numpy/pull/371/ From amit.nagal at gvkbio.com Wed Aug 22 23:58:21 2012 From: amit.nagal at gvkbio.com (Amit Nagal) Date: Thu, 23 Aug 2012 03:58:21 +0000 Subject: [Numpy-discussion] unsubscribe Message-ID: Please unsubscribe me Visit us at Booth No. 5 at 2012 ChemOutsourcing Conference, 10-13 Sept 2012, Ocean Place Resort - Long Branch, NJ, United States Visit us at Booth No. 4 at World Conference on Pharmacometrics, 5-7 Sept 2012, Grand Hilton Hotel, Seoul, Korea ________________________________ Notice: The information contained in this electronic mail message is intended only for the use of the designated recipient. This message is privileged and confidential. and the property of GVK BIO or its affiliates and subsidiaries. If the reader of this message is not the intended recipient or an agent responsible for delivering it to the intended recipient, you are hereby notified that you have received this message in error and that any review, dissemination, distribution, or copying of this message is strictly prohibited. If you have received this communication in error, please notify us immediately by telephone +91-40-66929999 and destroy any and all copies of this message in your possession (whether hard copies or electronically stored copies). -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From orion at cora.nwra.com Thu Aug 23 00:24:04 2012 From: orion at cora.nwra.com (Orion Poplawski) Date: Wed, 22 Aug 2012 22:24:04 -0600 Subject: [Numpy-discussion] ANN: NumPy 1.7.0b1 release In-Reply-To: <1345660609.13868.1.camel@ronan-desktop> References: <5035010F.9090507@cora.nwra.com> <50350830.50909@cora.nwra.com> <50350FF0.7020408@cora.nwra.com> <1345660609.13868.1.camel@ronan-desktop> Message-ID: <5035B064.7030206@cora.nwra.com> On 08/22/2012 12:36 PM, Ronan Lamy wrote: > Le mercredi 22 ao?t 2012 ? 10:59 -0600, Orion Poplawski a ?crit : >>> >>> If I drop the cython generation it builds, but the python 3 test failure I get >>> now is: >> .. >>> ERROR: Ticket #16 >>> ---------------------------------------------------------------------- >>> Traceback (most recent call last): >>> File >>> "/builddir/build/BUILDROOT/numpy-1.7.0-0.2.b1.fc19.x86_64/usr/lib64/python3.3/site-packages/numpy/core/tests/test_regression.py", >>> line 41, in test_pickle_transposed >>> b = pickle.load(f) >>> EOFError >>> ====================================================================== >>> ERROR: Failure: ValueError (can't handle version 187 of numpy.ndarray pickle) >>> ---------------------------------------------------------------------- >>> Traceback (most recent call last): >>> File "/usr/lib/python3.3/site-packages/nose/failure.py", line 37, in runTest >>> raise self.exc_class(self.exc_val).with_traceback(self.tb) >>> File "/usr/lib/python3.3/site-packages/nose/loader.py", line 232, in generate >>> for test in g(): >>> File >>> "/builddir/build/BUILDROOT/numpy-1.7.0-0.2.b1.fc19.x86_64/usr/lib64/python3.3/site-packages/numpy/lib/tests/test_format.py", >>> line 429, in test_roundtrip >>> arr2 = roundtrip(arr) >>> File >>> "/builddir/build/BUILDROOT/numpy-1.7.0-0.2.b1.fc19.x86_64/usr/lib64/python3.3/site-packages/numpy/lib/tests/test_format.py", >>> line 420, in roundtrip >>> arr2 = format.read_array(f2) >>> File >>> "/builddir/build/BUILDROOT/numpy-1.7.0-0.2.b1.fc19.x86_64/usr/lib64/python3.3/site-packages/numpy/lib/format.py", >>> line 449, in read_array >>> array = pickle.load(fp) >>> ValueError: can't handle version 187 of numpy.ndarray pickle >> >> >> I should note that I'm taking numpy/core/src/multiarray/scalarapi.c and >> numpy/core/src/multiarray/scalartypes.c.src from git master, which I thought >> had the fix for this. > > Note that the fix is actually in numpy/core/src/multiarray/methods.c, > see https://github.com/numpy/numpy/pull/371/ Thanks! -- Orion Poplawski Technical Manager 303-415-9701 x222 NWRA/CoRA Division FAX: 303-415-9702 3380 Mitchell Lane orion at cora.nwra.com Boulder, CO 80301 http://www.cora.nwra.com From scott.sinclair.za at gmail.com Thu Aug 23 02:28:15 2012 From: scott.sinclair.za at gmail.com (Scott Sinclair) Date: Thu, 23 Aug 2012 08:28:15 +0200 Subject: [Numpy-discussion] unsubscribe In-Reply-To: References: Message-ID: On 23 August 2012 05:58, Amit Nagal wrote: > Please unsubscribe me You can unsubscribe at the bottom of this page http://mail.scipy.org/mailman/listinfo/numpy-discussion Cheers, Scott From ben.root at ou.edu Thu Aug 23 10:12:05 2012 From: ben.root at ou.edu (Benjamin Root) Date: Thu, 23 Aug 2012 10:12:05 -0400 Subject: [Numpy-discussion] ANN: NumPy 1.7.0b1 release In-Reply-To: References: Message-ID: On Tue, Aug 21, 2012 at 12:24 PM, Ond?ej ?ert?k wrote: > Hi, > > I'm pleased to announce the availability of the first beta release of > NumPy 1.7.0b1. 
> > Sources and binary installers can be found at > https://sourceforge.net/projects/numpy/files/NumPy/1.7.0b1/ > > Please test this release and report any issues on the numpy-discussion > mailing list. The following problems are known and > we'll work on fixing them before the final release: > > http://projects.scipy.org/numpy/ticket/2187 > http://projects.scipy.org/numpy/ticket/2185 > http://projects.scipy.org/numpy/ticket/2066 > http://projects.scipy.org/numpy/ticket/1588 > http://projects.scipy.org/numpy/ticket/2076 > http://projects.scipy.org/numpy/ticket/2101 > http://projects.scipy.org/numpy/ticket/2108 > http://projects.scipy.org/numpy/ticket/2150 > http://projects.scipy.org/numpy/ticket/2189 > > I would like to thank Ralf for a lot of help with creating binaries > and other help for this release. > > Cheers, > Ondrej > > > At http://docs.scipy.org/doc/numpy/contents.html, it looks like the TOC tree is a bit messed up. For example, I see that masked arrays are listed multiple times, and I think some of the sub-entries for masked arrays show up multiple times within an entry for masked arrays. Some of the bullets appear as ">" instead of dots. Don't know what version that page is generated from, but we might want to double-check that 1.7.0's docs don't have the same problem. Cheers! Ben Root -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Thu Aug 23 14:26:49 2012 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 23 Aug 2012 11:26:49 -0700 Subject: [Numpy-discussion] [ANN] Call for abstracts: BigData minisymposium at CSE'13, February 2013, Boston Message-ID: Dear colleagues, next year's SIAM conference on Computational Science and Engineering, CSE'13, will take place in Boston, February 25-March 1 (http://www.siam.org/meetings/cse13), and for this version there will be a track focused on the topic of Big Data. This term has rapidly risen in recent discussions of science and even of mainstream business computing, and for good reasons. Today virtually all disciplines are facing a flood of quantitative information whose volumes have often grown faster than the quality of our tools for extracting insight from these data. SIAM hopes that CSE'13 will provide an excellent venue for discussing these problems, from the vantage point offered by a community whose expertise combines analytical insights, algorithmic development, software engineering and domain-specific applications. As part of this event, Titus Brown (http://ged.msu.edu) and I are organizing a minisymposium where we would like to have a group of presentations that address both novel algorithmic ideas and computational approaches as well as domain-specific problems. Data doesn't appear in a vacuum, and data from different domains presents a mix of common problems along with questions that may be specific to each; we hope that by engaging a dialog between those working on algorithmic and implementation questions and those with specific problems from the field, valuable insights can be obtained. If you would like to contribute to this minisymposium, please contact us directly at: "C. Titus Brown" , "Fernando Perez" with your name and affiliation, the title of your proposed talk and a brief description (actual abstracts are due later so an informal description will suffice for now), by Wednesday August 29. For more details on the submission process, see: http://www.siam.org/meetings/cse13/submissions.php Please forward this to any interested colleagues. 
Regards, Titus and Fernando. From alex.flint at gmail.com Sun Aug 26 11:04:51 2012 From: alex.flint at gmail.com (Alex Flint) Date: Sun, 26 Aug 2012 11:04:51 -0400 Subject: [Numpy-discussion] vectorized multi-matrix multiplication Message-ID: I have two lists of 3x3 arrays and I would like to compute the matrix product of the i-th element in the first list with the i-th element in the second list. Of course, I could just loop over the lists: for i in range(n): out[i] = dot( matrices1[i], matrices2[i] ) However, the list is quite long, and each matrix is very small (3x3) so this turns out to be quite slow. Is there a way to do this with a single numpy call? I have looked at tensordot but it outputs an N x N x 3 x 3 array, whereas I want an N x 3 x 3 output. I've also looked at various broadcasting tricks but I haven't found anything that works yet. Alex From jtaylor at cs.toronto.edu Sun Aug 26 12:49:29 2012 From: jtaylor at cs.toronto.edu (Jonathan Taylor) Date: Sun, 26 Aug 2012 12:49:29 -0400 Subject: [Numpy-discussion] vectorized multi-matrix multiplication In-Reply-To: References: Message-ID: Assuming matrices1 and matrices2 are actually arrays of size (N, 3, 3) you can do: np.einsum('nij,njk->nik', matrices1, matrices2) On Sun, Aug 26, 2012 at 11:04 AM, Alex Flint wrote: > I have two lists of 3x3 arrays and I would like to compute the matrix > product of the i-th element in the first list with the i-th element in > the second list. Of course, I could just loop over the lists: > > for i in range(n): > out[i] = dot( matrices1[i], matrices2[i] ) > > However, the list is quite long, and each matrix is very small (3x3) > so this turns out to be quite slow. Is there a way to do this with a > single numpy call? I have looked at tensordot but it outputs an N x N > x 3 x 3 array, whereas I want an N x 3 x 3 output. I've also looked at > various broadcasting tricks but I haven't found anything that works > yet. > > Alex > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From alex.flint at gmail.com Sun Aug 26 14:00:00 2012 From: alex.flint at gmail.com (Alex Flint) Date: Sun, 26 Aug 2012 14:00:00 -0400 Subject: [Numpy-discussion] vectorized multi-matrix multiplication In-Reply-To: References: Message-ID: Thank you. On Sun, Aug 26, 2012 at 11:04 AM, Alex Flint wrote: > I have two lists of 3x3 arrays and I would like to compute the matrix > product of the i-th element in the first list with the i-th element in > the second list. Of course, I could just loop over the lists: > > for i in range(n): > out[i] = dot( matrices1[i], matrices2[i] ) > > However, the list is quite long, and each matrix is very small (3x3) > so this turns out to be quite slow. Is there a way to do this with a > single numpy call? I have looked at tensordot but it outputs an N x N > x 3 x 3 array, whereas I want an N x 3 x 3 output. I've also looked at > various broadcasting tricks but I haven't found anything that works > yet. > > Alex From toddb at nvr.com Sun Aug 26 19:36:08 2012 From: toddb at nvr.com (Todd Brunhoff) Date: Sun, 26 Aug 2012 16:36:08 -0700 Subject: [Numpy-discussion] installing numpy in jython (in Netbeans) Message-ID: <503AB2E8.3050502@nvr.com> Being a newbie to this list, I recognize that the answer to this might be "why would you do that?". But surely it can't be worse than that. Briefly put, I want to install numpy and scipy in jython (reasons below). 
Running 'cd ; jython setup.py install' runs into errors. But installing the same distro under python works fine. So the underlying question is: is there a way to install numpy under jython? Here's how I got here. The reason I would like this under jython is wrapped up in Netbeans, Octave, Matlab and Windows 7. I've been using Netbeans under linux for years for c++ and a bit of Java, and the editing environment with cross referencing and syntax-directed editing is quite good. Netbeans only presents python debugging via jython, which makes sense. Most of the work I am doing is with matrix algebra and I started with Octave, but while Octave is excellent for operating on matrices, it is not good for file format manipulations, hence for some operations I'd like to turn to python, and if I edit under Netbeans, the debugging requires that I install numpy under jython. At work I use a machine running fedora 16 .... but... I travel a bit and my travel machine is a laptop running windows 7. Therefore, Windows 7 + Netbeans + numpy + debugging ==> jython + numpy + scipy. Here are the install problems, which occur under numpy-1.7.0b1 and 1.6.2. The first install error is in numpy\distutils\exec_command.py, line 585, where it throws an exception because the java exec tests are unlimited. So I comment out those two lines, and rerun the setup.py. The next errors are notes about features not being available because other packages are also unavailable. I don't think this is really getting in the way, although I could be wrong. The libraries mentioned as missing are: * libraries mkl,vml,guide * libraries ptf77blas,ptcblas,atlas * libraries lapack_atlas So setup gets as far as this: running install running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src build_src building py_modules sources creating build creating build\src.java1.6.0_33-2.5 creating build\src.java1.6.0_33-2.5\numpy creating build\src.java1.6.0_33-2.5\numpy\distutils building library "npymath" sources No module named jythoncompiler in numpy.distutils; trying from distutils customize GnuFCompiler Could not locate executable g77 Could not locate executable f77 don't know how to compile Fortran code on platform 'java' Some of these messages make it look as if the config wants to use the cygwin gnu compilers, which are ok, but under windows are not nearly as good as the mingw gnu compiler or, better yet, the visual studio 2010 compiler. I have both, but I don't see a way to steer the numpy setup to use them. The next error is fatal: File "...\numpy-1.7.0b1\numpy\distutils\ccompiler.py", line 111, in CCompiler_object_filenames if ext not in self.src_extensions: TypeError: 'NoneType' object is not iterable This one looks as if it does not know what .o or .a or .obj files are. Fixing this one looks like hours of digging through the code. Is there a simpler solution? Thanks in advance, Todd -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Sun Aug 26 20:55:39 2012 From: chris.barker at noaa.gov (Chris Barker) Date: Sun, 26 Aug 2012 17:55:39 -0700 Subject: [Numpy-discussion] installing numpy in jython (in Netbeans) In-Reply-To: <503AB2E8.3050502@nvr.com> References: <503AB2E8.3050502@nvr.com> Message-ID: Todd, The short version is: you can't do that. -- Jython uses the JVM, numpy is very, very tied into the CPython runtime.
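A quick way to see just how tied, from a regular CPython prompt (nothing here is NetBeans- or Jython-specific):

import numpy.core.multiarray as m

# numpy's core is compiled machine code loaded by the CPython runtime;
# this prints a path ending in .pyd on Windows or .so on Linux --
# exactly the kind of module the JVM-based Jython cannot import
print(m.__file__)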
This thread is a bit old, but I think it still holds: http://stackoverflow.com/questions/3097466/using-numpy-and-cpython-with-jython There is the JNumeric project, but it doesn't look very active: https://bitbucket.org/zornslemon/jnumeric-ra/downloads/ According to the Jython FAQ (http://wiki.python.org/jython/JythonFaq/GeneralInfo): "For the next release of Jython, we plan to support the C Python Extension API" -- not sure what the timing is on that, but in theory, you could then use numpy, but numpy is a pretty complex beast -- I wouldn't recommend being the first to try it! So: other options: 1) A couple years ago the NetBeans team was talking about improving the support for Python (CPython) -- you might want to see if there is a plugin or something that you could add. Maybe this: http://wiki.netbeans.org/Python 2) Python programming can be pretty darn productive with "print" debugging -- a lot of us never use a "proper" debugger -- a good programmer's editor and a console is all you need. 3) There are a lot of good IDEs for CPython -- and stand-alone debuggers (WinPdb, for example), so you don't need to be married to the NetBeans environment. In short: It'll probably be a lot easier to find a different programming (or debugging, anyway) environment than get numpy to work with Jython. -Chris On Sun, Aug 26, 2012 at 4:36 PM, Todd Brunhoff wrote: > Being a newbie to this list, I recognize that the answer to this might be > "why would you do that?". But surely it can't be worse than that. > > Briefly put, I want to install numpy and scipy in jython (reasons below). > Running 'cd ; jython setup.py install' runs into errors. But > installing the same distro under python works fine. So the underlying > question is: is there a way to install numpy under jython? > > Here's how I got here. The reason I would like this under jython is wrapped > up in Netbeans, Octave, Matlab and Windows 7. I've been using Netbeans under > linux for years for c++ and a bit of Java, and the editing environment with > cross referencing and syntax-directed editing is quite good. Netbeans only > presents python debugging via jython, which makes sense. Most of the work I > am doing is with matrix algebra and I started with Octave, but while Octave > is excellent for operating on matrices, it is not good for file format > manipulations, hence for some operations I'd like to turn to python, and if > I edit under Netbeans, the debugging requires that I install numpy under > jython. At work I use a machine running fedora 16 .... but... I travel a bit > and my travel machine is a laptop running windows 7. Therefore, Windows 7 + > Netbeans + numpy + debugging ==> jython + numpy + scipy. > > Here are the install problems, which occur under numpy-1.7.0b1 and 1.6.2. The > first install error is in numpy\distutils\exec_command.py, line 585, where > it throws an exception because the java exec tests are unlimited. So I > comment out those two lines, and rerun the setup.py. > > The next errors are notes about features not being available because other > packages are also unavailable. I don't think this is really getting in the > way, although I could be wrong.
The libraries mentioned as missing are: > > libraries mkl,vml,guide > libraries ptf77blas,ptcblas,atlas > libraries lapack_atlas > > So setup gets as far as this: > > running install > running build > running config_cc > unifing config_cc, config, build_clib, build_ext, build commands --compiler > options > running config_fc > unifing config_fc, config, build_clib, build_ext, build commands --fcompiler > options > running build_src > build_src > building py_modules sources > creating build > creating build\src.java1.6.0_33-2.5 > creating build\src.java1.6.0_33-2.5\numpy > creating build\src.java1.6.0_33-2.5\numpy\distutils > building library "npymath" sources > No module named jythoncompiler in numpy.distutils; trying from distutils > customize GnuFCompiler > Could not locate executable g77 > Could not locate executable f77 > don't know how to compile Fortran code on platform 'java' > > Some of these messages make it look as if the config wants to use the cygwin > gnu compilers, which are ok, but under windows are not nearly as good as > mingw gnu compiler or better yet, the visual studio 2010 compiler. I have > both, but I don't see a way to steer the numpy setup to use them. > > The next error is fatal > > File "...\numpy-1.7.0b1\numpy\distutils\ccompiler.py", line 111, in > CCompiler_object_filenames > if ext not in self.src_extensions: > TypeError: 'NoneType' object is not iterable > > This one looks as if it does not know what .o or .a or .obj files are. > Fixing this one looks like hours of digging through the code. Is there a > simpler solution? > > Thanks in advance, > > Todd > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From toddb at nvr.com Sun Aug 26 23:53:28 2012 From: toddb at nvr.com (Todd Brunhoff) Date: Sun, 26 Aug 2012 20:53:28 -0700 Subject: [Numpy-discussion] installing numpy in jython (in Netbeans) In-Reply-To: References: <503AB2E8.3050502@nvr.com> Message-ID: <503AEF38.1010502@nvr.com> Chris, I appreciate the pointers, which appear to confirm that numpy and jython are a ways out. I can see where the c-api support in jython would be required by numpy's implementation. > 1) [NetBeans support for CPython] -- Maybe this: > http://wiki.netbeans.org/Python That seems outdated. I had used http://wiki.netbeans.org/PythonInstall which points at a plugin repository which does pretty good install. But of course it is jython, not python. And the Hudson (Jenkins) build was last done Oct 22, 2010. And interestingly enough, that info points back to the link you found. Oh well. > 2) Python programming can be pretty darn productive with "print" > debugging Works for me. And sometimes there *is* no other way. > 3) There are a lot of good IDEs for CPython -- and stad-alone > debuggers (WinPdb, for example), so you don't need to be married to > the NetBeans environment. winpdb is ok, although it is only a graphic debugger, not an ide, emphasis on the 'd'. As it is, since NB cannot debug python that requires numpy, it means I must rely on vi & Netbeans for editing and winpdb for debugging. I don't think about the ide's as being married to them, but I do like to use the best tools available. NB could be, but is not for python. 
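For what it's worth, the stdlib debugger gets you an interactive prompt anywhere CPython runs, numpy code included, with no IDE integration at all. A minimal sketch (the function here is made up purely for illustration):

import pdb
import numpy as np

def sort_by_last_column(a):
    pdb.set_trace()                      # execution stops here with a (Pdb) prompt
    return a[a[:, -1].argsort()[::-1]]   # rows ordered by last column, descending

sort_by_last_column(np.array([[4, 1, 9], [2, 8, 0], [7, 3, 5]]))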
Thanks again for helping to shorten my search for tools. Todd On 8/26/2012 5:55 PM, Chris Barker wrote: > Todd, > > The short version is: you can't do that. -- Jython uses the JVM, numpy > is very, very tied into the CPython runtime. > > This thread is a bit old, but think still holds: > > http://stackoverflow.com/questions/3097466/using-numpy-and-cpython-with-jython > > There is the junumeric project, but it doesn't look very active: > > https://bitbucket.org/zornslemon/jnumeric-ra/downloads/ > > According to the Jython FAQ > (http://wiki.python.org/jython/JythonFaq/GeneralInfo): > "For the next release of Jython, we plan to support the C Python > Extension API" -- not sure what the timing is on that, but in theory, > you could then use numpy ,but numpy is a pretty complex beast -- I > wouldn't recommend being the first to try it! > > So: other options: > > 1) A couple years ago the NetBeans team was talking about improving > the support for Python (CPython) -- you might want to see if there is > a plugin or something that you could add. Maybe this: > http://wiki.netbeans.org/Python > > 2) Python programming can be pretty darn productive with "print" > debugging -- a lot of us never use a "proper" debugger -- a good > programmers editor and a console is all you need. > > 3) There are a lot of good IDEs for CPython -- and stad-alone > debuggers (WinPdb, for example), so you don't need to be married to > the NetBeans environment. > > In short: It'll probably be a lot easier to find a different > programming (or debugging anyway) )environment than get numpy to work > work with Jython. > > -Chris > > > > > > On Sun, Aug 26, 2012 at 4:36 PM, Todd Brunhoff wrote: >> Being a newbie to this list, I recognize that the answer to this might be >> "why would you do that?". But surely it can't be worse than that. >> >> Briefly put, I want to install numpy and scipy in jython (reasons below). >> Running 'cd; jython setup.py install' runs into errors. But >> installing he same distro under python works fine. So the underlying >> question is: is there a way to install numpy under jython? >> >> Here's how I got here. The reason I would like this under jython is wrapped >> up in Netbeans, Octave, Matlab and Windows 7. I've been using Netbeans under >> linux for years for c++ and a bit of Java, and the editing environment with >> cross referencing and syntax-directed editing is quite good. Netbeans only >> presents python debugging via jython, which makes sense. Most of the work I >> am doing is with matrix algebra and I started with Octave, but while Octave >> is excellent for operating on matrices, it is not good for file format >> manipulations, hence for some operations I'd like to turn to python, and if >> I edit under Netbeans, the debugging requires that I install numpy under >> jython. At work I use a machine running fedora 16 .... but... I travel a bit >> and my travel machine is a laptop running windows 7. Therefore, Windows 7 + >> Netbeans + numpy + debugging ==> jython + numpy + scipy. >> >> Here's the install problems, which occur under numpy-1.7.0b1 and 1.6.2. The >> first install error is in numpy\distutils\exec_command.py, line 585, where >> it throws an exception because the java exec tests are unlimited. So I >> comment out those two lines, and rerun the setup.py. >> >> The next errors are notes about features not being available because other >> packages are also unavailable. I don't think this is really getting in the >> way, although I could be wrong. 
The libraries mentioned as missing are: >> >> libraries mkl,vml,guide >> libraries ptf77blas,ptcblas,atlas >> libraries lapack_atlas >> >> So setup gets as far as this: >> >> running install >> running build >> running config_cc >> unifing config_cc, config, build_clib, build_ext, build commands --compiler >> options >> running config_fc >> unifing config_fc, config, build_clib, build_ext, build commands --fcompiler >> options >> running build_src >> build_src >> building py_modules sources >> creating build >> creating build\src.java1.6.0_33-2.5 >> creating build\src.java1.6.0_33-2.5\numpy >> creating build\src.java1.6.0_33-2.5\numpy\distutils >> building library "npymath" sources >> No module named jythoncompiler in numpy.distutils; trying from distutils >> customize GnuFCompiler >> Could not locate executable g77 >> Could not locate executable f77 >> don't know how to compile Fortran code on platform 'java' >> >> Some of these messages make it look as if the config wants to use the cygwin >> gnu compilers, which are ok, but under windows are not nearly as good as >> mingw gnu compiler or better yet, the visual studio 2010 compiler. I have >> both, but I don't see a way to steer the numpy setup to use them. >> >> The next error is fatal >> >> File "...\numpy-1.7.0b1\numpy\distutils\ccompiler.py", line 111, in >> CCompiler_object_filenames >> if ext not in self.src_extensions: >> TypeError: 'NoneType' object is not iterable >> >> This one looks as if it does not know what .o or .a or .obj files are. >> Fixing this one looks like hours of digging through the code. Is there a >> simpler solution? >> >> Thanks in advance, >> >> Todd >> >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> > > From d.s.seljebotn at astro.uio.no Mon Aug 27 07:56:02 2012 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Mon, 27 Aug 2012 13:56:02 +0200 Subject: [Numpy-discussion] Mark Florisson's GSoC, minivect, and NumPy Message-ID: <503B6052.4070805@astro.uio.no> I just wanted to draw the attention of NumPy devs to Mark Florisson's GSoC work. It is 'minivect', a tool to use for compiling array expressions (think (as a concept) a shared backend between Cython, Theano, numba, though it's only used in Cython currently). His M. Sc. thesis, "Techniques for Static and Dynamic Compilation of Array Expressions", is up here: https://github.com/markflorisson88/minivect/tree/master/thesis As you can see, he even beats Intel Fortran for some array layouts, and in general has comparable performance with it. The benchmarks are mostly for two-operand operations, i.e. operations where NumPy semantics would be OK. IMO, if anybody ever wants to revamp NumPy's computation abilities and get that 2-3x speedup (e.g., make it multi-threaded), this is a very good place to start. Dag From chris.barker at noaa.gov Mon Aug 27 12:51:24 2012 From: chris.barker at noaa.gov (Chris Barker) Date: Mon, 27 Aug 2012 09:51:24 -0700 Subject: [Numpy-discussion] installing numpy in jython (in Netbeans) In-Reply-To: <503AEF38.1010502@nvr.com> References: <503AB2E8.3050502@nvr.com> <503AEF38.1010502@nvr.com> Message-ID: On Sun, Aug 26, 2012 at 8:53 PM, Todd Brunhoff wrote: > Chris, > winpdb is ok, although it is only a graphic debugger, not an ide, emphasis > on the 'd'. yup -- I mentioned that as you seem to like NB -- and I know I try to use the same editor for everything. 
But if you want a nice full-on IDE for Python, there are a lot of them. I'm an editor/terminal guy, so I can't make a recommendation, but some of the biggies are: Eclipse+PyDev PyCharm Wing IDE Spyder (particularly nice for numpy/matplotlib, etc.) -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From logik at centrum.cz Wed Aug 29 06:20:19 2012 From: logik at centrum.cz (Matyáš Novák) Date: Wed, 29 Aug 2012 12:20:19 +0200 Subject: [Numpy-discussion] distutils: compiler used by add_library Message-ID: <503DECE3.70204@centrum.cz> Hi, I wrote a numerical extension for python that requires compiling and linking additional Fortran sources. I found out that these libraries can be easily added using the config.add_library() function, but there is a problem. The --fcompiler option doesn't propagate to the stage where the libraries are compiled, so the default (and in my case wrong) compiler is used. (If I try to compile the files using the add_extension method, they are compiled by the desired compiler, but I need to combine more sources in one extension, so I think I can't use add_extension.) Is there any way to force python to use the right compiler, or at least to hardcode the compiler in setup.py? Thanks a lot for suggestions, Matyas From ndbecker2 at gmail.com Wed Aug 29 08:59:06 2012 From: ndbecker2 at gmail.com (Neal Becker) Date: Wed, 29 Aug 2012 08:59:06 -0400 Subject: [Numpy-discussion] blaze lib announcement Message-ID: This looks interesting: http://code.google.com/p/blaze-lib/ From aron at ahmadia.net Wed Aug 29 09:06:59 2012 From: aron at ahmadia.net (Aron Ahmadia) Date: Wed, 29 Aug 2012 14:06:59 +0100 Subject: [Numpy-discussion] blaze lib announcement In-Reply-To: References: Message-ID: The Eigen3 project (http://eigen.tuxfamily.org/index.php?title=Main_Page) is more mature, but it's good to see a new contender in the field. A On Wed, Aug 29, 2012 at 1:59 PM, Neal Becker wrote: > This looks interesting: > > http://code.google.com/p/blaze-lib/ > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Wed Aug 29 12:08:40 2012 From: chris.barker at noaa.gov (Chris Barker) Date: Wed, 29 Aug 2012 09:08:40 -0700 Subject: [Numpy-discussion] config.add_library() Message-ID: Hi folks, I'm working on a package that will contain a bunch of cython extensions, all of which need to link against a pile of C++ code. What I think I need to do is build that C++ as a dynamic library, so I can link everything against it. It would be nice if I could leverage distutils to build that library for me, particularly since Windows, OS-X and Linux all seem to need things a little (or a lot) different. It looks like numpy.distutils' config.add_library might be what I need, but the docs are pretty sparse. This page: http://docs.scipy.org/doc/numpy-1.5.x/reference/generated/numpy.distutils.misc_util.Configuration.add_library.html confused me a bit: """ Parameters : name : str Name of the extension. """ Is this really building an extension, or is that a docstring brought over from the distutils Extension class? So -- is config.add_library() what I'm looking for? 
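For concreteness, the usage being asked about would look something like this minimal numpy.distutils setup.py (an untested sketch -- the package, library, and source file names are all invented for illustration):

from numpy.distutils.misc_util import Configuration

def configuration(parent_package='', top_path=None):
    config = Configuration('mypkg', parent_package, top_path)
    # Compile helper C/C++/Fortran sources into a plain library...
    config.add_library('support',
                       sources=['src/support.cxx', 'src/solver.f90'])
    # ...and link extension modules against it by name.
    config.add_extension('_core',
                         sources=['src/_core.c'],
                         libraries=['support'])
    return config

if __name__ == '__main__':
    from numpy.distutils.core import setup
    setup(configuration=configuration)

Note that in this pattern add_library() produces a static helper library that gets linked into the extensions, not an extension (or shared library) itself -- and, as the add_library thread above shows, it is not obvious how to steer which Fortran compiler builds it.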
If so, are there better docs and/or examples you can point me to? Thanks, -Chris Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From michael.lehn at uni-ulm.de Wed Aug 29 14:34:41 2012 From: michael.lehn at uni-ulm.de (Michael Lehn) Date: Wed, 29 Aug 2012 20:34:41 +0200 Subject: [Numpy-discussion] blaze lib announcement In-Reply-To: References: Message-ID: <8565E798-4B48-4B90-B77C-7E4D2BFD2FC6@uni-ulm.de> > This looks interesting: > > http://code.google.com/p/blaze-lib/ So maybe you also want to have a look at http://flens.sf.net Just to promote my own baby in this context too ;-) Cheers, Michael -------------- next part -------------- An HTML attachment was scrubbed... URL: From ondrej.certik at gmail.com Wed Aug 29 18:52:57 2012 From: ondrej.certik at gmail.com (Ondřej Čertík) Date: Wed, 29 Aug 2012 15:52:57 -0700 Subject: [Numpy-discussion] distutils: compiler used by add_library In-Reply-To: <503DECE3.70204@centrum.cz> References: <503DECE3.70204@centrum.cz> Message-ID: Hi Matyáš, On Wed, Aug 29, 2012 at 3:20 AM, Matyáš Novák wrote: > Hi, > I wrote extension some numerical extension for python, that requires > compiling > and linking additional fortran sources. I find out, that these libraries > can be easily added > using config.add_library() function, but there is a problem. > The --fcompiler command doesn't propagate to the stage where the > libraries are compiled, > so the default (and in my case wrong) compiler is used. (If I try > compile the files using > add_extension method, they are compiled by desired compiler, but I need > combine more > sources in one extension so I think that I can't use add_extension). > Is there any way how to force python to use the right compiler, or at > least hardcode the > compiler in the setup.py? Maybe somebody can help with your particular question, but I was also struggling with similar issues when mixing Fortran, C and Python and settled on using cmake for compiling and linking Fortran, C and Python extension .so modules, as well as installing the Python .py files. Ondrej From ondrej.certik at gmail.com Wed Aug 29 20:19:18 2012 From: ondrej.certik at gmail.com (Ondřej Čertík) Date: Wed, 29 Aug 2012 17:19:18 -0700 Subject: [Numpy-discussion] view of recarray issue In-Reply-To: References: Message-ID: Jay, On Mon, Aug 20, 2012 at 12:40 PM, Ondřej Čertík wrote: > On Wed, Jul 25, 2012 at 10:29 AM, Jay Bourque wrote: >> I'm actively looking at this issue since it was my pull request that broke >> this (https://github.com/numpy/numpy/pull/350). We definitely don't want to >> break this functionality for 1.7. The problem is that even though indexing >> with a subset of fields still returns a copy (for now), it now returns a >> copy of a view of the original array. When you call copy() on a view, it >> copies the entire original structured array with the view dtype. A short >> term fix would be to "manually" create a proper copy to return similar to >> what _index_fields() did before my change, but since the idea is to >> eventually return the view instead of a copy, long term we need a way to do >> a proper copy of a structured array view that doesn't copy the unwanted >> fields. > > This should be fixed for 1.7.0. However, I am going to release beta now, > and then see what we can do about this. What would be the best "short term" fix, so that we can release 1.7.0? I am still trying to understand what exactly the problem with dtype is in _index_fields(). Would you suggest keeping the view, or somehow reverting to the old behavior while still trying to pass all the new tests in your PR 350? If you have any hints, it would save me some time. Thanks, Ondrej 
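For readers following the thread, the behavior under discussion can be seen with a small script (a sketch; the dtype and field names are invented):

import numpy as np

a = np.zeros(3, dtype=[('x', 'f8'), ('y', 'f8'), ('z', 'f8')])

# Indexing with a list of field names is serviced by _index_fields().
b = a[['x', 'y']]

# Copy semantics, which this thread wants to preserve for 1.7: writing
# to b must not touch a, and b should carry only the selected fields.
b['x'] = 1.0
print a['x']         # expected to stay all zeros
print b.dtype.names  # expected ('x', 'y')

The regression described above is that the returned object became a copy of a view, and calling copy() on that view dragged the entire original structured array along with it.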
From fperez.net at gmail.com Wed Aug 29 22:57:22 2012 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 29 Aug 2012 19:57:22 -0700 Subject: [Numpy-discussion] A sad day for our community. John Hunter: 1968-2012. Message-ID: Dear friends and colleagues, [please excuse a possible double-post of this message, in-flight internet glitches] I am terribly saddened to report that yesterday, August 28 2012 at 10am, John D. Hunter died from complications arising from cancer treatment at the University of Chicago hospital, after a brief but intense battle with this terrible illness. John is survived by his wife Miriam, his three daughters Rahel, Ava and Clara, his sisters Layne and Mary, and his mother Sarah. Note: If you decide not to read any further (I know this is a long message), please go to this page for some important information about how you can thank John for everything he gave in a decade of generous contributions to the Python and scientific communities: http://numfocus.org/johnhunter. Just a few weeks ago, John delivered his keynote address at the SciPy 2012 conference in Austin centered around the evolution of matplotlib: http://www.youtube.com/watch?v=e3lTby5RI54 but tragically, shortly after his return home he was diagnosed with advanced colon cancer. This diagnosis was a terrible discovery to us all, but John took it with his usual combination of calm and resolve, and initiated treatment procedures. Unfortunately, the first round of chemotherapy treatments led to severe complications that sent him to the intensive care unit, and despite the best efforts of the University of Chicago medical center staff, he never fully recovered from these. Yesterday morning, he died peacefully at the hospital with his loved ones at his bedside. John fought with grace and courage, enduring every necessary procedure with a smile on his face and a kind word for all of his caretakers and becoming a loved patient of the many teams that ended up involved with his case. This was no surprise for those of us who knew him, but he clearly left a deep and lasting mark even amongst staff hardened by the rigors of oncology floors and intensive care units. I don't need to explain to this community the impact of John's work, but allow me to briefly recap, in case this is read by some who don't know the whole story. In 2002, John was a postdoc at the University of Chicago hospital working on the analysis of epilepsy seizure data in children. Frustrated with the state of the existing proprietary solutions for this class of problems, he started using Python for his work, back when the scientific Python ecosystem was much, much smaller than it is today and this could have been seen as a crazy risk. Furthermore, he found that there were many half-baked solutions for data visualization in Python at the time, but none that truly met his needs. Undeterred, he went on to create matplotlib (http://matplotlib.org) and thus overcome one of the key obstacles for Python to become the best solution for open source scientific and technical computing. 
Matplotlib is both an amazing technical achievement and a shining example of open source community building, as John not only created its backbone but also fostered the development of a very strong development team, ensuring that the talent of many others could also contribute to this project. The value and importance of this are now painfully clear: despite having lost John, matplotlib continues to thrive thanks to the leadership of Michael Droetboom, the support of Perry Greenfield at the Hubble Telescope Science Institute, and the daily work of the rest of the team. I want to thank Perry and Michael for putting their resources and talent once more behind matplotlib, securing the future of the project. It is difficult to overstate the value and importance of matplotlib, and therefore of John's contributions (which do not end in matplotlib, by the way; but a biography will have to wait for another day...). Python has become a major force in the technical and scientific computing world, leading the open source offers and challenging expensive proprietary platforms with large teams and millions of dollars of resources behind them. But this would be impossible without a solid data visualization tool that would allow both ad-hoc data exploration and the production of complex, fine-tuned figures for papers, reports or websites. John had the vision to make matplotlib easy to use, but powerful and flexible enough to work in graphical user interfaces and as a server-side library, enabling a myriad use cases beyond his personal needs. This means that now, matplotlib powers everything from plots in dissertations and journal articles to custom data analysis projects and websites. And despite having left his academic career a few years ago for a job in industry, he remained engaged enough that as of today, he is still the top committer to matplotlib; this is the git shortlog of those with more than 1000 commits to the project: 2145 John Hunter 2130 Michael Droettboom 1060 Eric Firing All of this was done by a man who had three children to raise and who still always found the time to help those on the mailing lists, solve difficult technical problems in matplotlib, teach courses and seminars about scientific Python, and more recently help create the NumFOCUS foundation project. Despite the challenges that raising three children in an expensive city like Chicago presented, he never once wavered from his commitment to open source. But unfortunately now he is not here anymore to continue providing for their well-being, and I hope that all those who have so far benefited from his generosity, will thank this wonderful man who always gave far more than he received. Thanks to the rapid action of Travis Oliphant, the NumFOCUS foundation is now acting as an escrow agent to accept donations that will go into a fund to support the education and care of his wonderful girls Rahel, Ava and Clara. If you have benefited from John's many contributions, please say thanks in the way that would matter most to him, by helping Miriam continue the task of caring for and educating Rahel, Ava and Clara. You will find all the information necessary to make a donation here: http://numfocus.org/johnhunter Remember that even a small donation helps! If all those who ever use matplotlib give just a little bit, in the long run I am sure that we can make a difference. 
If you are a company that benefits in a serious way from matplotlib, remember that John was a staunch advocate of keeping all scientific Python projects under the BSD license so that commercial users could benefit from them without worry. Please say thanks to John in a way commensurate with your resources (and check how much a yearly matlab license would cost you in case you have any doubts about the value you are getting...). John's family is planning a private burial in Tennessee, but (most likely in September) there will also be a memorial service in Chicago that friends and members of the community can attend. We don't have the final scheduling details at this point, but I will post them once we know. I would like to again express my gratitude to Travis Oliphant for moving quickly with the setup of the donation support, and to Eric Jones (the founder of Enthought and another one of the central figures in our community) who immediately upon learning of John's plight contributed resources to support the family with everyday logistics while John was facing treatment as well as my travel to Chicago to assist. This kind of immediate urge to come to the help of others that Eric and Travis displayed is a hallmark of our community. Before closing, I want to take a moment to publicly thank the incredible staff of the University of Chicago medical center. The last two weeks were an intense and brutal ordeal for John and his loved ones, but the hospital staff offered a sometimes hard to believe, unending supply of generosity, care and humanity in addition to their technical competence. The latter is something we expect from a first-rate hospital at a top university, where the attending physicians can be world-renowned specialists in their field. But the former is often forgotten in a world often ruled by a combination of science and concerns about regulations and liability. Instead, we found generous and tireless staff who did everything in their power to ease the pain, always putting our well being ahead of any mindless adherence to protocol, patiently tending to every need we had and working far beyond their stated responsibilities to support us. To name only one person (and many others are equally deserving), I want to thank Dr. Carla Moreira, chief surgical resident, who spent the last few hours of John's life with us despite having just completed a solid night shift of surgical work. Instead of resting she came to the ICU and worked to ensure that those last hours were as comfortable as possible for John; her generous actions helped us through a very difficult moment. It is now time to close this already too long message... John, thanks for everything you gave all of us, and for the privilege of knowing you. Fernando. ps - I have sent this with my 'mailing lists' email. If you need to contact me directly for anything regarding the above, please write to my regular address at Fernando.Perez at berkeley.edu, where I do my best to reply more promptly. From ndbecker2 at gmail.com Thu Aug 30 07:49:55 2012 From: ndbecker2 at gmail.com (Neal Becker) Date: Thu, 30 Aug 2012 07:49:55 -0400 Subject: [Numpy-discussion] broadcasting question Message-ID: I think this should be simple, but I'm drawing a blank I have 2 2d matrixes Matrix A has indexes (i, symbol) Matrix B has indexes (state, symbol) I combined them into a 3d matrix: C = A[:,newaxis,:] + B[newaxis,:,:] where C has indexes (i, state, symbol) That works fine. 
Now suppose I want to omit B (for debug), like: C = A[:,newaxis,:] In other words, all I want is to add a dimension into A and force it to broadcast along that axis. How do I do that? From ben.root at ou.edu Thu Aug 30 08:01:55 2012 From: ben.root at ou.edu (Benjamin Root) Date: Thu, 30 Aug 2012 08:01:55 -0400 Subject: [Numpy-discussion] broadcasting question In-Reply-To: References: Message-ID: On Thursday, August 30, 2012, Neal Becker wrote: > I think this should be simple, but I'm drawing a blank > > I have 2 2d matrixes > > Matrix A has indexes (i, symbol) > Matrix B has indexes (state, symbol) > > I combined them into a 3d matrix: > > C = A[:,newaxis,:] + B[newaxis,:,:] > where C has indexes (i, state, symbol) > > That works fine. > > Now suppose I want to omit B (for debug), like: > > C = A[:,newaxis,:] > > In other words, all I want is to add a dimension into A and force it to > broadcast along that axis. How do I do that? > > np.tile would help you there, I think. Ben Root -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu Aug 30 08:21:16 2012 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 30 Aug 2012 13:21:16 +0100 Subject: [Numpy-discussion] broadcasting question In-Reply-To: References: Message-ID: On Thu, Aug 30, 2012 at 12:49 PM, Neal Becker wrote: > I think this should be simple, but I'm drawing a blank > > I have 2 2d matrixes > > Matrix A has indexes (i, symbol) > Matrix B has indexes (state, symbol) > > I combined them into a 3d matrix: > > C = A[:,newaxis,:] + B[newaxis,:,:] > where C has indexes (i, state, symbol) > > That works fine. > > Now suppose I want to omit B (for debug), like: > > C = A[:,newaxis,:] > > In other words, all I want is to add a dimension into A and force it to > broadcast along that axis. How do I do that? C, dummy = numpy.broadcast_arrays(A[:,newaxis,:], numpy.empty([1,state,1])) -- Robert Kern 
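For concreteness, both suggestions from this thread in runnable form (a sketch; the array contents and the number of states are invented):

import numpy as np

A = np.arange(6).reshape(3, 2)   # indexes (i, symbol)
nstate = 4                       # hypothetical number of states

# Option 1: np.tile physically repeats the data along the new axis.
C1 = np.tile(A[:, np.newaxis, :], (1, nstate, 1))

# Option 2: broadcast_arrays returns a zero-stride view; no data is copied.
C2, _ = np.broadcast_arrays(A[:, np.newaxis, :], np.empty((1, nstate, 1)))

assert C1.shape == C2.shape == (3, nstate, 2)
assert (C1 == C2).all()

One caveat with the zero-stride view: all nstate positions along the broadcast axis share the same memory, so in-place writes make them alias each other (compare the "how is y += x computed" thread below).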
From sturla at molden.no Thu Aug 30 11:58:59 2012 From: sturla at molden.no (Sturla Molden) Date: Thu, 30 Aug 2012 17:58:59 +0200 Subject: [Numpy-discussion] A sad day for our community. John Hunter: 1968-2012. In-Reply-To: References: Message-ID: <503F8DC3.4050308@molden.no> This is sad news for neuroscience and everyone doing data visualization in Python. Dr. Hunter was not only a well-renowned neuroscientist, he also created what I hold to be among the best 2D data visualization tools available. My next neuroscience paper that uses Matplotlib will mention Dr. Hunter in the Acknowledgement. I encourage everyone else who is using Matplotlib for their research to do the same. Sturla Molden Ph.D. On 30.08.2012 04:57, Fernando Perez wrote: > Dear friends and colleagues, > > [please excuse a possible double-post of this message, in-flight > internet glitches] > > I am terribly saddened to report that yesterday, August 28 2012 at > 10am, John D. Hunter died from complications arising from cancer > treatment at the University of Chicago hospital, after a brief but > intense battle with this terrible illness. John is survived by his > wife Miriam, his three daughters Rahel, Ava and Clara, his sisters > Layne and Mary, and his mother Sarah. > > Note: If you decide not to read any further (I know this is a long > message), please go to this page for some important information about > how you can thank John for everything he gave in a decade of generous > contributions to the Python and scientific communities: > http://numfocus.org/johnhunter. > > Just a few weeks ago, John delivered his keynote address at the SciPy > 2012 conference in Austin centered around the evolution of matplotlib: > > http://www.youtube.com/watch?v=e3lTby5RI54 > > but tragically, shortly after his return home he was diagnosed with > advanced colon cancer. This diagnosis was a terrible discovery to us > all, but John took it with his usual combination of calm and resolve, > and initiated treatment procedures. Unfortunately, the first round of > chemotherapy treatments led to severe complications that sent him to > the intensive care unit, and despite the best efforts of the > University of Chicago medical center staff, he never fully recovered > from these. 
Yesterday morning, he died peacefully at the hospital > with his loved ones at his bedside. John fought with grace and > courage, enduring every necessary procedure with a smile on his face > and a kind word for all of his caretakers and becoming a loved patient > of the many teams that ended up involved with his case. This was no > surprise for those of us who knew him, but he clearly left a deep and > lasting mark even amongst staff hardened by the rigors of oncology > floors and intensive care units. > > I don't need to explain to this community the impact of John's work, > but allow me to briefly recap, in case this is read by some who don't > know the whole story. In 2002, John was a postdoc at the University > of Chicago hospital working on the analysis of epilepsy seizure data > in children. Frustrated with the state of the existing proprietary > solutions for this class of problems, he started using Python for his > work, back when the scientific Python ecosystem was much, much smaller > than it is today and this could have been seen as a crazy risk. > Furthermore, he found that there were many half-baked solutions for > data visualization in Python at the time, but none that truly met his > needs. Undeterred, he went on to create matplotlib > (http://matplotlib.org) and thus overcome one of the key obstacles for > Python to become the best solution for open source scientific and > technical computing. Matplotlib is both an amazing technical > achievement and a shining example of open source community building, > as John not only created its backbone but also fostered the > development of a very strong development team, ensuring that the > talent of many others could also contribute to this project. The > value and importance of this are now painfully clear: despite having > lost John, matplotlib continues to thrive thanks to the leadership of > Michael Droetboom, the support of Perry Greenfield at the Hubble > Telescope Science Institute, and the daily work of the rest of the > team. I want to thank Perry and Michael for putting their resources > and talent once more behind matplotlib, securing the future of the > project. > > It is difficult to overstate the value and importance of matplotlib, > and therefore of John's contributions (which do not end in matplotlib, > by the way; but a biography will have to wait for another day...). > Python has become a major force in the technical and scientific > computing world, leading the open source offers and challenging > expensive proprietary platforms with large teams and millions of > dollars of resources behind them. But this would be impossible without > a solid data visualization tool that would allow both ad-hoc data > exploration and the production of complex, fine-tuned figures for > papers, reports or websites. John had the vision to make matplotlib > easy to use, but powerful and flexible enough to work in graphical > user interfaces and as a server-side library, enabling a myriad use > cases beyond his personal needs. This means that now, matplotlib > powers everything from plots in dissertations and journal articles to > custom data analysis projects and websites. 
And despite having left > his academic career a few years ago for a job in industry, he remained > engaged enough that as of today, he is still the top committer to > matplotlib; this is the git shortlog of those with more than 1000 > commits to the project: > > 2145 John Hunter > 2130 Michael Droettboom > 1060 Eric Firing > > All of this was done by a man who had three children to raise and who > still always found the time to help those on the mailing lists, solve > difficult technical problems in matplotlib, teach courses and seminars > about scientific Python, and more recently help create the NumFOCUS > foundation project. Despite the challenges that raising three > children in an expensive city like Chicago presented, he never once > wavered from his commitment to open source. But unfortunately now he > is not here anymore to continue providing for their well-being, and I > hope that all those who have so far benefited from his generosity, > will thank this wonderful man who always gave far more than he > received. Thanks to the rapid action of Travis Oliphant, the NumFOCUS > foundation is now acting as an escrow agent to accept donations that > will go into a fund to support the education and care of his wonderful > girls Rahel, Ava and Clara. > > If you have benefited from John's many contributions, please say > thanks in the way that would matter most to him, by helping Miriam > continue the task of caring for and educating Rahel, Ava and Clara. > You will find all the information necessary to make a donation here: > > http://numfocus.org/johnhunter > > Remember that even a small donation helps! If all those who ever use > matplotlib give just a little bit, in the long run I am sure that we > can make a difference. > > If you are a company that benefits in a serious way from matplotlib, > remember that John was a staunch advocate of keeping all scientific > Python projects under the BSD license so that commercial users could > benefit from them without worry. Please say thanks to John in a way > commensurate with your resources (and check how much a yearly matlab > license would cost you in case you have any doubts about the value you > are getting...). > > John's family is planning a private burial in Tennessee, but (most > likely in September) there will also be a memorial service in Chicago > that friends and members of the community can attend. We don't have > the final scheduling details at this point, but I will post them once > we know. > > I would like to again express my gratitude to Travis Oliphant for > moving quickly with the setup of the donation support, and to Eric > Jones (the founder of Enthought and another one of the central figures > in our community) who immediately upon learning of John's plight > contributed resources to support the family with everyday logistics > while John was facing treatment as well as my travel to Chicago to > assist. This kind of immediate urge to come to the help of others > that Eric and Travis displayed is a hallmark of our community. > > Before closing, I want to take a moment to publicly thank the > incredible staff of the University of Chicago medical center. The > last two weeks were an intense and brutal ordeal for John and his > loved ones, but the hospital staff offered a sometimes hard to > believe, unending supply of generosity, care and humanity in addition > to their technical competence. 
The latter is something we expect from > a first-rate hospital at a top university, where the attending > physicians can be world-renowned specialists in their field. But the > former is often forgotten in a world often ruled by a combination of > science and concerns about regulations and liability. Instead, we > found generous and tireless staff who did everything in their power to > ease the pain, always putting our well being ahead of any mindless > adherence to protocol, patiently tending to every need we had and > working far beyond their stated responsibilities to support us. To > name only one person (and many others are equally deserving), I want > to thank Dr. Carla Moreira, chief surgical resident, who spent the > last few hours of John's life with us despite having just completed a > solid night shift of surgical work. Instead of resting she came to > the ICU and worked to ensure that those last hours were as comfortable > as possible for John; her generous actions helped us through a very > difficult moment. > > It is now time to close this already too long message... > > John, thanks for everything you gave all of us, and for the privilege > of knowing you. > > > Fernando. > > ps - I have sent this with my 'mailing lists' email. If you need to > contact me directly for anything regarding the above, please write to > my regular address at Fernando.Perez at berkeley.edu, where I do my best > to reply more promptly. > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From ondrej.certik at gmail.com Thu Aug 30 20:05:34 2012 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Thu, 30 Aug 2012 17:05:34 -0700 Subject: [Numpy-discussion] Temporary error accessing NumPy tickets Message-ID: Hi, When I access tickets, for example: http://projects.scipy.org/numpy/ticket/2185 then sometimes I get: Trac detected an internal error: OperationalError: database is locked For example yesterday. A refresh in about a minute fixed the problem. Today it still lasts at the moment. Ondrej From toddb at nvr.com Thu Aug 30 20:39:06 2012 From: toddb at nvr.com (Todd Brunhoff) Date: Thu, 30 Aug 2012 17:39:06 -0700 Subject: [Numpy-discussion] installing numpy in jython (in Netbeans) In-Reply-To: References: <503AB2E8.3050502@nvr.com> <503AEF38.1010502@nvr.com> Message-ID: <504007AA.3000904@nvr.com> On 8/27/2012 9:51 AM, Chris Barker wrote: > On Sun, Aug 26, 2012 at 8:53 PM, Todd Brunhoff wrote: >> Chris, >> winpdb is ok, although it is only a graphic debugger, not an ide, emphasis >> on the 'd'. > yup -- I mentioned, that as you seem to like NB -- and I know I try to > use the same editor for eveything. > > But if you want a nice full-on IDE for Python, there are a lot of > them. I"m an editor_termal guy, so I can't make a recommendation, but > some of the biggies are: > > Eclipse+PyDev > PyCharm > WindIDE > Spyder (particularly nice for numpy/ matplotlib, etc) I had not considered these yet, but they look interesting. I ended up here: http://en.wikipedia.org/wiki/Comparison_of_integrated_development_environments#Python which compares IDEs for every language, and found a free python plugin for VS 2010 which looks excellent. I may also try Spyder since I would expect you atmospheric guys would know where numericals are well integrated. Thanks. 
Todd > > -Chris > > From ondrej.certik at gmail.com Thu Aug 30 21:03:12 2012 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Thu, 30 Aug 2012 18:03:12 -0700 Subject: [Numpy-discussion] Debian/Ubuntu patch help (was: ANN: NumPy 1.6.2 release candidate 1) In-Reply-To: References: Message-ID: On Tue, May 15, 2012 at 11:52 AM, Ralf Gommers wrote: > > > On Sat, May 12, 2012 at 9:17 PM, Ralf Gommers > wrote: >> >> >> >> On Sat, May 12, 2012 at 6:22 PM, Sandro Tosi wrote: >>> >>> Hello, >>> >>> On Sat, May 5, 2012 at 8:15 PM, Ralf Gommers >>> wrote: >>> > Hi, >>> > >>> > I'm pleased to announce the availability of the first release candidate >>> > of >>> > NumPy 1.6.2. This is a maintenance release. Due to the delay of the >>> > NumPy >>> > 1.7.0, this release contains far more fixes than a regular NumPy bugfix >>> > release. It also includes a number of documentation and build >>> > improvements. >>> > >>> > Sources and binary installers can be found at >>> > https://sourceforge.net/projects/numpy/files/NumPy/1.6.2rc1/ >>> > >>> > Please test this release and report any issues on the numpy-discussion >>> > mailing list. >>> ... >>> > BLD: add support for the new X11 directory structure on Ubuntu & co. >>> >>> We've just discovered that this fix is not enough. Actually the new >>> directories are due to the "multi-arch" feature of Debian systems, >>> that allows to install libraries from other (foreign) architectures >>> than the one the machine is (the classic example, i386 libraries on a >>> amd64 host). >>> >>> the fix included to look up in additional directories is currently >>> only for X11, while for example Debian has fftw3 that's >>> multi-arch-ified and thus will fail to be detected. >>> >>> Could this fix be extended to include all other things that are >>> checked? for reference the bug in Debian is [1]; there was also a >>> patch[2] in previous versions, that was using gcc to get the >>> multi-arch paths - you might use as a reference, or to implement >>> something debian-systems-specific. >>> >>> [1] http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=640940 >>> [2] >>> http://anonscm.debian.org/viewvc/python-modules/packages/numpy/trunk/debian/patches/50_search-multiarch-paths.patch?view=markup&pathrev=21168 >>> >>> It would be awesome is such support would end up in 1.6.2 . >> >> >> Hardcoding some more paths to check in distutils/system_info.py should be >> OK, also for 1.6.2 (will require a new RC). >> >> The --print-multiarch thing looks very questionable. As far as I can tell, >> it's a Debian specific gcc patch, only available in gcc 4.6 and up. Ubuntu >> before 11.10 release also doesn't have it. Therefore I don't think use of >> --print-multiarch is appropriate for numpy for now, and certainly not a >> change I'd like to make to distutils right before a release. >> >> If anyone with access to a Debian/Ubuntu system could come up with a patch >> which adds the right paths to system_info.py, that would be great. > > > Hi, if there's anyone wants to have a look at the above issue this week, > that would be great. > > If there's a patch by this weekend I can create a second RC, so we can still > have the final release before the end of this month (needed for Debian > freeze). Otherwise a second RC won't be needed. 
For NumPy 1.7.0, the issue is fixed for X11 by the following lines:

if os.path.exists('/usr/lib/X11'):
    globbed_x11_dir = glob('/usr/lib/*/libX11.so')
    if globbed_x11_dir:
        x11_so_dir = os.path.split(globbed_x11_dir[0])[0]
        default_x11_lib_dirs.extend([x11_so_dir, '/usr/lib/X11'])
        default_x11_include_dirs.extend(['/usr/lib/X11/include',
                                         '/usr/include/X11'])

in numpy/distutils/system_info.py, there is still an issue of supporting Debian multi-arch fully: http://projects.scipy.org/numpy/ticket/2150 However, I don't understand what exactly it means. Ralf, what would be a canonical example to fix? If I use for example x11, I get:

In [1]: from numpy.distutils.system_info import get_info
In [2]: get_info("x11", 2)
/home/ondrej/repos/numpy/py27/lib/python2.7/site-packages/numpy/distutils/system_info.py:551: UserWarning: Specified path /usr/X11R6/lib64 is invalid.
  warnings.warn('Specified path %s is invalid.' % d)
/home/ondrej/repos/numpy/py27/lib/python2.7/site-packages/numpy/distutils/system_info.py:551: UserWarning: Specified path /usr/X11R6/lib is invalid.
  warnings.warn('Specified path %s is invalid.' % d)
/home/ondrej/repos/numpy/py27/lib/python2.7/site-packages/numpy/distutils/system_info.py:551: UserWarning: Specified path /usr/X11/lib64 is invalid.
  warnings.warn('Specified path %s is invalid.' % d)
/home/ondrej/repos/numpy/py27/lib/python2.7/site-packages/numpy/distutils/system_info.py:551: UserWarning: Specified path /usr/X11/lib is invalid.
  warnings.warn('Specified path %s is invalid.' % d)
/home/ondrej/repos/numpy/py27/lib/python2.7/site-packages/numpy/distutils/system_info.py:551: UserWarning: Specified path /usr/lib64 is invalid.
  warnings.warn('Specified path %s is invalid.' % d)
/home/ondrej/repos/numpy/py27/lib/python2.7/site-packages/numpy/distutils/system_info.py:551: UserWarning: Specified path /usr/X11R6/include is invalid.
  warnings.warn('Specified path %s is invalid.' % d)
/home/ondrej/repos/numpy/py27/lib/python2.7/site-packages/numpy/distutils/system_info.py:551: UserWarning: Specified path /usr/X11/include is invalid.
  warnings.warn('Specified path %s is invalid.' % d)
/home/ondrej/repos/numpy/py27/lib/python2.7/site-packages/numpy/distutils/system_info.py:551: UserWarning: Specified path /usr/lib/X11/include is invalid.
  warnings.warn('Specified path %s is invalid.' % d)
Out[2]: {'include_dirs': ['/usr/include'], 'libraries': ['X11'], 'library_dirs': ['/usr/lib/x86_64-linux-gnu']}

I am using Ubuntu 12.04. Is the task to remove the warnings, or is the task to fix it for some other package from the get_info() list (which one)? Thanks, Ondrej From ondrej.certik at gmail.com Thu Aug 30 23:10:49 2012 From: ondrej.certik at gmail.com (Ondřej Čertík) Date: Thu, 30 Aug 2012 20:10:49 -0700 Subject: [Numpy-discussion] Access to SPARC 64 Message-ID: Hi, Does anyone have a SPARC 64 machine that I could have access to, so that I can try to reproduce and fix the following issue? http://projects.scipy.org/numpy/ticket/2076 That would be greatly appreciated, as it is currently marked as a blocker for 1.7.0. Thanks, Ondrej From jason-sage at creativetrax.com Thu Aug 30 23:21:34 2012 From: jason-sage at creativetrax.com (Jason Grout) Date: Thu, 30 Aug 2012 22:21:34 -0500 Subject: [Numpy-discussion] Access to SPARC 64 In-Reply-To: References: Message-ID: <50402DBE.8020906@creativetrax.com> On 8/30/12 10:10 PM, Ondřej Čertík wrote: > Hi, > > Does anyone have a SPARC 64 machine that I could have an access to, so > that I can try to reproduce and fix the following issue? 
> > http://projects.scipy.org/numpy/ticket/2076 > > That would be greatly appreciated, as it is currently marked as a > blocker for 1.7.0. You might ask on sage-devel; they were just talking about SPARC machines there the other day. Thanks, Jason From ondrej.certik at gmail.com Fri Aug 31 00:47:43 2012 From: ondrej.certik at gmail.com (Ondřej Čertík) Date: Thu, 30 Aug 2012 21:47:43 -0700 Subject: [Numpy-discussion] Issues for 1.7.0 Message-ID: Hi, I am keeping track of all issues that need to be done for the 1.7.0 release here: https://github.com/numpy/numpy/issues/396 If you have trac and github push access, here is how you can help (by closing/merging): Issues that need clarification: http://projects.scipy.org/numpy/ticket/2150 http://projects.scipy.org/numpy/ticket/2101 Issues fixed (should be closed): http://projects.scipy.org/numpy/ticket/2185 http://projects.scipy.org/numpy/ticket/2066 http://projects.scipy.org/numpy/ticket/2189 PRs that need merging: https://github.com/numpy/numpy/pull/395 https://github.com/numpy/numpy/pull/397 There are still a few more (see my github issue above) that I am working on right now. Thanks, Ondrej From ondrej.certik at gmail.com Fri Aug 31 03:03:17 2012 From: ondrej.certik at gmail.com (Ondřej Čertík) Date: Fri, 31 Aug 2012 00:03:17 -0700 Subject: [Numpy-discussion] How to debug reference counting errors Message-ID: Hi, There is a segfault reported here: http://projects.scipy.org/numpy/ticket/1588 I've managed to isolate the problem and even provide a simple patch that fixes it here: https://github.com/numpy/numpy/issues/398 however the patch simply doesn't decrease the proper reference, so it might leak. I've used bisection (took the whole evening unfortunately...) but the good news is that I've isolated the commits that actually broke it. See the github issue #398 for details, diffs etc. Unfortunately, it's 12 commits from Mark, and the individual commits raise an exception on the segfaulting code, so I can't pinpoint the problem further. In general, how can I debug this sort of problem? I tried to use valgrind, with a debugging build of numpy, but it provides tons of false (?) positives: https://gist.github.com/3549063 Mark, by looking at the changes that broke it, as well as at my "fix", do you see where the problem could be? I suspect it is something with the changes in PyArray_FromAny() or PyArray_FromArray() in ctors.c. But I don't see anything so far that could cause it. Thanks for any help. This is one of the issues blocking the 1.7.0 release. Ondrej 
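One generic way to chase a leak like this (a sketch, not specific to this ticket): exercise the suspicious code path in a loop and watch the reference count of the object you believe is over- or under-referenced:

import sys
import numpy as np

a = np.arange(10)
dt = a.dtype
before = sys.getrefcount(dt)
for i in range(1000):
    np.asarray(a, dtype=dt)   # exercises PyArray_FromAny
after = sys.getrefcount(dt)
print before, after  # a count that grows with the loop points at a missing Py_DECREF

Here np.asarray() and the dtype object are only placeholders for whatever call and object the ticket actually involves.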
Thanks for any help. This is one of the issues blocking the 1.7.0 release.

Ondrej

From rhattersley at gmail.com  Fri Aug 31 04:08:02 2012
From: rhattersley at gmail.com (Richard Hattersley)
Date: Fri, 31 Aug 2012 10:08:02 +0200
Subject: [Numpy-discussion] How to debug reference counting errors
In-Reply-To:
References:
Message-ID:

Hi,

re: valgrind - to get better results you might try the suggestions from:
http://svn.python.org/projects/python/trunk/Misc/README.valgrind

Richard

On 31 August 2012 09:03, Ondřej Čertík wrote:
> Hi,
>
> There is a segfault reported here:
>
> http://projects.scipy.org/numpy/ticket/1588
>
> I've managed to isolate the problem and even provide a simple patch that fixes it here:
>
> https://github.com/numpy/numpy/issues/398
>
> However, the patch simply doesn't decrease the proper reference, so it might leak. I've used bisection (took the whole evening, unfortunately...) but the good news is that I've isolated the commits that actually broke it. See the github issue #398 for details, diffs etc.
>
> Unfortunately, it's 12 commits from Mark, and the individual commits raise an exception on the segfaulting code, so I can't pinpoint the problem further.
>
> In general, how can I debug this sort of problem? I tried to use valgrind with a debugging build of numpy, but it provides tons of false (?) positives: https://gist.github.com/3549063
>
> Mark, by looking at the changes that broke it, as well as at my "fix", do you see where the problem could be?
>
> I suspect it is something with the changes in PyArray_FromAny() or PyArray_FromArray() in ctors.c. But I don't see anything so far that could cause it.
>
> Thanks for any help. This is one of the issues blocking the 1.7.0 release.
>
> Ondrej

From sebastian.walter at gmail.com  Fri Aug 31 05:31:16 2012
From: sebastian.walter at gmail.com (Sebastian Walter)
Date: Fri, 31 Aug 2012 11:31:16 +0200
Subject: [Numpy-discussion] how is y += x computed when y.strides = (0, 8) and x.strides=(16, 8) ?
Message-ID:

Hi,

I'm using numpy 1.6.1 on Ubuntu 12.04.1 LTS. Code that used to work with an older version of numpy now fails with an error. Were there any changes in the way in-place operations like +=, *=, etc. work on arrays with non-standard strides?

For the script:

------- start of code -------
import numpy
x = numpy.arange(6).reshape((3,2))
y = numpy.arange(2)
print 'x=\n', x
print 'y=\n', y
u,v = numpy.broadcast_arrays(x, y)
print 'u=\n', u
print 'v=\n', v
print 'v.strides=\n', v.strides
v += u
print 'v=\n', v  # expectation: v = [[6,10], [6,10], [6,10]]
print 'u=\n', u
print 'y=\n', y  # expectation: y = [6,10]
------- end of code -------

I get the output

-------- start of output ---------
x=
[[0 1]
 [2 3]
 [4 5]]
y=
[0 1]
u=
[[0 1]
 [2 3]
 [4 5]]
v=
[[0 1]
 [0 1]
 [0 1]]
v.strides=
(0, 8)
v=
[[4 6]
 [4 6]
 [4 6]]
u=
[[0 1]
 [2 3]
 [4 5]]
y=
[4 6]
-------- end of output --------

I would have expected that v += u performs an element-by-element

v[0,0] += u[0,0]  # increments y[0]
v[0,1] += u[0,1]  # increments y[1]
v[1,0] += u[1,0]  # increments y[0]
v[1,1] += u[1,1]  # increments y[1]
v[2,0] += u[2,0]  # increments y[0]
v[2,1] += u[2,1]  # increments y[1]

yielding the result y = [6,10] (since every row of v aliases y), but instead one obtains y = [4, 6], which could be the result of only

v[2,0] += u[2,0]  # increments y[0]
v[2,1] += u[2,1]  # increments y[1]

Is this the intended behavior?
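For reference, the accumulating semantics I expected can of course be written out explicitly (just a sketch of the behavior I'm after, not a proposed fix):

    import numpy
    x = numpy.arange(6).reshape((3,2))
    y = numpy.arange(2)
    y += x.sum(axis=0)  # accumulate over the broadcast axis by hand
    print 'y=\n', y     # gives [ 6 10]

so I have a workaround; I would still like to know which behavior of v += u is actually guaranteed.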
regards,
Sebastian

From d.s.seljebotn at astro.uio.no  Fri Aug 31 07:22:27 2012
From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn)
Date: Fri, 31 Aug 2012 13:22:27 +0200
Subject: [Numpy-discussion] How to debug reference counting errors
In-Reply-To:
References:
Message-ID: <50409E73.5070900@astro.uio.no>

On 08/31/2012 09:03 AM, Ondřej Čertík wrote:
> Hi,
>
> There is a segfault reported here:
>
> http://projects.scipy.org/numpy/ticket/1588
>
[...]
>
> Thanks for any help. This is one of the issues blocking the 1.7.0 release.

IIRC you can recompile Python with some support for detecting memory leaks. One of the issues with using Valgrind, after suppressing the false positives, is that Python uses its own memory allocator, so that sits between the bug and what Valgrind detects. So at least recompile Python to not do that.

As for hardening the NumPy source in general, you should at least be aware of these two options:

1) David Malcolm (dmalcolm at redhat.com) was writing a static code analysis plugin for gcc that would check, for every routine, that the reference count semantics are correct. (I don't know how far he's got with that.)

2) In Cython we have a "reference count nanny". This requires changes to all the code though, so it is not an option just for finding this bug; I just thought I'd mention it. In addition to the INCREF/DECREF you need to insert new "GIVEREF" and "GOTREF" calls (which are noops in a normal compile) to declare where you get and give away a reference. When Cython-generated sources are enabled with -DCYTHON_REFNANNY, INCREF/DECREF/GIVEREF/GOTREF are tracked within each function and a failure is raised if the function violates any contract.

Dag

From jay.bourque at continuum.io  Fri Aug 31 09:15:39 2012
From: jay.bourque at continuum.io (Jay Bourque)
Date: Fri, 31 Aug 2012 08:15:39 -0500
Subject: [Numpy-discussion] view of recarray issue
In-Reply-To:
References:
Message-ID:

Ondrej,

Sorry for the delay in getting back to this. I have some free time today to get this resolved if you haven't already fixed it.

-Jay

On Wed, Aug 29, 2012 at 7:19 PM, Ondřej Čertík wrote:
> Jay,
>
> On Mon, Aug 20, 2012 at 12:40 PM, Ondřej Čertík wrote:
> > On Wed, Jul 25, 2012 at 10:29 AM, Jay Bourque wrote:
> >> I'm actively looking at this issue since it was my pull request that broke this (https://github.com/numpy/numpy/pull/350). We definitely don't want to break this functionality for 1.7. The problem is that even though indexing with a subset of fields still returns a copy (for now), it now returns a copy of a view of the original array. When you call copy() on a view, it copies the entire original structured array with the view dtype. A short term fix would be to "manually" create a proper copy to return, similar to what _index_fields() did before my change, but since the idea is to eventually return the view instead of a copy, long term we need a way to do a proper copy of a structured array view that doesn't copy the unwanted fields.
> >
> > This should be fixed for 1.7.0. However, I am going to release beta now, and then see what we can do about this.
>
> What would be the best "short term" fix, so that we can release 1.7.0?
>
> I am still trying to understand what exactly the problem with dtype is in _index_fields().
> Would you suggest to keep using the view, or somehow revert to the old behavior while still trying to pass all the new tests in your PR 350? If you have any hints, it would save me some time.
>
> Thanks,
> Ondrej

From morph at debian.org  Fri Aug 31 09:18:44 2012
From: morph at debian.org (Sandro Tosi)
Date: Fri, 31 Aug 2012 15:18:44 +0200
Subject: [Numpy-discussion] ANN: NumPy 1.7.0b1 release
In-Reply-To:
References:
Message-ID:

Hello,

On Tue, Aug 21, 2012 at 6:24 PM, Ondřej Čertík wrote:
> Hi,
>
> I'm pleased to announce the availability of the first beta release of NumPy 1.7.0b1.

I've just uploaded it to Debian experimental, so we can give it a run while in freeze. Some of the buildds are already building[1] the package, so we should get results asap (either failures or successes).

[1] https://buildd.debian.org/status/package.php?p=python-numpy&suite=experimental

If tests fail, it won't stop the build, and indeed I got at least 2 failures (actually 1 error and 1 crash) when running tests for python 2.7 and 3.2 with debug enabled:

2.7 dbg

======================================================================
ERROR: test_power_zero (test_umath.TestPower)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/tmp/buildd/python-numpy-1.7.0~b1/debian/tmp/usr/lib/python2.7/dist-packages/numpy/core/tests/test_umath.py", line 139, in test_power_zero
    assert_complex_equal(np.power(zero, 0+1j), cnan)
RuntimeWarning: invalid value encountered in power
----------------------------------------------------------------------

3.2 dbg

python3.2-dbg: numpy/core/src/multiarray/common.c:161: PyArray_DTypeFromObjectHelper: Assertion `((((((PyObject*)(temp))->ob_type))->tp_flags & ((1L<<27))) != 0)' failed.
Aborted

I'm reporting them here since you asked; dunno if you want an issue on github to track them. I'll look at the buildd logs and report additional failures if they come up.

Cheers,
--
Sandro Tosi (aka morph, morpheus, matrixhasu)
My website: http://matrixhasu.altervista.org/
Me at Debian: http://wiki.debian.org/SandroTosi

From charlesr.harris at gmail.com  Fri Aug 31 12:27:26 2012
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Fri, 31 Aug 2012 10:27:26 -0600
Subject: [Numpy-discussion] Issues for 1.7.0
In-Reply-To:
References:
Message-ID:

On Thu, Aug 30, 2012 at 10:47 PM, Ondřej Čertík wrote:
> Hi,
>
> I am keeping track of all issues that need to be done for the 1.7.0 release here:
>
> https://github.com/numpy/numpy/issues/396
>
> If you have trac and github push access, here is how you can help (by closing/merging):
>
> Issues that need clarification:
>
> http://projects.scipy.org/numpy/ticket/2150
> http://projects.scipy.org/numpy/ticket/2101
>
> Issues fixed (should be closed):
>
> http://projects.scipy.org/numpy/ticket/2185
> http://projects.scipy.org/numpy/ticket/2066
> http://projects.scipy.org/numpy/ticket/2189
>
> PRs that need merging:
>
> https://github.com/numpy/numpy/pull/395
> https://github.com/numpy/numpy/pull/397
>
> There are still a few more (see my github issue above) that I am working on right now.

Ondrej,

It looks like you don't have commit rights. Is that the case? If you are the release manager I think you need both commit rights and the right to close tickets.

Chuck
From pav at iki.fi  Fri Aug 31 12:35:50 2012
From: pav at iki.fi (Pauli Virtanen)
Date: Fri, 31 Aug 2012 16:35:50 +0000 (UTC)
Subject: [Numpy-discussion] Temporary error accessing NumPy tickets
References:
Message-ID:

Ondřej Čertík <ondrej.certik at gmail.com> writes:
> When I access tickets, for example:
>
> http://projects.scipy.org/numpy/ticket/2185
>
> then sometimes I get:
>
> Trac detected an internal error:
> OperationalError: database is locked
>
> For example yesterday. A refresh in about a minute fixed the problem. Today the error is still there at the moment.

The failures are probably partly triggered by the machine running out of memory. It runs services on mod_python, which apparently slowly leaks. Someone (who?) with root access on the machine needs to restart Apache. (Note: "apachectl graceful" is not enough to correct this; it needs a real restart of the process.)

The longer-term solution is to move off mod_python (to mod_wsgi, most likely; going to CGI would create other performance problems), or to transition the stuff there to a beefier server.

--
Pauli Virtanen

From ondrej.certik at gmail.com  Fri Aug 31 13:09:28 2012
From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=)
Date: Fri, 31 Aug 2012 10:09:28 -0700
Subject: [Numpy-discussion] view of recarray issue
In-Reply-To:
References:
Message-ID:

On Fri, Aug 31, 2012 at 6:15 AM, Jay Bourque wrote:
> Ondrej,
>
> Sorry for the delay in getting back to this. I have some free time today to get this resolved if you haven't already fixed it.

I haven't. If you can look at it, that would be absolutely awesome. If you don't manage to fix it, if you can give me some hints what's going on, that would also be a huge help.

Many thanks!
Ondrej

From ondrej.certik at gmail.com  Fri Aug 31 13:10:14 2012
From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=)
Date: Fri, 31 Aug 2012 10:10:14 -0700
Subject: [Numpy-discussion] Issues for 1.7.0
In-Reply-To:
References:
Message-ID:

On Fri, Aug 31, 2012 at 9:27 AM, Charles R Harris wrote:
> On Thu, Aug 30, 2012 at 10:47 PM, Ondřej Čertík wrote:
[...]
>
> Ondrej,
>
> It looks like you don't have commit rights. Is that the case? If you are the release manager I think you need both commit rights and the right to close tickets.

Yes, I don't have commit rights nor the rights to close tickets.
Ondrej

From ondrej.certik at gmail.com  Fri Aug 31 13:08:11 2012
From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=)
Date: Fri, 31 Aug 2012 10:08:11 -0700
Subject: [Numpy-discussion] Temporary error accessing NumPy tickets
In-Reply-To:
References:
Message-ID:

On Fri, Aug 31, 2012 at 9:35 AM, Pauli Virtanen wrote:
> Ondřej Čertík <ondrej.certik at gmail.com> writes:
>> When I access tickets, for example:
>>
>> http://projects.scipy.org/numpy/ticket/2185
>>
>> then sometimes I get:
>>
>> Trac detected an internal error:
>> OperationalError: database is locked
>>
>> For example yesterday. A refresh in about a minute fixed the problem. Today the error is still there at the moment.
>
> The failures are probably partly triggered by the machine running out of memory. It runs services on mod_python, which apparently slowly leaks. Someone (who?) with root access on the machine needs to restart Apache. (Note: "apachectl graceful" is not enough to correct this; it needs a real restart of the process.)

I see.

> The longer-term solution is to move off mod_python (to mod_wsgi, most likely; going to CGI would create other performance problems), or to transition the stuff there to a beefier server.

Or move the tickets to github. Yesterday it was very unreliable (I had to wait a long time before a comment was posted, and about 50% of the time it was not posted due to the database error). So I just created a github issue for the same thing and posted my comments there. Then I could work fast.

Ondrej

From ondrej.certik at gmail.com  Fri Aug 31 13:17:22 2012
From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=)
Date: Fri, 31 Aug 2012 10:17:22 -0700
Subject: [Numpy-discussion] ANN: NumPy 1.7.0b1 release
In-Reply-To:
References:
Message-ID:

Hi Sandro,

On Fri, Aug 31, 2012 at 6:18 AM, Sandro Tosi wrote:
> Hello,
>
> On Tue, Aug 21, 2012 at 6:24 PM, Ondřej Čertík wrote:
>> Hi,
>>
>> I'm pleased to announce the availability of the first beta release of NumPy 1.7.0b1.
>
> I've just uploaded it to Debian experimental, so we can give it a run while in freeze. Some of the buildds are already building[1] the package, so we should get results asap (either failures or successes).

This is awesome, thank you so much for doing this. This should reveal some bugs.

> [1] https://buildd.debian.org/status/package.php?p=python-numpy&suite=experimental
>
> If tests fail, it won't stop the build, and indeed I got at least 2 failures (actually 1 error and 1 crash) when running tests for python 2.7 and 3.2 with debug enabled:
>
> 2.7 dbg
>
> ======================================================================
> ERROR: test_power_zero (test_umath.TestPower)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/tmp/buildd/python-numpy-1.7.0~b1/debian/tmp/usr/lib/python2.7/dist-packages/numpy/core/tests/test_umath.py", line 139, in test_power_zero
>     assert_complex_equal(np.power(zero, 0+1j), cnan)
> RuntimeWarning: invalid value encountered in power
>
> ----------------------------------------------------------------------
>
> 3.2 dbg
>
> python3.2-dbg: numpy/core/src/multiarray/common.c:161: PyArray_DTypeFromObjectHelper: Assertion `((((((PyObject*)(temp))->ob_type))->tp_flags & ((1L<<27))) != 0)' failed.
> Aborted
>
> I'm reporting them here since you asked; dunno if you want an issue on github to track them. I'll look at the buildd logs and report additional failures if they come up.
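For the first one, my guess (an untested sketch, I still need to confirm it on a dbg build) is that it comes down to:

    import numpy as np

    zero = np.array([0j])
    print(np.power(zero, 0 + 1j))
    # -> [ nan+nanj], with "RuntimeWarning: invalid value encountered in power"

i.e. the nan result itself is what the test expects, and it's the RuntimeWarning being promoted to an error that makes the test blow up.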
If you could create issues at github: https://github.com/numpy/numpy/issues, that would be great. If you have time, also add some info about the platform and how to reproduce it, or at least a link to the build logs. I'll add it to the release TODO and try to fix it.

Ondrej

From charlesr.harris at gmail.com  Fri Aug 31 13:26:16 2012
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Fri, 31 Aug 2012 11:26:16 -0600
Subject: [Numpy-discussion] Issues for 1.7.0
In-Reply-To:
References:
Message-ID:

On Fri, Aug 31, 2012 at 11:10 AM, Ondřej Čertík wrote:
> On Fri, Aug 31, 2012 at 9:27 AM, Charles R Harris wrote:
[...]
>
> Yes, I don't have commit rights nor the rights to close tickets.

OK, I gave commit rights to you. Someone else (Pauli) will need to give you rights to close tickets. I think Thouis also needs rights if he is going to do the issue tracking.

Chuck

From ondrej.certik at gmail.com  Fri Aug 31 13:44:00 2012
From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=)
Date: Fri, 31 Aug 2012 10:44:00 -0700
Subject: [Numpy-discussion] Issues for 1.7.0
In-Reply-To:
References:
Message-ID:

On Fri, Aug 31, 2012 at 10:26 AM, Charles R Harris wrote:
> On Fri, Aug 31, 2012 at 11:10 AM, Ondřej Čertík
[...]
>> Yes, I don't have commit rights nor the rights to close tickets.
>
> OK, I gave commit rights to you. Someone else (Pauli) will need to give you rights to close tickets. I think Thouis also needs rights if he is going to do the issue tracking.

Thanks a lot. I just wrote to Pauli privately and CCed you.

Ondrej

From stefan-usenet at bytereef.org  Fri Aug 31 13:58:57 2012
From: stefan-usenet at bytereef.org (Stefan Krah)
Date: Fri, 31 Aug 2012 19:58:57 +0200
Subject: [Numpy-discussion] ANN: NumPy 1.7.0b1 release
In-Reply-To:
References:
Message-ID: <20120831175857.GA2041@sleipnir.bytereef.org>

Ondřej Čertík wrote:
> > python3.2-dbg: numpy/core/src/multiarray/common.c:161: PyArray_DTypeFromObjectHelper: Assertion `((((((PyObject*)(temp))->ob_type))->tp_flags & ((1L<<27))) != 0)'
>
> If you could create issues at github: https://github.com/numpy/numpy/issues, that would be great. If you have time, also add some info about the platform and how to reproduce it, or at least a link to the build logs.
For the second one there's an issue here:

http://projects.scipy.org/numpy/ticket/2193

Stefan Krah

From jay.bourque at continuum.io  Fri Aug 31 14:04:07 2012
From: jay.bourque at continuum.io (Jay Bourque)
Date: Fri, 31 Aug 2012 13:04:07 -0500
Subject: [Numpy-discussion] view of recarray issue
In-Reply-To:
References:
Message-ID:

Ondrej,

Just submitted the following pull request for this:

https://github.com/numpy/numpy/pull/401

-Jay

On Fri, Aug 31, 2012 at 12:09 PM, Ondřej Čertík wrote:
> On Fri, Aug 31, 2012 at 6:15 AM, Jay Bourque wrote:
> > Ondrej,
> >
> > Sorry for the delay in getting back to this. I have some free time today to get this resolved if you haven't already fixed it.
>
> I haven't. If you can look at it, that would be absolutely awesome. If you don't manage to fix it, if you can give me some hints what's going on, that would also be a huge help.
>
> Many thanks!
> Ondrej

From morph at debian.org  Fri Aug 31 14:07:55 2012
From: morph at debian.org (Sandro Tosi)
Date: Fri, 31 Aug 2012 20:07:55 +0200
Subject: [Numpy-discussion] ANN: NumPy 1.7.0b1 release
In-Reply-To:
References:
Message-ID:

On Fri, Aug 31, 2012 at 7:17 PM, Ondřej Čertík wrote:
> If you could create issues at github: https://github.com/numpy/numpy/issues, that would be great. If you have time, also add some info about the platform and how to reproduce it, or at least a link to the build logs.

I've reported it here: https://github.com/numpy/numpy/issues/402

Cheers,
--
Sandro Tosi (aka morph, morpheus, matrixhasu)
My website: http://matrixhasu.altervista.org/
Me at Debian: http://wiki.debian.org/SandroTosi

From ognen at enthought.com  Fri Aug 31 17:08:07 2012
From: ognen at enthought.com (Ognen Duzlevski)
Date: Fri, 31 Aug 2012 16:08:07 -0500
Subject: [Numpy-discussion] Temporary error accessing NumPy tickets
In-Reply-To:
References:
Message-ID:

On Fri, Aug 31, 2012 at 11:35 AM, Pauli Virtanen wrote:
> Ondřej Čertík <ondrej.certik at gmail.com> writes:
>> When I access tickets, for example:
>>
>> http://projects.scipy.org/numpy/ticket/2185
>>
>> then sometimes I get:
>>
>> Trac detected an internal error:
>> OperationalError: database is locked
>>
>> For example yesterday. A refresh in about a minute fixed the problem. Today the error is still there at the moment.
>
> The failures are probably partly triggered by the machine running out of memory. It runs services on mod_python, which apparently slowly leaks. Someone (who?) with root access on the machine needs to restart Apache. (Note: "apachectl graceful" is not enough to correct this; it needs a real restart of the process.)

I do that regularly.

> The longer-term solution is to move off mod_python (to mod_wsgi, most likely; going to CGI would create other performance problems), or to transition the stuff there to a beefier server.

There is also Trac. Between Trac and mod_python, the load on the machine goes up to 20+ at times. I spent some time trying to figure out a move of the current machine to a beefier Amazon instance (and I am not opposed to it), but there is a lot of cruft and strange setup on it, and it is not really clear what is what and why it is running. That would also be a case of solving a problem by throwing more hardware at it; if everyone is OK with that, fine. I personally think moving away from Trac (which IMHO is bloated and awkward, in addition to having a very weird way of being administered) would be a better idea.
My $0.02,
Ognen

From pav at iki.fi  Fri Aug 31 19:29:32 2012
From: pav at iki.fi (Pauli Virtanen)
Date: Sat, 01 Sep 2012 02:29:32 +0300
Subject: [Numpy-discussion] Temporary error accessing NumPy tickets
In-Reply-To:
References:
Message-ID:

01.09.2012 00:08, Ognen Duzlevski wrote:
[clip]
> I personally think moving away from Trac (which IMHO is bloated and awkward, in addition to having a very weird way of being administered) would be a better idea.

Yes, moving away from Trac is planned, both for Numpy and Scipy. Also agreed on the point of clumsy administration.

This however leaves the other services still on the machine, although after dropping Trac, the juice probably is enough for them.

--
Pauli Virtanen

From ondrej.certik at gmail.com  Fri Aug 31 20:35:49 2012
From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=)
Date: Fri, 31 Aug 2012 17:35:49 -0700
Subject: [Numpy-discussion] How to debug reference counting errors
In-Reply-To: <50409E73.5070900@astro.uio.no>
References: <50409E73.5070900@astro.uio.no>
Message-ID:

Hi Dag,

On Fri, Aug 31, 2012 at 4:22 AM, Dag Sverre Seljebotn wrote:
> On 08/31/2012 09:03 AM, Ondřej Čertík wrote:
>> Hi,
>>
>> There is a segfault reported here:
>>
>> http://projects.scipy.org/numpy/ticket/1588
>>
>> I've managed to isolate the problem and even provide a simple patch that fixes it here:
>>
>> https://github.com/numpy/numpy/issues/398
>>
>> However, the patch simply doesn't decrease the proper reference, so it might leak. I've used bisection (took the whole evening, unfortunately...) but the good news is that I've isolated the commits that actually broke it. See the github issue #398 for details, diffs etc.
In addition to the INCREF/DECREF you need to > insert new "GIVEREF" and "GOTREF" calls (which are noops in a normal > compile) to declare where you get and give away a reference. When > Cython-generated sources are enabled with -DCYTHON_REFNANNY, > INCREF/DECREF/GIVEREF/GOTREF are tracked within each function and a > failure is raised if the function violates any contract. I see. That's a nice option. For my own code, I never touch the reference counting by hand and rather just use Cython. In the meantime, Mark fixed it: https://github.com/numpy/numpy/pull/400 https://github.com/numpy/numpy/pull/405 Mark, thanks again for this. That saved me a lot of time. Ondrej From mwwiebe at gmail.com Fri Aug 31 20:56:34 2012 From: mwwiebe at gmail.com (Mark Wiebe) Date: Fri, 31 Aug 2012 17:56:34 -0700 Subject: [Numpy-discussion] How to debug reference counting errors In-Reply-To: References: <50409E73.5070900@astro.uio.no> Message-ID: On Fri, Aug 31, 2012 at 5:35 PM, Ond?ej ?ert?k wrote: > Hi Dag, > > On Fri, Aug 31, 2012 at 4:22 AM, Dag Sverre Seljebotn > wrote: > > On 08/31/2012 09:03 AM, Ond?ej ?ert?k wrote: > >> Hi, > >> > >> There is segfault reported here: > >> > >> http://projects.scipy.org/numpy/ticket/1588 > >> > >> I've managed to isolate the problem and even provide a simple patch, > >> that fixes it here: > >> > >> https://github.com/numpy/numpy/issues/398 > >> > >> however the patch simply doesn't decrease the proper reference, so it > >> might leak. I've used > >> bisection (took the whole evening unfortunately...) but the good news > >> is that I've isolated commits > >> that actually broke it. See the github issue #398 for details, diffs > etc. > >> > >> Unfortunately, it's 12 commits from Mark and the individual commits > >> raise exception on the segfaulting code, > >> so I can't pin point the problem further. > >> > >> In general, how can I debug this sort of problem? I tried to use > >> valgrind, with a debugging build of numpy, > >> but it provides tons of false (?) positives: > https://gist.github.com/3549063 > >> > >> Mark, by looking at the changes that broke it, as well as at my "fix", > >> do you see where the problem could be? > >> > >> I suspect it is something with the changes in PyArray_FromAny() or > >> PyArray_FromArray() in ctors.c. > >> But I don't see anything so far that could cause it. > >> > >> Thanks for any help. This is one of the issues blocking the 1.7.0 > release. > > > > IIRC you can recompile Python with some support for detecting memory > > leaks. One of the issues with using Valgrind, after suppressing the > > false positives, is that Python uses its own memory allocator so that > > sits between the bug and what Valgrind detects. So at least recompile > > Python to not do that. > > Right. Compiling with "--without-pymalloc" (per README.valgrind as > suggested > above by Richard) should improve things a lot. Thanks for the tip. > > > > > As for hardening the NumPy source in general, you should at least be > > aware of these two options: > > > > 1) David Malcolm (dmalcolm at redhat.com) was writing a static code > > analysis plugin for gcc that would check every routine that the > > reference count semantics was correct. (I don't know how far he's got > > with that.) > > > > 2) In Cython we have a "reference count nanny". This requires changes to > > all the code though, so not an option just for finding this bug, just > > thought I'd mention it. 
> In addition to the INCREF/DECREF you need to insert new "GIVEREF" and "GOTREF" calls (which are noops in a normal compile) to declare where you get and give away a reference. When Cython-generated sources are enabled with -DCYTHON_REFNANNY, INCREF/DECREF/GIVEREF/GOTREF are tracked within each function and a failure is raised if the function violates any contract.

I see. That's a nice option. For my own code, I never touch the reference counting by hand and rather just use Cython.

In the meantime, Mark fixed it:

https://github.com/numpy/numpy/pull/400
https://github.com/numpy/numpy/pull/405

Mark, thanks again for this. That saved me a lot of time.

Ondrej

From mwwiebe at gmail.com  Fri Aug 31 20:56:34 2012
From: mwwiebe at gmail.com (Mark Wiebe)
Date: Fri, 31 Aug 2012 17:56:34 -0700
Subject: [Numpy-discussion] How to debug reference counting errors
In-Reply-To:
References: <50409E73.5070900@astro.uio.no>
Message-ID:

On Fri, Aug 31, 2012 at 5:35 PM, Ondřej Čertík wrote:
> Hi Dag,
>
> On Fri, Aug 31, 2012 at 4:22 AM, Dag Sverre Seljebotn wrote:
> > On 08/31/2012 09:03 AM, Ondřej Čertík wrote:
[...]
> > IIRC you can recompile Python with some support for detecting memory leaks. One of the issues with using Valgrind, after suppressing the false positives, is that Python uses its own memory allocator, so that sits between the bug and what Valgrind detects. So at least recompile Python to not do that.
>
> Right.
> Compiling with "--without-pymalloc" (per README.valgrind, as suggested above by Richard) should improve things a lot. Thanks for the tip.
>
[...]
>
> In the meantime, Mark fixed it:
>
> https://github.com/numpy/numpy/pull/400
> https://github.com/numpy/numpy/pull/405
>
> Mark, thanks again for this. That saved me a lot of time.

No problem. The way I prefer to deal with this kind of error is to use C++ smart pointers. C++11's unique_ptr and boost's intrusive_ptr are both useful for painlessly managing this kind of reference counting headache.

-Mark

From ondrej.certik at gmail.com  Fri Aug 31 21:05:32 2012
From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=)
Date: Fri, 31 Aug 2012 18:05:32 -0700
Subject: [Numpy-discussion] How to debug reference counting errors
In-Reply-To:
References: <50409E73.5070900@astro.uio.no>
Message-ID:

On Fri, Aug 31, 2012 at 5:56 PM, Mark Wiebe wrote:
> On Fri, Aug 31, 2012 at 5:35 PM, Ondřej Čertík wrote:
[...]
> No problem. The way I prefer to deal with this kind of error is to use C++ smart pointers. C++11's unique_ptr and boost's intrusive_ptr are both useful for painlessly managing this kind of reference counting headache.

Oh yes. I prefer to use Trilinos' RCP, which is a shared pointer (just like in C++11) but has better debugging info if something goes wrong. It can be compiled in two modes -- one is slower and it can't segfault, and the other is optimized, where most operations are at native raw pointer speed, but it can segfault.

Ondrej