From cournape at gmail.com Sat Nov 1 01:10:38 2008
From: cournape at gmail.com (David Cournapeau)
Date: Sat, 1 Nov 2008 14:10:38 +0900
Subject: [Numpy-discussion] Complete LAPACK needed (Frank Lagor)
In-Reply-To: <9fddf64a0810311242i451df222xf4026aa74f74b7f7@mail.gmail.com>
References: <9fddf64a0810311242i451df222xf4026aa74f74b7f7@mail.gmail.com>
Message-ID: <5b8d13220810312210n57e55b75y1377783f1e5ca6f2@mail.gmail.com>

On Sat, Nov 1, 2008 at 4:42 AM, Frank Lagor wrote:
> ImportError: liblapack.so: cannot open shared object file: No such file or
> directory

Where did you install atlas? What does ldd say about the lapack_lite.so
module?

More likely, your problem is that you installed atlas into a directory not
known to the OS. You should either install atlas in a directory known to
it, or use things like LD_LIBRARY_PATH. For example, if you installed
atlas in $HOME/local/lib:

LD_LIBRARY_PATH=$HOME/local/lib python -c "import numpy"

David

From robert.kern at gmail.com Sat Nov 1 04:07:44 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 1 Nov 2008 03:07:44 -0500
Subject: [Numpy-discussion] Simplifying compiler optimization flags logic (fortran compilers)
In-Reply-To: <490ADD1B.9050207@ar.media.kyoto-u.ac.jp>
References: <490ADD1B.9050207@ar.media.kyoto-u.ac.jp>
Message-ID: <3d375d730811010107m110c0eaendaf5a1149fdce356@mail.gmail.com>

On Fri, Oct 31, 2008 at 05:25, David Cournapeau wrote:
> Hi,
>
> I was wondering whether it was really worth having a lot of magic
> going on in fcompilers for flags like -msse2 and co (everything done in
> get_flags_arch, for example). It is quite fragile (we had several
> problems wrt buggy compilers, buggy CPU detection), and I am not sure it
> buys us much anyway. Did some people notice a difference between
> gfortran -O3 -msse2 and gfortran -O3 ?

You're probably right.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth." -- Umberto Eco

From dfranci at seas.upenn.edu Sat Nov 1 10:20:32 2008
From: dfranci at seas.upenn.edu (Frank Lagor)
Date: Sat, 1 Nov 2008 10:20:32 -0400
Subject: [Numpy-discussion] Complete LAPACK needed (Frank Lagor)
In-Reply-To: <9fddf64a0810311242i451df222xf4026aa74f74b7f7@mail.gmail.com>
References: <9fddf64a0810311242i451df222xf4026aa74f74b7f7@mail.gmail.com>
Message-ID: <9fddf64a0811010720i23dc41ffvc7409a0c122f5e1e@mail.gmail.com>

Problem solved -- this posting is just to complete the thread and document
it in case others have similar issues.

The previous error:

ImportError: liblapack.so: cannot open shared object file: No such file or
directory

was solved simply by checking that the environment variables were set
properly. I set the BLAS, LAPACK, and LD_LIBRARY_PATH variables (I'm not
sure if the LD one was needed, but this worked).

The next issue I ran into was that the error message changed to
"undefined symbol _dgesdd", which I googled and discovered was a routine
for singular value decomposition that was missing -- meaning that my
liblapack.so file was incomplete. I checked this by doing

$ strings liblapack.so | grep dgesdd

and noticing that the output was blank. The output was not blank, however,
in the lapack_LINUX.a install from the netlib lapack, so that meant that my
ATLAS install was incorrect. I changed the way that I specified the
configure for ATLAS. I had to use the --with-netlib-lapack option (as
stated in one of the installation files) instead of the "-Ss flapack
pathname" that is specified in the command line help. Finally, the
install worked.

-Frank
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From david at ar.media.kyoto-u.ac.jp Sat Nov 1 10:10:36 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Sat, 01 Nov 2008 23:10:36 +0900
Subject: [Numpy-discussion] Complete LAPACK needed (Frank Lagor)
In-Reply-To: <9fddf64a0811010720i23dc41ffvc7409a0c122f5e1e@mail.gmail.com>
References: <9fddf64a0810311242i451df222xf4026aa74f74b7f7@mail.gmail.com>
	<9fddf64a0811010720i23dc41ffvc7409a0c122f5e1e@mail.gmail.com>
Message-ID: <490C635C.40205@ar.media.kyoto-u.ac.jp>

Frank Lagor wrote:
> Problem solved -- this posting is just to complete the thread and
> document it in case others have similar issues.

Glad you could solve it.

> The previous error:
> ImportError: liblapack.so: cannot open shared object file: No such
> file or directory
>
> was solved simply by checking that the environment variables were set
> properly. I set the BLAS, LAPACK, and LD_LIBRARY_PATH variables (I'm not
> sure if the LD one was needed, but this worked).

Yes, the LD_ one is necessary (LD stands for loader).

> The next issue I ran into was that the error message changed to
> "undefined symbol _dgesdd", which I googled and discovered was a
> routine for singular value decomposition that was missing -- meaning
> that my liblapack.so file was incomplete. I checked this by doing
> $ strings liblapack.so | grep dgesdd

Note that strings is not really the best way to check that: nm is.
strings only looks for strings in the binary; it does not look up
symbols (e.g. you could find a string which does not correspond to any
function the loader and/or the linker can find).

> and noticing that the output was blank. The output was not blank,
> however, in the lapack_LINUX.a install from the netlib lapack, so that
> meant that my ATLAS install was incorrect. I changed the way that I
> specified the configure for ATLAS. I had to use the
> --with-netlib-lapack option (as stated in one of the installation
> files) instead of the "-Ss flapack pathname"

Ah yes, I could never manage to make this option work either. I always
use --with-netlib-lapack when I need to build a custom ATLAS.

cheers,

David
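A minimal sketch of the distinction David draws above: a dynamic symbol
either resolves or it does not, and ctypes can test that directly from
Python. The library path here is an assumption; point it at the
liblapack.so in question.

    # Sketch: verify that liblapack.so actually exports dgesdd, rather than
    # merely containing the string "dgesdd" somewhere in the binary.
    import ctypes

    lapack = ctypes.CDLL("/usr/lib/liblapack.so")  # assumed path; adjust
    try:
        lapack.dgesdd_  # Fortran symbols usually carry a trailing underscore
        print("dgesdd is exported")
    except AttributeError:
        print("dgesdd is missing -- this LAPACK build is incomplete")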
From millman at berkeley.edu Sat Nov 1 13:15:29 2008
From: millman at berkeley.edu (Jarrod Millman)
Date: Sat, 1 Nov 2008 10:15:29 -0700
Subject: [Numpy-discussion] Simplifying compiler optimization flags logic (fortran compilers)
In-Reply-To: <3d375d730811010107m110c0eaendaf5a1149fdce356@mail.gmail.com>
References: <490ADD1B.9050207@ar.media.kyoto-u.ac.jp>
	<3d375d730811010107m110c0eaendaf5a1149fdce356@mail.gmail.com>
Message-ID:

On Sat, Nov 1, 2008 at 1:07 AM, Robert Kern wrote:
> On Fri, Oct 31, 2008 at 05:25, David Cournapeau wrote:
>> I was wondering whether it was really worth having a lot of magic
>> going on in fcompilers for flags like -msse2 and co (everything done in
>> get_flags_arch, for example). It is quite fragile (we had several
>> problems wrt buggy compilers, buggy CPU detection), and I am not sure it
>> buys us much anyway. Did some people notice a difference between
>> gfortran -O3 -msse2 and gfortran -O3 ?
>
> You're probably right.

I think it is probably best to take out some of the magic in fcompilers
as well.

--
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/

From dfranci at seas.upenn.edu Sat Nov 1 13:30:57 2008
From: dfranci at seas.upenn.edu (Frank Lagor)
Date: Sat, 1 Nov 2008 13:30:57 -0400
Subject: [Numpy-discussion] Complete LAPACK needed (Frank Lagor)
Message-ID: <9fddf64a0811011030t92d10f2q8419b3348821bd10@mail.gmail.com>

Thanks so much for your help, David. I'm sorry I did not receive your
posts previously -- I have digest mode on and there is a bit of a delay.
I'll try to change my options the next time I post a request.

Thanks so much again,
Frank
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From michael.abshoff at googlemail.com Sat Nov 1 15:09:31 2008
From: michael.abshoff at googlemail.com (Michael Abshoff)
Date: Sat, 01 Nov 2008 12:09:31 -0700
Subject: [Numpy-discussion] Simplifying compiler optimization flags logic (fortran compilers)
In-Reply-To:
References: <490ADD1B.9050207@ar.media.kyoto-u.ac.jp>
	<3d375d730811010107m110c0eaendaf5a1149fdce356@mail.gmail.com>
Message-ID: <490CA96B.4080803@gmail.com>

Jarrod Millman wrote:
> On Sat, Nov 1, 2008 at 1:07 AM, Robert Kern wrote:

Hi,

>> On Fri, Oct 31, 2008 at 05:25, David Cournapeau wrote:
>>> I was wondering whether it was really worth having a lot of magic
>>> going on in fcompilers for flags like -msse2 and co (everything done in
>>> get_flags_arch, for example). It is quite fragile (we had several
>>> problems wrt buggy compilers, buggy CPU detection), and I am not sure it
>>> buys us much anyway. Did some people notice a difference between
>>> gfortran -O3 -msse2 and gfortran -O3 ?
>> You're probably right.

we removed setting the various SSE flags in Sage's numpy install because
they caused segfaults when using gfortran. I don't think that there is a
significant performance difference with SSE for that code, because we use
Lapack and ATLAS built with SSE when it is available.

> I think it is probably best to take out some of the magic in fcompilers as well.

Cheers,

Michael

From abhimanyulad at gmail.com Sun Nov 2 13:24:37 2008
From: abhimanyulad at gmail.com (Abhimanyu Lad)
Date: Sun, 2 Nov 2008 18:24:37 +0000 (UTC)
Subject: [Numpy-discussion] numpy function to compute sample ranks
Message-ID:

Hi,

Is there a direct or indirect way in numpy to compute the sample ranks of a
given array, i.e. the equivalent of rank() in R.

I am looking for:
rank(array([6,8,4,1,9])) -> array([2,3,1,0,4])

Is there some clever use of argsort() that I am missing?

Thanks,
Abhi

From kwgoodman at gmail.com Sun Nov 2 13:35:11 2008
From: kwgoodman at gmail.com (Keith Goodman)
Date: Sun, 2 Nov 2008 10:35:11 -0800
Subject: [Numpy-discussion] numpy function to compute sample ranks
In-Reply-To:
References:
Message-ID:

On Sun, Nov 2, 2008 at 10:24 AM, Abhimanyu Lad wrote:
> Is there a direct or indirect way in numpy to compute the sample ranks of a
> given array, i.e. the equivalent of rank() in R.
>
> I am looking for:
> rank(array([6,8,4,1,9])) -> array([2,3,1,0,4])
>
> Is there some clever use of argsort() that I am missing?

If there are no NaNs and no ties, then:

>> x = np.array([6,8,4,1,9])
>> x.argsort().argsort()
array([2, 3, 1, 0, 4])

If you have ties, then scipy has a ranking function.
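A short sketch putting the two suggestions side by side (the tie-handling
function is scipy.stats.rankdata, mentioned in the next reply):

    import numpy as np
    from scipy.stats import rankdata

    x = np.array([6, 8, 4, 1, 9])
    print(x.argsort().argsort())  # [2 3 1 0 4] -- 0-based ranks, ties unhandled
    print(rankdata(x))            # [ 3.  4.  2.  1.  5.] -- 1-based, averages ties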
From paul at rudin.co.uk Sun Nov 2 14:23:06 2008
From: paul at rudin.co.uk (Paul Rudin)
Date: Sun, 02 Nov 2008 19:23:06 +0000
Subject: [Numpy-discussion] computing average distance
Message-ID: <874p2qrkyt.fsf@rudin.co.uk>

I'm experimenting with numpy and I've just written the code below, which
computes the thing I want (I think). self.bits is an RxRxR array
representing a voxelized 3d model -- values are either 0 or 1. I can't
help thinking that there must be a much nicer way to do it. Any
suggestions?

centre = numpy.array(scipy.ndimage.measurements.center_of_mass(self.bits))

vectors = []
for x in xrange(R):
    for y in xrange(R):
        for z in xrange(R):
            if self.bits[x,y,z]:
                vectors.append([x,y,z])

vectors = numpy.array(vectors)
distances = numpy.sqrt(numpy.sum((vectors-centre) ** 2.0, axis=1))
av_dist = numpy.average(distances)

From wesmckinn at gmail.com Sun Nov 2 14:37:30 2008
From: wesmckinn at gmail.com (Wes McKinney)
Date: Sun, 2 Nov 2008 14:37:30 -0500
Subject: [Numpy-discussion] numpy function to compute sample ranks
In-Reply-To:
References:
Message-ID: <5D71FC6F-8699-4DAB-9EB0-88293E42004A@gmail.com>

Try rankdata in scipy.stats

On Nov 2, 2008, at 1:35 PM, Keith Goodman wrote:

> On Sun, Nov 2, 2008 at 10:24 AM, Abhimanyu Lad wrote:
>> Is there a direct or indirect way in numpy to compute the sample
>> ranks of a given array, i.e. the equivalent of rank() in R.
>>
>> I am looking for:
>> rank(array([6,8,4,1,9])) -> array([2,3,1,0,4])
>>
>> Is there some clever use of argsort() that I am missing?
>
> If there are no NaNs and no ties, then:
>
>>> x = np.array([6,8,4,1,9])
>>> x.argsort().argsort()
> array([2, 3, 1, 0, 4])
>
> If you have ties, then scipy has a ranking function.

From emmanuelle.gouillart at normalesup.org Sun Nov 2 14:39:39 2008
From: emmanuelle.gouillart at normalesup.org (Emmanuelle Gouillart)
Date: Sun, 2 Nov 2008 20:39:39 +0100
Subject: [Numpy-discussion] computing average distance
In-Reply-To: <874p2qrkyt.fsf@rudin.co.uk>
References: <874p2qrkyt.fsf@rudin.co.uk>
Message-ID: <20081102193939.GE20174@phare.normalesup.org>

Hello Paul,

although I'm not an expert either, it seems to me you could improve your
code a lot by using numpy.mgrid. Below is a short example of what you
could do:

coordinates = numpy.mgrid[0:R, 0:R, 0:R]
X, Y, Z = coordinates[0].ravel(), coordinates[1].ravel(), coordinates[2].ravel()
bits = self.bits.ravel()
distances = numpy.sqrt((X[bits==1]-centre[0])**2 +
    (Y[bits==1]-centre[0])**2 + (Z[bits==1]-centre[0])**2)

There must be a way to do it without flattening the arrays, but I haven't
found it. Anyway, you can surely do what you want without a loop!

Cheers,

Emmanuelle

On Sun, Nov 02, 2008 at 07:23:06PM +0000, Paul Rudin wrote:
> I'm experimenting with numpy and I've just written the code below, which
> computes the thing I want (I think). self.bits is an RxRxR array
> representing a voxelized 3d model -- values are either 0 or 1. I can't
> help thinking that there must be a much nicer way to do it. Any
> suggestions?
>
> centre = numpy.array(scipy.ndimage.measurements.center_of_mass(self.bits))
> vectors = []
> for x in xrange(R):
>     for y in xrange(R):
>         for z in xrange(R):
>             if self.bits[x,y,z]:
>                 vectors.append([x,y,z])
> vectors = numpy.array(vectors)
> distances = numpy.sqrt(numpy.sum((vectors-centre) ** 2.0, axis=1))
> av_dist = numpy.average(distances)
From tjhnson at gmail.com Sun Nov 2 14:34:38 2008
From: tjhnson at gmail.com (T J)
Date: Sun, 2 Nov 2008 11:34:38 -0800
Subject: [Numpy-discussion] Installation Trouble
In-Reply-To:
References:
Message-ID:

Sorry....wrong list.

On Sun, Nov 2, 2008 at 11:34 AM, T J wrote:
> Hi,
>
> I'm having trouble installing PyUblas 0.93.1 (same problems from the
> current git repository). I'm in ubuntu 8.04 with standard boost
> packages (1.34.1, I believe). Do you have any suggestions?
>
> Thanks!

From gael.varoquaux at normalesup.org Sun Nov 2 15:41:20 2008
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sun, 2 Nov 2008 21:41:20 +0100
Subject: [Numpy-discussion] computing average distance
In-Reply-To: <20081102193939.GE20174@phare.normalesup.org>
References: <874p2qrkyt.fsf@rudin.co.uk>
	<20081102193939.GE20174@phare.normalesup.org>
Message-ID: <20081102204120.GB28969@phare.normalesup.org>

Hey Emmanuelle, ( :>)

On Sun, Nov 02, 2008 at 08:39:39PM +0100, Emmanuelle Gouillart wrote:
> although I'm not an expert either, it seems to me you could improve your
> code a lot by using numpy.mgrid. Below is a short example of what you
> could do:

> coordinates = numpy.mgrid[0:R, 0:R, 0:R]
> X, Y, Z = coordinates[0].ravel(), coordinates[1].ravel(), coordinates[2].ravel()
> bits = self.bits.ravel()
> distances = numpy.sqrt((X[bits==1]-centre[0])**2 +
>     (Y[bits==1]-centre[0])**2 + (Z[bits==1]-centre[0])**2)

> There must be a way to do it without flattening the arrays, but I haven't
> found it. Anyway, you can surely do what you want without a loop!

A few cosmetic comments: this code is very good, but it can be slightly
improved.

First, you can use numpy.indices instead of numpy.mgrid for what you want
to do (see
http://docs.scipy.org/doc/numpy/reference/generated/numpy.indices.html#numpy.indices
):

numpy.mgrid[0:R, 0:R, 0:R] == numpy.indices((R, R, R))

Second, I don't like to use explicit indices [0], [1], [2] for x, y, z.
It is so easy to get it wrong. The good news is that you can do better
with Python:

* eg define the center coordinates as so:

x0, y0, z0 = numpy.array(scipy.ndimage.measurements.center_of_mass(self.bits))

this will avoid what seems to be an error in Emmanuelle's answer. By the
way, I am not too sure why Paul has used a numpy.array here. For this
part of the code, it is not terribly useful.

* Same thing for the call to mgrid:

X, Y, Z = numpy.indices((R, R, R))

Finally, when you create the raveled bits you are already making a copy,
so you might as well define a variable called 'mask' which is equal to:

mask = (self.bits.ravel() == 1)

this will make the code more readable. And lastly, if the shape of bits
is (R, R, R), you don't need to ravel.

With all these comments (supposing the shape of bits is (R, R, R)), the
final code would look like:

x0, y0, z0 = scipy.ndimage.measurements.center_of_mass(self.bits)
X, Y, Z = numpy.indices((R, R, R))
mask = (self.bits == 1)
distances = numpy.sqrt( (X[mask] - x0)**2
                      + (Y[mask] - y0)**2
                      + (Z[mask] - z0)**2 )

HTH,

Gaël
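For reference, a self-contained, runnable version of Gaël's final code;
the random bits array is a stand-in for self.bits, and the
scipy.ndimage.measurements path follows the thread's scipy version:

    import numpy as np
    import scipy.ndimage

    R = 32
    bits = (np.random.rand(R, R, R) > 0.5).astype(int)  # stand-in voxel model

    x0, y0, z0 = scipy.ndimage.measurements.center_of_mass(bits)
    X, Y, Z = np.indices((R, R, R))
    mask = bits == 1
    distances = np.sqrt((X[mask] - x0)**2 + (Y[mask] - y0)**2 + (Z[mask] - z0)**2)
    av_dist = distances.mean()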
From simpson at math.toronto.edu Sun Nov 2 16:02:38 2008
From: simpson at math.toronto.edu (Gideon Simpson)
Date: Sun, 2 Nov 2008 16:02:38 -0500
Subject: [Numpy-discussion] fink python26 and numpy 1.2.1
Message-ID: <26834DBA-F09E-48F1-B299-9ADB01967741@math.toronto.edu>

Not sure if this is an issue with numpy or an issue with fink python
2.6, but when trying to build numpy, I get the following error:

gcc -L/sw/lib -bundle /sw/lib/python2.6/config -lpython2.6
build/temp.macosx-10.5-i386-2.6/numpy/core/src/multiarraymodule.o
-o build/lib.macosx-10.5-i386-2.6/numpy/core/multiarray.so
ld: library not found for -lpython2.6
collect2: ld returned 1 exit status
ld: library not found for -lpython2.6
collect2: ld returned 1 exit status
error: Command "gcc -L/sw/lib -bundle /sw/lib/python2.6/config
-lpython2.6 build/temp.macosx-10.5-i386-2.6/numpy/core/src/
multiarraymodule.o -o build/lib.macosx-10.5-i386-2.6/numpy/core/
multiarray.so" failed with exit status 1

-gideon

From michael.abshoff at googlemail.com Sun Nov 2 16:14:36 2008
From: michael.abshoff at googlemail.com (Michael Abshoff)
Date: Sun, 02 Nov 2008 13:14:36 -0800
Subject: [Numpy-discussion] fink python26 and numpy 1.2.1
In-Reply-To: <26834DBA-F09E-48F1-B299-9ADB01967741@math.toronto.edu>
References: <26834DBA-F09E-48F1-B299-9ADB01967741@math.toronto.edu>
Message-ID: <490E183C.80201@gmail.com>

Gideon Simpson wrote:
> Not sure if this is an issue with numpy or an issue with fink python
> 2.6, but when trying to build numpy, I get the following error:

Unfortunately numpy 1.2.x does not support Python 2.6. IIRC support is
planned for numpy 1.3.

> gcc -L/sw/lib -bundle /sw/lib/python2.6/config -lpython2.6 build/
> temp.macosx-10.5-i386-2.6/numpy/core/src/multiarraymodule.o -o build/
> lib.macosx-10.5-i386-2.6/numpy/core/multiarray.so
> ld: library not found for -lpython2.6
> collect2: ld returned 1 exit status
> ld: library not found for -lpython2.6
> collect2: ld returned 1 exit status
> error: Command "gcc -L/sw/lib -bundle /sw/lib/python2.6/config -
> lpython2.6 build/temp.macosx-10.5-i386-2.6/numpy/core/src/
> multiarraymodule.o -o build/lib.macosx-10.5-i386-2.6/numpy/core/
> multiarray.so" failed with exit status 1
>
> -gideon

Cheers,

Michael

From paul at rudin.co.uk Mon Nov 3 00:52:30 2008
From: paul at rudin.co.uk (Paul Rudin)
Date: Mon, 03 Nov 2008 05:52:30 +0000
Subject: [Numpy-discussion] computing average distance
References: <874p2qrkyt.fsf@rudin.co.uk>
	<20081102193939.GE20174@phare.normalesup.org>
	<20081102204120.GB28969@phare.normalesup.org>
Message-ID: <87d4hd5pb5.fsf@rudin.co.uk>

Gael Varoquaux writes:

> Hey Emmanuelle, ( :>)
>
> On Sun, Nov 02, 2008 at 08:39:39PM +0100, Emmanuelle Gouillart wrote:

Thanks both of you.

> this will avoid what seems to be an error in Emmanuelle's answer. By
> the way, I am not too sure why Paul has used a numpy.array here. For
> this part of the code, it is not terribly useful.

I'd used array because of what I do next. Having found the average
distance from the centre, I want to scale the image so that the average
distance is about R/2, and then move the image so that it's centred
around the middle of the space. This is quite slow too...

scale_factor = (R/2)/av_dist
# scale so that our average distance from the centre is R/2
def transformer(point):
    return tuple(((numpy.array(point) - centre) * scale_factor) + centre)
self.bits = scipy.ndimage.geometric_transform(self.bits, transformer)

# move so that we're centred in the middle of the voxelspace
self.bits = scipy.ndimage.shift(self.bits, midpoint-centre)
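The transformer above is an affine map, so a hedged sketch of a faster
route is scipy.ndimage.affine_transform, which applies the same mapping
in C instead of calling back into Python for every voxel. R, bits,
centre and av_dist are assumed defined as in Paul's code:

    import numpy as np
    import scipy.ndimage

    s = (R / 2.0) / av_dist
    # affine_transform samples the input at matrix*p + offset for each output
    # voxel p; offset = (1 - s)*centre turns that into (p - centre)*s + centre,
    # the same mapping as transformer() above.
    offset = (1 - s) * np.asarray(centre)
    bits = scipy.ndimage.affine_transform(bits, np.array([s, s, s]), offset=offset)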
From david at ar.media.kyoto-u.ac.jp Mon Nov 3 01:35:41 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Mon, 03 Nov 2008 15:35:41 +0900
Subject: [Numpy-discussion] fink python26 and numpy 1.2.1
In-Reply-To: <490E183C.80201@gmail.com>
References: <26834DBA-F09E-48F1-B299-9ADB01967741@math.toronto.edu>
	<490E183C.80201@gmail.com>
Message-ID: <490E9BBD.5010200@ar.media.kyoto-u.ac.jp>

Michael Abshoff wrote:
>
> Unfortunately numpy 1.2.x does not support Python 2.6. IIRC support is
> planned for numpy 1.3.

Although it is true that it is not supported, it should at least build
on most if not all platforms where numpy used to run under python 2.5.

Not finding -lpython2.6 is more likely a bug/installation problem from
fink (unless you are ready to deal with multiple-version problems of
python, I would advise against using fink: it makes it difficult to be
sure there are no conflicts between the system python, fink python and
python.org python. It is not impossible, of course, but it complicates
matters a lot in my own experience).

cheers,

David

From bsouthey at gmail.com Mon Nov 3 09:44:19 2008
From: bsouthey at gmail.com (Bruce Southey)
Date: Mon, 03 Nov 2008 08:44:19 -0600
Subject: [Numpy-discussion] Simplifying compiler optimization flags logic (fortran compilers)
In-Reply-To: <490CA96B.4080803@gmail.com>
References: <490ADD1B.9050207@ar.media.kyoto-u.ac.jp>
	<3d375d730811010107m110c0eaendaf5a1149fdce356@mail.gmail.com>
	<490CA96B.4080803@gmail.com>
Message-ID: <490F0E43.9090107@gmail.com>

Michael Abshoff wrote:
> Jarrod Millman wrote:
>> On Sat, Nov 1, 2008 at 1:07 AM, Robert Kern wrote:
>
> Hi,
>
>>> On Fri, Oct 31, 2008 at 05:25, David Cournapeau wrote:
>>>> I was wondering whether it was really worth having a lot of magic
>>>> going on in fcompilers for flags like -msse2 and co (everything done in
>>>> get_flags_arch, for example). It is quite fragile (we had several
>>>> problems wrt buggy compilers, buggy CPU detection), and I am not sure it
>>>> buys us much anyway. Did some people notice a difference between
>>>> gfortran -O3 -msse2 and gfortran -O3 ?
>>> You're probably right.
>
> we removed setting the various SSE flags in Sage's numpy install because
> they caused segfaults when using gfortran. I don't think that there is a
> significant performance difference with SSE for that code because we use
> Lapack and ATLAS built with SSE when it is available.
>
>> I think it is probably best to take out some of the magic in fcompilers as well.
>
> Cheers,
>
> Michael
Hi,

I just wanted to point out that the man page on Linux and the GCC manual
for the i386 and x86-64 options
(http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/i386-and-x86_002d64-Options.html#i386-and-x86_002d64-Options)
state:

"For the i386 compiler, you need to use -march=cpu-type, -msse or -msse2
switches to enable SSE extensions and make this option effective. For the
x86-64 compiler, these extensions are enabled by default."

"This is the default choice for the x86-64 compiler."

While x86_64 still accounts for a relatively small proportion of
processors, and I do not know when GCC started this, the SSE flags should
be redundant and can thus be removed as more people use x86_64 processors.

Bruce

From simpson at math.toronto.edu Mon Nov 3 12:06:35 2008
From: simpson at math.toronto.edu (Gideon Simpson)
Date: Mon, 3 Nov 2008 12:06:35 -0500
Subject: [Numpy-discussion] fink python26 and numpy 1.2.1
In-Reply-To: <490E9BBD.5010200@ar.media.kyoto-u.ac.jp>
References: <26834DBA-F09E-48F1-B299-9ADB01967741@math.toronto.edu>
	<490E183C.80201@gmail.com> <490E9BBD.5010200@ar.media.kyoto-u.ac.jp>
Message-ID: <74E989D1-5FB9-40BA-8519-E363F7F780FC@math.toronto.edu>

The fink guys fixed a bug so it now at least builds properly with
python 2.6.

-gideon

On Nov 3, 2008, at 1:35 AM, David Cournapeau wrote:
> Michael Abshoff wrote:
>>
>> Unfortunately numpy 1.2.x does not support Python 2.6. IIRC support
>> is planned for numpy 1.3.
>
> Although it is true that it is not supported, it should at least build
> on most if not all platforms where numpy used to run under python 2.5.
>
> Not finding -lpython2.6 is more likely a bug/installation problem from
> fink (unless you are ready to deal with multiple-version problems of
> python, I would advise against using fink: it makes it difficult to be
> sure there are no conflicts between the system python, fink python and
> python.org python. It is not impossible, of course, but it complicates
> matters a lot in my own experience).
>
> cheers,
>
> David

From charlesr.harris at gmail.com Mon Nov 3 12:48:47 2008
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Mon, 3 Nov 2008 10:48:47 -0700
Subject: [Numpy-discussion] computing average distance
In-Reply-To: <874p2qrkyt.fsf@rudin.co.uk>
References: <874p2qrkyt.fsf@rudin.co.uk>
Message-ID:

On Sun, Nov 2, 2008 at 12:23 PM, Paul Rudin wrote:
>
> I'm experimenting with numpy and I've just written the code below, which
> computes the thing I want (I think). self.bits is an RxRxR array
> representing a voxelized 3d model -- values are either 0 or 1. I can't
> help thinking that there must be a much nicer way to do it. Any
> suggestions?
>
> centre = numpy.array(scipy.ndimage.measurements.center_of_mass(self.bits))
>
> vectors = []
> for x in xrange(R):
>     for y in xrange(R):
>         for z in xrange(R):
>             if self.bits[x,y,z]:
>                 vectors.append([x,y,z])
>
> vectors = numpy.array(vectors)
> distances = numpy.sqrt(numpy.sum((vectors-centre) ** 2.0, axis=1))
> av_dist = numpy.average(distances)

Try nonzero:

In [5]: bits = np.random.random_integers(0,1, size=(3,3,3))

In [6]: vectors = nonzero(bits)

In [7]: vectors
Out[7]:
(array([0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2]),
 array([0, 0, 0, 1, 1, 2, 2, 0, 0, 1, 2, 2, 0, 0, 1, 1, 2, 2]),
 array([0, 1, 2, 0, 1, 0, 2, 0, 2, 0, 1, 2, 0, 1, 0, 2, 0, 2]))

The three arrays contain the x, y, z indices.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
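A sketch tying Chuck's tip back to the original question: transposing
nonzero's tuple of index arrays yields the same (npoints, 3) "vectors"
array the triple loop built, after which the distance computation is
unchanged:

    import numpy as np

    bits = np.random.random_integers(0, 1, size=(3, 3, 3))
    vectors = np.transpose(np.nonzero(bits))   # shape (npoints, 3)
    centre = vectors.mean(axis=0)              # centre of mass of a 0/1 array
    av_dist = np.sqrt(((vectors - centre) ** 2).sum(axis=1)).mean()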
From tjhnson at gmail.com Mon Nov 3 13:46:37 2008
From: tjhnson at gmail.com (T J)
Date: Mon, 3 Nov 2008 10:46:37 -0800
Subject: [Numpy-discussion] atlas not found, why?
Message-ID:

Numpy doesn't seem to be finding my atlas install. Have I done something
wrong or misunderstood?

$ cd /usr/lib
$ ls libatlas*
libatlas.a  libatlas.so  libatlas.so.3gf  libatlas.so.3gf.0
$ ls libf77*
libf77blas.a  libf77blas.so  libf77blas.so.3gf  libf77blas.so.3gf.0
$ ls libcblas*
libcblas.a  libcblas.so  libcblas.so.3gf  libcblas.so.3gf.0
$ ls liblapack*
liblapack-3.a  liblapack.a  liblapack_atlas.so  liblapack_atlas.so.3gf.0
liblapackgf-3.so  liblapack.so.3gf  liblapack-3.so  liblapack_atlas.a
liblapack_atlas.so.3gf  liblapackgf-3.a  liblapack.so  liblapack.so.3gf.0

Since these are all in the standard locations, I am building without a
site.cfg. Here is the beginning info:

Running from numpy source directory.
non-existing path in 'numpy/distutils': 'site.cfg'
F2PY Version 2_5972
blas_opt_info:
blas_mkl_info:
  libraries mkl,vml,guide not found in /usr/local/lib
  libraries mkl,vml,guide not found in /usr/lib
  NOT AVAILABLE

atlas_blas_threads_info:
Setting PTATLAS=ATLAS
  NOT AVAILABLE

atlas_blas_info:
  NOT AVAILABLE

/tmp/numpy/numpy/distutils/system_info.py:1340: UserWarning:
    Atlas (http://math-atlas.sourceforge.net/) libraries not found.
    Directories to search for the libraries can be specified in the
    numpy/distutils/site.cfg file (section [atlas]) or by setting
    the ATLAS environment variable.
  warnings.warn(AtlasNotFoundError.__doc__)
blas_info:
  libraries blas not found in /usr/local/lib
  FOUND:
    libraries = ['blas']
    library_dirs = ['/usr/lib']
    language = f77

  FOUND:
    libraries = ['blas']
    library_dirs = ['/usr/lib']
    define_macros = [('NO_ATLAS_INFO', 1)]
    language = f77

lapack_opt_info:
lapack_mkl_info:
mkl_info:
  libraries mkl,vml,guide not found in /usr/local/lib
  libraries mkl,vml,guide not found in /usr/lib
  NOT AVAILABLE

  NOT AVAILABLE

atlas_threads_info:
Setting PTATLAS=ATLAS
numpy.distutils.system_info.atlas_threads_info
  NOT AVAILABLE

atlas_info:
numpy.distutils.system_info.atlas_info
  NOT AVAILABLE

/tmp/numpy/numpy/distutils/system_info.py:1247: UserWarning:
    Atlas (http://math-atlas.sourceforge.net/) libraries not found.
    Directories to search for the libraries can be specified in the
    numpy/distutils/site.cfg file (section [atlas]) or by setting
    the ATLAS environment variable.
  warnings.warn(AtlasNotFoundError.__doc__)
lapack_info:
  libraries lapack not found in /usr/local/lib
  FOUND:
    libraries = ['lapack']
    library_dirs = ['/usr/lib']
    language = f77

  FOUND:
    libraries = ['lapack', 'blas']
    library_dirs = ['/usr/lib']
    define_macros = [('NO_ATLAS_INFO', 1)]
    language = f77
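A quick sanity check of which BLAS/LAPACK a finished numpy build actually
linked against (a sketch; the exact section names vary between numpy
versions):

    import numpy as np
    np.show_config()   # an ATLAS-backed build lists atlas_* sections
                       # instead of the plain blas/lapack fallback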
From forand at gmail.com Mon Nov 3 16:10:13 2008
From: forand at gmail.com (Brian)
Date: Mon, 3 Nov 2008 16:10:13 -0500
Subject: [Numpy-discussion] Compile error during 32 bit compile on 64 bit machine
Message-ID: <61D8FD8C-5C4B-4DAF-AC9F-BA73069D50FA@gmail.com>

Greetings,

I am trying to install numpy on the same system into two different
prefixes, one dedicated to 32 bit stuff and the other to 64 bit. The
machine is a dual CPU Core 2 Duo machine, that is, a 64 bit native
machine. Getting the 64 bit version to work is no problem, but
cross-compiling numpy for 32 bit binaries is very troublesome. I have
compiled all the dependencies using "-m32", including Python itself. All
my other packages compile using the -m32 flag without problems. I have
set CFLAGS, LDFLAGS, BASECFLAGS, CPPFLAGS, and CXXFLAGS all to reflect
the change in bits to use.

Please find below the error I receive when compiling with:
python setup.py build --fcompiler=gnu95. The error appears to be because
it is trying to link _configtest.o without the -m32 flag and thinks it is
making a 64 bit binary. If I edit line 378 of ccompiler.py to add "-m32"
to the link_opts by hand, I can get past this error to a new one which
indicates that main was never declared in the linalg package. If that is
the more useful starting point I can post those errors as well. Any help
would be greatly appreciated.

Regards,
Brian

############### ERROR BELOW #####################

C compiler: gcc -pthread -DNDEBUG -g -fwrapv -O3 -Wall
-Wstrict-prototypes -m32 -m32 -fPIC

compile options: '-I/rh5stuff/32bit/include/python2.6 -Inumpy/core/src
-Inumpy/core/include -I/rh5stuff/32bit/include/python2.6 -c'
gcc: _configtest.c
gcc -pthread _configtest.o -L/rh5stuff/32bit/lib -L/usr/local/lib
-L/usr/lib -o _configtest
/usr/bin/ld: skipping incompatible /usr/lib/libpthread.so when
searching for -lpthread
/usr/bin/ld: skipping incompatible /usr/lib/libpthread.a when
searching for -lpthread
/usr/bin/ld: skipping incompatible /usr/lib/libc.so when searching for -lc
/usr/bin/ld: skipping incompatible /usr/lib/libc.a when searching for -lc
/usr/bin/ld: warning: i386 architecture of input file `_configtest.o'
is incompatible with i386:x86-64 output
_configtest
failure.
removing: _configtest.c _configtest.o _configtest

From tjhnson at gmail.com Mon Nov 3 17:59:15 2008
From: tjhnson at gmail.com (T J)
Date: Mon, 3 Nov 2008 14:59:15 -0800
Subject: [Numpy-discussion] atlas not found, why?
In-Reply-To:
References:
Message-ID:

On Mon, Nov 3, 2008 at 10:46 AM, T J wrote:
>
> Since these are all in the standard locations, I am building without a
> site.cfg. Here is the beginning info:

Apparently, this is not enough. Only if I also set the ATLAS environment
variable am I able to get this working as expected. So while I now have
this working, I'd still like to understand why this is necessary.

This is for Ubuntu 8.10. My previous experience was that no site.cfg was
necessary. This is still the case, but you also need ATLAS defined. I was
also expecting that the following site.cfg would avoid requiring ATLAS to
be defined, but it did not.

[DEFAULT]
library_dirs = /usr/lib:/usr/lib/sse2
include_dirs = /usr/local/include

[blas_opt]
libraries = f77blas, cblas, atlas

[lapack_opt]
libraries = lapack, f77blas, cblas, atlas

So can someone explain why I *must* define ATLAS? I tried a number of
variations on site.cfg and could not get numpy to find atlas with any
of them.
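A workaround sketch mirroring the "export ATLAS=/usr/lib" that made the
build work above; the path is an assumption for this machine's layout:

    import os
    os.environ['ATLAS'] = '/usr/lib'   # point numpy.distutils at ATLAS
    # ... then run "python setup.py build" from the same environment.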
From david at ar.media.kyoto-u.ac.jp Mon Nov 3 23:33:22 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 04 Nov 2008 13:33:22 +0900
Subject: [Numpy-discussion] Compile error during 32 bit compile on 64 bit machine
In-Reply-To: <61D8FD8C-5C4B-4DAF-AC9F-BA73069D50FA@gmail.com>
References: <61D8FD8C-5C4B-4DAF-AC9F-BA73069D50FA@gmail.com>
Message-ID: <490FD092.3030906@ar.media.kyoto-u.ac.jp>

Brian wrote:
> Greetings,
>
> I am trying to install numpy on the same system into two different
> prefixes, one dedicated to 32 bit stuff and the other to 64 bit. The
> machine is a dual CPU Core 2 Duo machine, that is, a 64 bit native
> machine. Getting the 64 bit version to work is no problem, but
> cross-compiling numpy for 32 bit binaries is very troublesome.

I don't think it is possible to cross compile numpy (or any extension
based on distutils for that matter) without bypassing the build system
and building it yourself. Distutils does not support cross compilation
at all (except on windows for 32 vs 64 bits).

How did you build your 32-bit python on the 64-bit OS?

cheers,

David

From matthieu.brucher at gmail.com Tue Nov 4 04:34:12 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Tue, 4 Nov 2008 10:34:12 +0100
Subject: [Numpy-discussion] passing a C array to embedded Python fromC code
In-Reply-To: <7EFBEC7FA86C1141B59B59EEAEE3294FBC38AE@EMAIL2.exchange.electric.net>
References: <7EFBEC7FA86C1141B59B59EEAEE3294FBC38AE@EMAIL2.exchange.electric.net>
Message-ID:

Hi,

I've translated it on my blog (http://matt.eifelle.com/) and published
it this morning.

Matthieu

2008/10/30 Anthony Floyd:
> Hi Chris,
>
>> Matthieu Brucher wrote:
>> > If you can follow a French tutorial, you can go on
>> > http://matthieu-brucher.developpez.com/tutoriels/python/swig-numpy/#LV
>> > to have a skeleton for your issue.
>>
>> That looks very useful -- any chance of an English translation? My one
>> year of high school French is proving useless. Otherwise, the code
>> itself is still quite helpful.
>
> Google does a pretty good job on this one:
>
> http://translate.google.com/translate?u=http%3A%2F%2Fmatthieu-brucher.developpez.com%2Ftutoriels%2Fpython%2Fswig-numpy%2F%23LV&sl=fr&tl=en&hl=en&ie=UTF-8
>
> Anthony.
>
> --
> Anthony Floyd, PhD
> Convergent Manufacturing Technologies Inc.
> 6190 Agronomy Rd, Suite 403
> Vancouver BC V6T 1Z3
> CANADA
>
> Email: Anthony.Floyd at convergent.ca | Tel: 604-822-9682 x102
> WWW: http://www.convergent.ca | Fax: 604-822-9659
>
> CMT is hiring: See http://www.convergent.ca for details

--
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher
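On David's point about distutils and cross compilation above, a hedged
sketch of the usual partial workaround: mainline distutils lets the
environment override the compiler and link commands it was configured
with, which is one way to force -m32 everywhere (numpy.distutils may
still need more care):

    import os
    os.environ['CC'] = 'gcc -m32'
    os.environ['LDSHARED'] = 'gcc -m32 -shared'
    # ... then invoke "python setup.py build" from this environment.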
From Giovanni.Samaey at cs.kuleuven.be Tue Nov 4 05:10:26 2008
From: Giovanni.Samaey at cs.kuleuven.be (Giovanni Samaey)
Date: Tue, 4 Nov 2008 11:10:26 +0100
Subject: [Numpy-discussion] extension module with swig
Message-ID: <1E09D27C-C79E-4517-8E64-4B04516F4484@cs.kuleuven.be>

Dear all,

I am unsure about the correct place to put this question -- if this isn't
the correct list, please let me know which place is more appropriate.

I am trying to build an extension module in python that calls a C routine
that depends on the GNU Scientific Library. I am using swig to wrap the
C file, and the numpy.i file that is included with numpy for the typemaps.
To build the module, I am using distutils, with a modified version of the
setup.py file that was included with the examples (in order to link
against gsl). Although the build succeeds without problems, I am getting
"undefined symbol" errors from gsl when importing the module in python.
I am attaching the source code of the simplest possible example below,
together with the error messages. Note that I am trying to link against
my own gsl in my home directory; there is no system-wide one available.
I get the feeling that I am doing something silly, but I am unable to put
my finger on it. Any help would be greatly appreciated.

As a C file I have: test.c

#include <gsl/gsl_rng.h>
/* (two further #include lines were lost in the archive) */

double var;

void test(double *x, double *y, double *z, int n)
{
    int i;
    const gsl_rng_type *T;
    gsl_rng *r;
    // gsl_rng_env_setup();
    // T = gsl_rng_default;
    T = gsl_rng_mt19937;
    r = gsl_rng_alloc(T);
    for (i=0; i<n; i++) {
        /* loop body lost in the archive */
    }
}

# dot extension module
_test = Extension("_test",
                  ["test_wrap.c",
                   "test.c"],
                  include_dirs = [numpy_include,'/data/home/u0038151/include'],
                  library_dirs = ['/data/home/u0038151/lib']
                  )

# NumyTypemapTests setup
setup(name = ["test"],
      description = "test c module swigging",
      author = "Giovanni Samaey",
      py_modules = ["test"],
      ext_modules = [_test ]
      )

The error message is the following:

0 u0038151 at lo-03-02 dot2 $ python -c "import test"
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "test.py", line 7, in ?
    import _test
ImportError: ./_test.so: undefined symbol: gsl_rng_mt19937

From matthieu.brucher at gmail.com Tue Nov 4 05:24:40 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Tue, 4 Nov 2008 11:24:40 +0100
Subject: [Numpy-discussion] extension module with swig
In-Reply-To: <1E09D27C-C79E-4517-8E64-4B04516F4484@cs.kuleuven.be>
References: <1E09D27C-C79E-4517-8E64-4B04516F4484@cs.kuleuven.be>
Message-ID:

> # dot extension module
> _test = Extension("_test",
>                   ["test_wrap.c",
>                    "test.c"],
>                   include_dirs = [numpy_include,'/data/home/u0038151/include'],
>                   library_dirs = ['/data/home/u0038151/lib']
>                   )
>
> # NumyTypemapTests setup
> setup(name = ["test"],
>       description = "test c module swigging",
>       author = "Giovanni Samaey",
>       py_modules = ["test"],
>       ext_modules = [_test ]
>       )
>
> The error message is the following:
>
> 0 u0038151 at lo-03-02 dot2 $ python -c "import test"
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
>   File "test.py", line 7, in ?
>     import _test
> ImportError: ./_test.so: undefined symbol: gsl_rng_mt19937

Hi,

Where did you add the gsl library when building the extension? I think
the Extension class is missing an argument (library = ["gsl"] or
something like that).

Matthieu

--
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher
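For the record, the form the thread converges on a few messages below
names both gsl and its bundled cblas explicitly; a sketch, with
numpy_include defined as in the original setup.py:

    from distutils.core import Extension

    _test = Extension("_test",
                      ["test_wrap.c", "test.c"],
                      include_dirs=[numpy_include, '/data/home/u0038151/include'],
                      library_dirs=['/data/home/u0038151/lib'],
                      libraries=['gsl', 'gslcblas'])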
From giovanni.samaey at cs.kuleuven.be Tue Nov 4 05:46:14 2008
From: giovanni.samaey at cs.kuleuven.be (Giovanni Samaey)
Date: Tue, 4 Nov 2008 11:46:14 +0100
Subject: [Numpy-discussion] extension module with swig
In-Reply-To:
References: <1E09D27C-C79E-4517-8E64-4B04516F4484@cs.kuleuven.be>
Message-ID: <5D56FECF-2E36-46B5-94FF-C111490490D9@cs.kuleuven.be>

Hi Matthieu,

thank you for your prompt reply.

On 04 Nov 2008, at 11:24, Matthieu Brucher wrote:

>> # dot extension module
>> _test = Extension("_test",
>>                   ["test_wrap.c",
>>                    "test.c"],
>>                   include_dirs = [numpy_include,'/data/home/u0038151/include'],
>>                   library_dirs = ['/data/home/u0038151/lib']
>>                   )

This is where I specify the directory where my header files are, as
well as the directory of the library. If I add, following your
suggestion, libraries = ['gsl'] to that list, it tries to pick up a
gsl that is installed in /usr/lib (but there are no headers there.)
Then I get the message:

python -c "import dot"
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "dot.py", line 7, in ?
    import _dot
ImportError: /usr/lib/libgsl.so.0: undefined symbol: cblas_dsdot

So it finds the gsl symbols in the system library (not mine), probably
combined with my headers, and then has a different error.

Giovanni

From giovanni.samaey at cs.kuleuven.be Tue Nov 4 06:01:57 2008
From: giovanni.samaey at cs.kuleuven.be (Giovanni Samaey)
Date: Tue, 4 Nov 2008 12:01:57 +0100
Subject: [Numpy-discussion] extension module with swig
In-Reply-To: <5D56FECF-2E36-46B5-94FF-C111490490D9@cs.kuleuven.be>
References: <1E09D27C-C79E-4517-8E64-4B04516F4484@cs.kuleuven.be>
	<5D56FECF-2E36-46B5-94FF-C111490490D9@cs.kuleuven.be>
Message-ID: <8B25772E-C0DB-4619-B16C-62F379C53C07@cs.kuleuven.be>

And, additionally, setting the environment variable LD_LIBRARY_PATH to
start with /data/home/u0038151/lib instead of ending with it, it picks
up my own gsl, and gives the error message

0 u0038151 at lo-03-02 dot2 $ python -c "import dot"
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "dot.py", line 7, in ?
    import _dot
ImportError: /data/home/u0038151/lib/libgsl.so.0: undefined symbol: cblas_ctrmv

Again something different...

Giovanni

On 04 Nov 2008, at 11:46, Giovanni Samaey wrote:
> Hi Matthieu,
>
> thank you for your prompt reply.
>
> On 04 Nov 2008, at 11:24, Matthieu Brucher wrote:
>
>>> # dot extension module
>>> _test = Extension("_test",
>>>                   ["test_wrap.c",
>>>                    "test.c"],
>>>                   include_dirs = [numpy_include,'/data/home/u0038151/include'],
>>>                   library_dirs = ['/data/home/u0038151/lib']
>>>                   )
>
> This is where I specify the directory where my header files are, as
> well as the directory of the library. If I add, following your
> suggestion, libraries = ['gsl'] to that list, it tries to pick up a
> gsl that is installed in /usr/lib (but there are no headers there.)
> Then I get the message:
>
> python -c "import dot"
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
>   File "dot.py", line 7, in ?
>     import _dot
> ImportError: /usr/lib/libgsl.so.0: undefined symbol: cblas_dsdot
>
> So it finds the gsl symbols in the system library (not mine), probably
> combined with my headers, and then has a different error.
>
> Giovanni
From matthieu.brucher at gmail.com Tue Nov 4 06:09:55 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Tue, 4 Nov 2008 12:09:55 +0100
Subject: [Numpy-discussion] extension module with swig
In-Reply-To: <8B25772E-C0DB-4619-B16C-62F379C53C07@cs.kuleuven.be>
References: <1E09D27C-C79E-4517-8E64-4B04516F4484@cs.kuleuven.be>
	<5D56FECF-2E36-46B5-94FF-C111490490D9@cs.kuleuven.be>
	<8B25772E-C0DB-4619-B16C-62F379C53C07@cs.kuleuven.be>
Message-ID:

The issue with the LD_LIBRARY_PATH would come up in any case. You have
to put your gsl library folder before the system one if you want your
gsl library to be used.

For the cblas issue, it seems from Google that you have to link against
a CBLAS library as well to use the GSL (for instance blas or atlas
should be enough).

Matthieu

2008/11/4 Giovanni Samaey:
> And, additionally, setting the environment variable LD_LIBRARY_PATH to
> start with /data/home/u0038151/lib instead of ending with it, it picks
> up my own gsl, and gives the error message
>
> 0 u0038151 at lo-03-02 dot2 $ python -c "import dot"
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
>   File "dot.py", line 7, in ?
>     import _dot
> ImportError: /data/home/u0038151/lib/libgsl.so.0: undefined symbol: cblas_ctrmv
>
> Again something different...
>
> Giovanni
>
> On 04 Nov 2008, at 11:46, Giovanni Samaey wrote:
>> Hi Matthieu,
>>
>> thank you for your prompt reply.
>>
>> On 04 Nov 2008, at 11:24, Matthieu Brucher wrote:
>>
>>>> # dot extension module
>>>> _test = Extension("_test",
>>>>                   ["test_wrap.c",
>>>>                    "test.c"],
>>>>                   include_dirs = [numpy_include,'/data/home/u0038151/include'],
>>>>                   library_dirs = ['/data/home/u0038151/lib']
>>>>                   )
>>
>> This is where I specify the directory where my header files are, as
>> well as the directory of the library. If I add, following your
>> suggestion, libraries = ['gsl'] to that list, it tries to pick up a
>> gsl that is installed in /usr/lib (but there are no headers there.)
>> Then I get the message:
>>
>> python -c "import dot"
>> Traceback (most recent call last):
>>   File "<stdin>", line 1, in ?
>>   File "dot.py", line 7, in ?
>>     import _dot
>> ImportError: /usr/lib/libgsl.so.0: undefined symbol: cblas_dsdot
>>
>> So it finds the gsl symbols in the system library (not mine), probably
>> combined with my headers, and then has a different error.
>>
>> Giovanni

--
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher

From sinclaird at ukzn.ac.za Tue Nov 4 05:12:26 2008
From: sinclaird at ukzn.ac.za (Scott Sinclair)
Date: Tue, 04 Nov 2008 12:12:26 +0200
Subject: [Numpy-discussion] atlas not found, why?
Message-ID: <49103C2A0200009F00037F4D@dbnsmtp.ukzn.ac.za>

>>> "T J" 11/04/08 12:59 AM wrote:
> On Mon, Nov 3, 2008 at 10:46 AM, T J wrote:
>>
>> Since these are all in the standard locations, I am building without a
>> site.cfg. Here is the beginning info:
>
> Apparently, this is not enough. Only if I also set the ATLAS
> environment variable am I able to get this working as expected.
>
> So can someone explain why I *must* define ATLAS? I tried a number of
> variations on site.cfg and could not get numpy to find atlas with any
> of them.

When I upgraded from Ubuntu 8.04 to 8.10 it broke my Numpy install
(mighty irritating). I haven't looked in detail, but it seems as if some
of the packages related to ATLAS have been renamed and the post-upgrade
"cleanup" removed the development packages as it considered them
obsolete.

I made sure that libatlas-base-dev and libatlas-sse2-dev (this machine
supports sse2) were installed and was able to rebuild and install Numpy
with no problems. I do not have an ATLAS environment variable.

Cheers,
Scott

Please find our Email Disclaimer here: http://www.ukzn.ac.za/disclaimer/
From giovanni.samaey at cs.kuleuven.be Tue Nov 4 07:41:56 2008
From: giovanni.samaey at cs.kuleuven.be (Giovanni Samaey)
Date: Tue, 4 Nov 2008 13:41:56 +0100
Subject: [Numpy-discussion] extension module with swig
In-Reply-To:
References: <1E09D27C-C79E-4517-8E64-4B04516F4484@cs.kuleuven.be>
	<5D56FECF-2E36-46B5-94FF-C111490490D9@cs.kuleuven.be>
	<8B25772E-C0DB-4619-B16C-62F379C53C07@cs.kuleuven.be>
Message-ID: <8897B0AA-4860-478F-A0A2-FB62974E2C59@cs.kuleuven.be>

Thanks, changing the library path and explicitly adding
libraries = ['gsl', 'gslcblas'] did the trick!

Thank you so much!

Giovanni

On 04 Nov 2008, at 12:09, Matthieu Brucher wrote:
> The issue with the LD_LIBRARY_PATH would come up in any case. You have
> to put your gsl library folder before the system one if you want your
> gsl library to be used.
>
> For the cblas issue, it seems from Google that you have to link against
> a CBLAS library as well to use the GSL (for instance blas or atlas
> should be enough).
>
> Matthieu
>
> 2008/11/4 Giovanni Samaey:
>> And, additionally, setting the environment variable LD_LIBRARY_PATH to
>> start with /data/home/u0038151/lib instead of ending with it, it picks
>> up my own gsl, and gives the error message
>>
>> 0 u0038151 at lo-03-02 dot2 $ python -c "import dot"
>> Traceback (most recent call last):
>>   File "<stdin>", line 1, in ?
>>   File "dot.py", line 7, in ?
>>     import _dot
>> ImportError: /data/home/u0038151/lib/libgsl.so.0: undefined symbol:
>> cblas_ctrmv
>>
>> Again something different...
>>
>> Giovanni
>>
>> On 04 Nov 2008, at 11:46, Giovanni Samaey wrote:
>>> Hi Matthieu,
>>>
>>> thank you for your prompt reply.
>>>
>>> On 04 Nov 2008, at 11:24, Matthieu Brucher wrote:
>>>
>>>>> # dot extension module
>>>>> _test = Extension("_test",
>>>>>                   ["test_wrap.c",
>>>>>                    "test.c"],
>>>>>                   include_dirs = [numpy_include,'/data/home/u0038151/include'],
>>>>>                   library_dirs = ['/data/home/u0038151/lib']
>>>>>                   )
>>>
>>> This is where I specify the directory where my header files are, as
>>> well as the directory of the library. If I add, following your
>>> suggestion, libraries = ['gsl'] to that list, it tries to pick up a
>>> gsl that is installed in /usr/lib (but there are no headers there.)
>>> Then I get the message:
>>>
>>> python -c "import dot"
>>> Traceback (most recent call last):
>>>   File "<stdin>", line 1, in ?
>>>   File "dot.py", line 7, in ?
>>>     import _dot
>>> ImportError: /usr/lib/libgsl.so.0: undefined symbol: cblas_dsdot
>>>
>>> So it finds the gsl symbols in the system library (not mine), probably
>>> combined with my headers, and then has a different error.
>>>
>>> Giovanni
>
> --
> Information System Engineer, Ph.D.
> Website: http://matthieu-brucher.developpez.com/
> Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
> LinkedIn: http://www.linkedin.com/in/matthieubrucher
From cournape at gmail.com Tue Nov 4 09:35:56 2008
From: cournape at gmail.com (David Cournapeau)
Date: Tue, 4 Nov 2008 23:35:56 +0900
Subject: [Numpy-discussion] atlas not found, why?
In-Reply-To: <49103C2A0200009F00037F4D@dbnsmtp.ukzn.ac.za>
References: <49103C2A0200009F00037F4D@dbnsmtp.ukzn.ac.za>
Message-ID: <5b8d13220811040635v6e7a0401i305381d71159c686@mail.gmail.com>

On Tue, Nov 4, 2008 at 7:12 PM, Scott Sinclair wrote:
> When I upgraded from Ubuntu 8.04 to 8.10 it broke my Numpy install
> (mighty irritating). I haven't looked in detail, but it seems as if
> some of the packages related to ATLAS have been renamed and the
> post-upgrade "cleanup" removed the development packages as it
> considered them obsolete.

Yes, it is because of the g77 -> gfortran transition. The atlas now
loaded by default is the gfortran-built one, which is what gets
installed by default if you had atlas installed before. As they
removed g77 at the same time, numpy picks up gfortran when you rebuild,
so rebuilding fixes it.

Fortunately, this won't happen again in the foreseeable future. It
means no major distribution uses g77 for its standard fortran ABI
anymore, except RHEL.

cheers,

David

From dfranci at seas.upenn.edu Tue Nov 4 12:07:34 2008
From: dfranci at seas.upenn.edu (Frank Lagor)
Date: Tue, 4 Nov 2008 12:07:34 -0500
Subject: [Numpy-discussion] library linking during numpy build
Message-ID: <9fddf64a0811040907m5fb24ddcucfa08ccf04af393b@mail.gmail.com>

Hi Everyone,

I previously had a problem with installing numpy on a cluster of mine,
but it seemed to be resolved. The installation was successful and the
numpy code ran well. Unfortunately, this was not the case when I tried
to run parallel code. The other processors have difficulty finding a
particular library file, libg2c.so.0. I get the following error:

ImportError: libg2c.so.0: cannot open shared object file: No such file
or directory

I have played around a lot with the LD_LIBRARY_PATH and my .bashrc
file, but to no avail. Lisandro D. graciously recommended to me that I
should try to hard link the library to the gcc compilers used to build
numpy. However, I am not exactly sure how to do this. Is there some
option I can put in the setup.py file that will do it?

Thanks in advance,
Frank
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From david at ar.media.kyoto-u.ac.jp Tue Nov 4 12:07:28 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 05 Nov 2008 02:07:28 +0900
Subject: [Numpy-discussion] library linking during numpy build
In-Reply-To: <9fddf64a0811040907m5fb24ddcucfa08ccf04af393b@mail.gmail.com>
References: <9fddf64a0811040907m5fb24ddcucfa08ccf04af393b@mail.gmail.com>
Message-ID: <49108150.7080301@ar.media.kyoto-u.ac.jp>

Frank Lagor wrote:
> Hi Everyone,
>
> I previously had a problem with installing numpy on a cluster of mine,
> but it seemed to be resolved. The installation was successful and the
> numpy code ran well. Unfortunately, this was not the case when I
> tried to run parallel code. The other processors have difficulty
> finding a particular library file, libg2c.so.0.
> I get the following error:
> ImportError: libg2c.so.0: cannot open shared object file: No such file
> or directory

libg2c.so is part of the g77 compiler, normally. You need to install it
on every machine if you build numpy/scipy with g77 (g2c is part of the
fortran runtime).

cheers,

David
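A tiny per-node check (a sketch): run this under the cluster's remote
shell or MPI launcher on each node to see which hosts can actually
resolve the g77 runtime.

    import socket
    from ctypes.util import find_library

    host = socket.gethostname()
    lib = find_library('g2c')
    print("%s: %s" % (host, lib))   # None means libg2c is missing on this node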
From Chris.Barker at noaa.gov Tue Nov 4 12:27:10 2008
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Tue, 04 Nov 2008 09:27:10 -0800
Subject: [Numpy-discussion] extension module with swig
In-Reply-To: <8897B0AA-4860-478F-A0A2-FB62974E2C59@cs.kuleuven.be>
References: <1E09D27C-C79E-4517-8E64-4B04516F4484@cs.kuleuven.be>
	<5D56FECF-2E36-46B5-94FF-C111490490D9@cs.kuleuven.be>
	<8B25772E-C0DB-4619-B16C-62F379C53C07@cs.kuleuven.be>
	<8897B0AA-4860-478F-A0A2-FB62974E2C59@cs.kuleuven.be>
Message-ID: <491085EE.8020409@noaa.gov>

Giovanni Samaey wrote:
> changing the library path and explicitly adding libraries = ['gsl',
> 'gslcblas'] did the trick!

one other suggestion: if you want this to run on any other machine, you
might want to build gsl as a static lib, so it will get linked directly
into your extension, and your extension will no longer rely on having
it installed.

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

From dfranci at seas.upenn.edu Tue Nov 4 13:08:01 2008
From: dfranci at seas.upenn.edu (Frank Lagor)
Date: Tue, 4 Nov 2008 13:08:01 -0500
Subject: [Numpy-discussion] library linking during numpy build
In-Reply-To: <49108150.7080301@ar.media.kyoto-u.ac.jp>
References: <9fddf64a0811040907m5fb24ddcucfa08ccf04af393b@mail.gmail.com>
	<49108150.7080301@ar.media.kyoto-u.ac.jp>
Message-ID: <9fddf64a0811041008s6a37bc61u4eb4b78109554a61@mail.gmail.com>

On Tue, Nov 4, 2008 at 12:07 PM, David Cournapeau
<david at ar.media.kyoto-u.ac.jp> wrote:
> Frank Lagor wrote:
>> I previously had a problem with installing numpy on a cluster of mine,
>> but it seemed to be resolved. The installation was successful and the
>> numpy code ran well. Unfortunately, this was not the case when I
>> tried to run parallel code. The other processors have difficulty
>> finding a particular library file, libg2c.so.0.
>> I get the following error:
>> ImportError: libg2c.so.0: cannot open shared object file: No such file
>> or directory
>
> libg2c.so is part of the g77 compiler, normally. You need to install it
> on every machine if you build numpy/scipy with g77 (g2c is part of the
> fortran runtime).

Hi David,

You are correct -- g77 is not installed on every node. I ssh to each
node, run "which g77", and get no response. What is the easiest way to
install g77 on every node (WITHOUT ROOT PRIVILEGES)? If I install in my
home directory, will it propagate to every node? I have access to a
directory which is above my home directory, which I use for other
installs and is accessible to all nodes. Which do you recommend? And
finally, what env vars need to be set, etc., to ensure that g77 will be
visible to each node?

Thanks again,
Frank
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
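Chris's static-linking suggestion above can be sketched for the GSL case
by feeding the .a archives straight to the link step, so the extension
stops depending on a shared libgsl at run time (paths as in Giovanni's
setup; a sketch, not the thread's actual solution):

    from distutils.core import Extension

    _test = Extension("_test",
                      ["test_wrap.c", "test.c"],
                      include_dirs=['/data/home/u0038151/include'],
                      extra_objects=['/data/home/u0038151/lib/libgsl.a',
                                     '/data/home/u0038151/lib/libgslcblas.a'])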
From Chris.Barker at noaa.gov Tue Nov 4 16:20:06 2008
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Tue, 04 Nov 2008 13:20:06 -0800
Subject: [Numpy-discussion] passing a C array to embedded Python fromC code
In-Reply-To:
References: <7EFBEC7FA86C1141B59B59EEAEE3294FBC38AE@EMAIL2.exchange.electric.net>
Message-ID: <4910BC86.2090900@noaa.gov>

Matthieu Brucher wrote:
> I've translated it on my blog (http://matt.eifelle.com/) and published
> it this morning.

Thanks, that's a very helpful article!

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

From fperez.net at gmail.com Tue Nov 4 18:59:22 2008
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 4 Nov 2008 15:59:22 -0800
Subject: [Numpy-discussion] (Late) summary of PEP-225 discussion at Scipy
In-Reply-To: <291F12FC-FCFB-4152-9A6D-762878D8AF8F@cs.toronto.edu>
References: <9EE8FF7F-AE36-4B08-9C7C-D4827E3E1706@cs.toronto.edu>
	<2b1c8c4f0810270422y620b9472mc1d5c40cfa3e48a8@mail.gmail.com>
	<49060910.60903@hawaii.edu> <4908BA6F.90909@hawaii.edu>
	<3d375d730810291243u565cc464u39d3b1adb788af7@mail.gmail.com>
	<291F12FC-FCFB-4152-9A6D-762878D8AF8F@cs.toronto.edu>
Message-ID:

Hi everyone,

thanks for all the feedback. Last call on this one. If nobody objects to
the language that's written here:

https://cirl.berkeley.edu/fperez/static/numpy-pep225/

in a couple of days I'll toss this over to python-dev. At that point it
will be up to Guido and that team to decide what to do with the pep.

I encourage everyone with an interest in this topic to provide the
python-dev team with further feedback if they raise any questions (they
likely will). I don't expect them to come over to the numpy list to ask
us questions.

I will let people know when I send the info to python-dev so you can keep
an eye on that list (which can be browsed via gmane, so you don't even
need to subscribe).

Regards,

f

From jason-sage at creativetrax.com Tue Nov 4 19:54:17 2008
From: jason-sage at creativetrax.com (jason-sage at creativetrax.com)
Date: Tue, 04 Nov 2008 18:54:17 -0600
Subject: [Numpy-discussion] (Late) summary of PEP-225 discussion at Scipy
In-Reply-To:
References: <9EE8FF7F-AE36-4B08-9C7C-D4827E3E1706@cs.toronto.edu>
	<2b1c8c4f0810270422y620b9472mc1d5c40cfa3e48a8@mail.gmail.com>
	<49060910.60903@hawaii.edu> <4908BA6F.90909@hawaii.edu>
	<3d375d730810291243u565cc464u39d3b1adb788af7@mail.gmail.com>
	<291F12FC-FCFB-4152-9A6D-762878D8AF8F@cs.toronto.edu>
Message-ID: <4910EEB9.5000300@creativetrax.com>

Fernando Perez wrote:
> Hi everyone,
>
> thanks for all the feedback. Last call on this one. If nobody
> objects to the language that's written here:
>
> https://cirl.berkeley.edu/fperez/static/numpy-pep225/

one small typo: in the "Why just go one step?" section, you have the phrase:

"and it is thus wortwhile solving the general problem"

wortwhile should be worthwhile.

I've posted a message about this to sage-devel, as they are likely
interested in this topic as well.
Thanks, Jason From fperez.net at gmail.com Tue Nov 4 21:21:05 2008 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 4 Nov 2008 18:21:05 -0800 Subject: [Numpy-discussion] (Late) summary of PEP-225 discussion at Scipy In-Reply-To: <4910EEB9.5000300@creativetrax.com> References: <49060910.60903@hawaii.edu> <4908BA6F.90909@hawaii.edu> <3d375d730810291243u565cc464u39d3b1adb788af7@mail.gmail.com> <291F12FC-FCFB-4152-9A6D-762878D8AF8F@cs.toronto.edu> <4910EEB9.5000300@creativetrax.com> Message-ID: On Tue, Nov 4, 2008 at 4:54 PM, wrote: > one small typo: in the "Why just go one step?" section, you have the phrase: > > "and it is thus wortwhile solving the general problem" > > wortwhile should be worthwhile. > Thanks, fixed. > I've posted a message about this to sage-devel, as they are likely > interested in this topic as well. Thanks. I did mention it a while ago on the sage lists, and asked them at scipy'08. William said he wasn't terribly interested one way or another. It's worth keeping in mind that Sage already preparses the user input and has thus effectively the freedom to define its own syntax. But others may have different opinions from William on this, and I'll be glad to include any further input from the Sage team. Regards, f From charlesr.harris at gmail.com Wed Nov 5 00:26:38 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 4 Nov 2008 22:26:38 -0700 Subject: [Numpy-discussion] New ufuncs Message-ID: Hi All, I'm thinking of adding some new ufuncs. Some possibilities are - expadd(a,b) = exp(a) + exp(b) -- For numbers stored as logs: - absdiff(a,b) = abs(a - b) -- Useful for forming norms - absmax(a,b) = max(abs(a), abs(b)) - absadd(a,b) = abs(a) + abs(b) -- Useful for L_1 norm and inequalities? I would really like a powadd = abs(a)**p + abs(b)**p, but I can't think of an easy way to pass a variable p that is compatible with the way ufuncs work without going to three variables, something that might not work with the reduce functions. Along these lines I also think it is time to review generalized ufuncs to see what we can do with them. Thoughts? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From aarchiba at physics.mcgill.ca Wed Nov 5 00:37:23 2008 From: aarchiba at physics.mcgill.ca (Anne Archibald) Date: Wed, 5 Nov 2008 00:37:23 -0500 Subject: [Numpy-discussion] New ufuncs In-Reply-To: References: Message-ID: 2008/11/5 Charles R Harris : > Hi All, > > I'm thinking of adding some new ufuncs. Some possibilities are > > expadd(a,b) = exp(a) + exp(b) -- For numbers stored as logs: Surely this should be log(exp(a)+exp(b))? That would be extremely useful, yes. > absdiff(a,b) = abs(a - b) -- Useful for forming norms > absmax(a,b) = max(abs(a), abs(b)) > absadd(a,b) = abs(a) + abs(b) -- Useful for L_1 norm and inequalities? These I find less exciting, since they can be written easily in terms of existing ufuncs. (The expadd can't without an if statement unless you want range errors.) There is some small gain in having fewer temporaries, but as it stands now each ufunc lives in a byzantine nest of code and is hard to scrutinize. > I would really like a powadd = abs(a)**p + abs(b)**p, but I can't think of > an easy way to pass a variable p that is compatible with the way ufuncs work > without going to three variables, something that might not work with the > reduce functions. Along these lines I also think it is time to review > generalized ufuncs to see what we can do with them. Thoughts? 
It's worth checking what reduce does on ternary ufuncs. It's not clear what it *should* do. Anne From charlesr.harris at gmail.com Wed Nov 5 01:03:38 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 4 Nov 2008 23:03:38 -0700 Subject: [Numpy-discussion] New ufuncs In-Reply-To: References: Message-ID: On Tue, Nov 4, 2008 at 10:37 PM, Anne Archibald wrote: > 2008/11/5 Charles R Harris : > > Hi All, > > > > I'm thinking of adding some new ufuncs. Some possibilities are > > > > expadd(a,b) = exp(a) + exp(b) -- For numbers stored as logs: > > Surely this should be log(exp(a)+exp(b))? That would be extremely useful, > yes. > Yes, but the common case seems to be log(sum_i(exp(a_i))), which would be inefficient if implemented with addexp.reduce. > > > absdiff(a,b) = abs(a - b) -- Useful for forming norms > > absmax(a,b) = max(abs(a), abs(b)) > > absadd(a,b) = abs(a) + abs(b) -- Useful for L_1 norm and inequalities? > > These I find less exciting, since they can be written easily in terms > of existing ufuncs. (The expadd can't without an if statement unless > you want range errors.) There is some small gain in having fewer > temporaries, but as it stands now each ufunc lives in a byzantine nest > of code and is hard to scrutinize. > The inner loops are very simple, take a look at the current umathmodule. It's all the cruft up top that needs to be clarified. But I was mostly thinking of temporaries and the speed up for small arrays of not having as much call overhead. > > > I would really like a powadd = abs(a)**p + abs(b)**p, but I can't think > of > > an easy way to pass a variable p that is compatible with the way ufuncs > work > > without going to three variables, something that might not work with the > > reduce functions. Along these lines I also think it is time to review > > generalized ufuncs to see what we can do with them. Thoughts? > How about an abspower(a,p) = abs(a)**p ? That might be more useful. > > It's worth checking what reduce does on ternary ufuncs. It's not clear > what it *should* do. > I suppose a three variable recursion would be the most natural extension, but probably not that useful. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjhnson at gmail.com Wed Nov 5 01:05:10 2008 From: tjhnson at gmail.com (T J) Date: Tue, 4 Nov 2008 22:05:10 -0800 Subject: [Numpy-discussion] New ufuncs In-Reply-To: References: Message-ID: On Tue, Nov 4, 2008 at 9:37 PM, Anne Archibald wrote: > 2008/11/5 Charles R Harris : >> Hi All, >> >> I'm thinking of adding some new ufuncs. Some possibilities are >> >> expadd(a,b) = exp(a) + exp(b) -- For numbers stored as logs: > > Surely this should be log(exp(a)+exp(b))? That would be extremely useful, yes. > +1 But shouldn't it be called 'logadd', for adding values which are stored as logs? http://www.lri.fr/~pierres/donn%E9es/save/these/torch/docs/manual/logAdd.html I would also really enjoy a logdot function, to be used when working with arrays whose elements are log values. From charlesr.harris at gmail.com Wed Nov 5 01:29:31 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 4 Nov 2008 23:29:31 -0700 Subject: [Numpy-discussion] New ufuncs In-Reply-To: References: Message-ID: On Tue, Nov 4, 2008 at 11:05 PM, T J wrote: > On Tue, Nov 4, 2008 at 9:37 PM, Anne Archibald > wrote: > > 2008/11/5 Charles R Harris : > >> Hi All, > >> > >> I'm thinking of adding some new ufuncs. 
Some possibilities are > >> > >> expadd(a,b) = exp(a) + exp(b) -- For numbers stored as logs: > > > > Surely this should be log(exp(a)+exp(b))? That would be extremely useful, > yes. > > > > +1 > > But shouldn't it be called 'logadd', for adding values which are stored as > logs? > Hmm... but I'm thinking one has to be clever here because the main reason I heard for using logs was that normal floating point numbers had insufficient range. So maybe something like logadd(a,b) = a + log(1 + exp(b - a)) where a > b ? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From aarchiba at physics.mcgill.ca Wed Nov 5 01:41:56 2008 From: aarchiba at physics.mcgill.ca (Anne Archibald) Date: Wed, 5 Nov 2008 01:41:56 -0500 Subject: [Numpy-discussion] New ufuncs In-Reply-To: References: Message-ID: On 05/11/2008, Charles R Harris wrote: > > > On Tue, Nov 4, 2008 at 11:05 PM, T J wrote: > > On Tue, Nov 4, 2008 at 9:37 PM, Anne Archibald > > > > wrote: > > > > > 2008/11/5 Charles R Harris : > > >> Hi All, > > >> > > >> I'm thinking of adding some new ufuncs. Some possibilities are > > >> > > >> expadd(a,b) = exp(a) + exp(b) -- For numbers stored as logs: > > > > > > Surely this should be log(exp(a)+exp(b))? That would be extremely > useful, yes. > > > > > > > +1 > > > > But shouldn't it be called 'logadd', for adding values which are stored as > logs? > > > > Hmm... but I'm thinking one has to be clever here because the main reason I > heard for using logs was that normal floating point numbers had insufficient > range. So maybe something like > > logadd(a,b) = a + log(1 + exp(b - a)) > > where a > b ? That's the usual way to do it, yes. I'd use log1p(exp(b-a)) for a little extra accuracy, though it probably doesn't matter. And yes, using logadd.reduce() is not the most efficient way to get a logsum(); no reason it can't be a separate function. As T J says, a logdot() would come in handy too. A python implementation is a decent first pass, but logdot() in particular would benefit from a C implementation. Anne From david at ar.media.kyoto-u.ac.jp Wed Nov 5 01:48:42 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 05 Nov 2008 15:48:42 +0900 Subject: [Numpy-discussion] New ufuncs In-Reply-To: References: Message-ID: <491141CA.3060901@ar.media.kyoto-u.ac.jp> Charles R Harris wrote: > > Hmm... but I'm thinking one has to be clever here because the main > reason I heard for using logs was that normal floating point numbers > had insufficient range. So maybe something like > > logadd(a,b) = a + log(1 + exp(b - a)) > > where a > b ? > Yes, that's the idea. AFAIK, that's generally known as logsumexp algorithm, at least in the machine learning community, I opened a task ticket on it, but I have not done any work on it: http://projects.scipy.org/scipy/numpy/ticket/765 cheers, David From charlesr.harris at gmail.com Wed Nov 5 02:13:29 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 5 Nov 2008 00:13:29 -0700 Subject: [Numpy-discussion] New ufuncs In-Reply-To: References: Message-ID: On Tue, Nov 4, 2008 at 11:41 PM, Anne Archibald wrote: > On 05/11/2008, Charles R Harris wrote: > > > > > > On Tue, Nov 4, 2008 at 11:05 PM, T J wrote: > > > On Tue, Nov 4, 2008 at 9:37 PM, Anne Archibald > > > > > > wrote: > > > > > > > 2008/11/5 Charles R Harris : > > > >> Hi All, > > > >> > > > >> I'm thinking of adding some new ufuncs. 
Some possibilities are > > > >> > > > >> expadd(a,b) = exp(a) + exp(b) -- For numbers stored as logs: > > > > > > > > Surely this should be log(exp(a)+exp(b))? That would be extremely > > useful, yes. > > > > > > > > > > +1 > > > > > > But shouldn't it be called 'logadd', for adding values which are stored > as > > logs? > > > > > > > Hmm... but I'm thinking one has to be clever here because the main reason > I > > heard for using logs was that normal floating point numbers had > insufficient > > range. So maybe something like > > > > logadd(a,b) = a + log(1 + exp(b - a)) > > > > where a > b ? > > That's the usual way to do it, yes. I'd use log1p(exp(b-a)) for a > little extra accuracy, though it probably doesn't matter. And yes, > using logadd.reduce() is not the most efficient way to get a logsum(); But probably the best bet here. So, should I add this function? T J's link also mentioned a logsub, which might be more problematic because taking logs of negatives isn't going to work... Although that shouldn't happen if the probability logic is right and roundoff error is small. > > no reason it can't be a separate function. As T J says, a logdot() > would come in handy too. A python implementation is a decent first > pass, but logdot() in particular would benefit from a C > implementation. > Are these likely to be big arrays? It shouldn't be too hard to make a logdot once a logadd function is out there. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From giovanni.samaey at cs.kuleuven.be Wed Nov 5 05:30:43 2008 From: giovanni.samaey at cs.kuleuven.be (Giovanni Samaey) Date: Wed, 5 Nov 2008 11:30:43 +0100 Subject: [Numpy-discussion] random number generation in python compared to gsl Message-ID: <30E2F219-B27A-4872-8BC7-6194537A6F8C@cs.kuleuven.be> Hi all, I have a question concerning the Mersenne Twister random number generation in numpy: when I seed it with 0, I get a different sequence of numbers in numpy, compared to GSL. In numpy: r = numpy.Random.RandomState(seed=0) r.uniform(size=5) ----> array([ 0.5488135 , 0.71518937, 0.60276338, 0.54488318, 0.4236548 ]) whereas in GSL the first numbers are 0.99974 0.16291 0.2826 0.94720 0.23166 Matlab gives the same result as numpy... I have translated some python code to c, and would like to debug it -- therefore, I would like to have exactly the same set of random numbers... How can I provoke this ? Best. Giovanni -------------- next part -------------- An HTML attachment was scrubbed... URL: From haase at msg.ucsf.edu Wed Nov 5 06:28:05 2008 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 5 Nov 2008 12:28:05 +0100 Subject: [Numpy-discussion] random number generation in python compared to gsl In-Reply-To: <30E2F219-B27A-4872-8BC7-6194537A6F8C@cs.kuleuven.be> References: <30E2F219-B27A-4872-8BC7-6194537A6F8C@cs.kuleuven.be> Message-ID: On Wed, Nov 5, 2008 at 11:30 AM, Giovanni Samaey wrote: > Hi all, > I have a question concerning the Mersenne Twister random number generation > in numpy: when I seed it with 0, I get a different sequence of numbers in > numpy, compared to GSL. > In numpy: > r = numpy.Random.RandomState(seed=0) > r.uniform(size=5) ----> array([ 0.5488135 , 0.71518937, 0.60276338, > 0.54488318, 0.4236548 ]) > whereas in GSL the first numbers are > 0.99974 0.16291 0.2826 0.94720 0.23166 > Matlab gives the same result as numpy... > I have translated some python code to c, and would like to debug it -- > therefore, I would like to have exactly the same set of random numbers... 
> How can I provoke this ?
> Best.
> Giovanni

Hi,
how about other seed values ? I thought seed=0 is (often) used to mean a "random", i.e. current time or alike, seed value ... !?

-Sebastian Haase

From david at ar.media.kyoto-u.ac.jp Wed Nov 5 06:27:53 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 05 Nov 2008 20:27:53 +0900
Subject: Re: [Numpy-discussion] random number generation in python compared to gsl
In-Reply-To:
References: <30E2F219-B27A-4872-8BC7-6194537A6F8C@cs.kuleuven.be>
Message-ID: <49118339.4090503@ar.media.kyoto-u.ac.jp>

Sebastian Haase wrote:
>
> Hi,
> how about other seed values ? I thought seed=0 is (often) used to
> mean a "random", i.e. current time or alike, seed value ... !?
>

Not really. A fixed seed means you will always get the exact same series of numbers. The seed is the initial condition of your random generator, and a random generator is a totally predictable, deterministic process. The (pseudo) randomness is a consequence of having a random, unknown seed, causing the series to appear random to most statistical tests. The seed itself can come from the time, keystrokes, etc...

cheers,

David

From ckkart at hoc.net Wed Nov 5 08:45:45 2008
From: ckkart at hoc.net (Christian K.)
Date: Wed, 05 Nov 2008 14:45:45 +0100
Subject: [Numpy-discussion] nina.riegel at osram-os.com
Message-ID:

Hello Nina, I am looking after my sick son at the moment, but I did not want to miss reserving plates: from January up to and including June 2009 I would like 2 plates per month.

regards, Christian

From giovanni.samaey at cs.kuleuven.be Wed Nov 5 09:05:19 2008
From: giovanni.samaey at cs.kuleuven.be (Giovanni Samaey)
Date: Wed, 5 Nov 2008 15:05:19 +0100
Subject: Re: [Numpy-discussion] random number generation in python compared to gsl
In-Reply-To:
References: <30E2F219-B27A-4872-8BC7-6194537A6F8C@cs.kuleuven.be>
Message-ID: <6D71EFCC-79B0-435B-B731-611C0D6A5EE1@cs.kuleuven.be>

>
> Hi,
> how about other seed values ? I thought seed=0 is (often) used to
> mean a "random", i.e. current time or alike, seed value ... !?

Not in this case: I always get the same sequence with seed=0 (different for both implementations, but the same each time I run it.) I got around it by installing pygsl and taking random numbers from there instead of from numpy.

But I still find it strange to get two different sequences from two implementations that claim to be the same algorithm...

Giovanni

>
>
> -Sebastian Haase
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion

From matthieu.brucher at gmail.com Wed Nov 5 09:19:09 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Wed, 5 Nov 2008 15:19:09 +0100
Subject: Re: [Numpy-discussion] random number generation in python compared to gsl
In-Reply-To: <6D71EFCC-79B0-435B-B731-611C0D6A5EE1@cs.kuleuven.be>
References: <30E2F219-B27A-4872-8BC7-6194537A6F8C@cs.kuleuven.be> <6D71EFCC-79B0-435B-B731-611C0D6A5EE1@cs.kuleuven.be>
Message-ID:

> Not in this case: I always get the same sequence with seed=0
> (different for both implementations, but the same each time I run it.)
> I got around it by installing pygsl and taking random numbers from
> there instead of from numpy.
>
> But I still find it strange to get two different sequences from two
> implementations that claim to be the same algorithm...
>
> Giovanni

Hi,

I didn't check which MT was used, because there are several different ones.
MT is based on one Mersenne prime, but for instance the Boost library wraps two generators with two different values. Perhaps the same occurs here. Other explanations include: - MT must use an array of starting values, perhaps the first is 0, but one is 0, 1, 2, 3, ... and the other uses 0, 0, 0, ... - MT generates integers, perhaps there is a difference between the size of the generated integers (not likely) or a difference in the transformation into a float ? Matthieu -- Information System Engineer, Ph.D. Website: http://matthieu-brucher.developpez.com/ Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn: http://www.linkedin.com/in/matthieubrucher From ggellner at uoguelph.ca Wed Nov 5 10:09:08 2008 From: ggellner at uoguelph.ca (Gabriel Gellner) Date: Wed, 5 Nov 2008 10:09:08 -0500 Subject: [Numpy-discussion] random number generation in python compared to gsl In-Reply-To: References: <30E2F219-B27A-4872-8BC7-6194537A6F8C@cs.kuleuven.be> <6D71EFCC-79B0-435B-B731-611C0D6A5EE1@cs.kuleuven.be> Message-ID: <20081105150908.GA8596@encolpuis> On Wed, Nov 05, 2008 at 03:19:09PM +0100, Matthieu Brucher wrote: > > Not in this case: I always get the same sequence with seed=0 > > (different for both implementation, but the same each time I run it.) > > I got around it by installing pygsl and taking random numbers from > > there instead of from numpy. > > > > But I still find it strange to get two different sequences from two > > implementation that claim to be the same algorithm... > > > > Giovanni > > Hi, > > I didn't check which MT was used, because there are several different. > MT is based on one Mersenne prime, but for instance the Boost library > wraps two generators with two different values. Perhaps the same > occurs here. > Other explanations include: > - MT must use an array of starting values, perhaps the first is 0, but > one is 0, 1, 2, 3, ... and the other uses 0, 0, 0, ... This is the key, there is a lot of variation in the seeding algorithm that is used in mersenne twister algorithms, usually they use some kind of linear congruential algorithm to get started. I think that GSL uses the published mersenne twister seed algorithm, what you can do is find out how gsl does this and just set the full seed array in numpy.random.RandomState yourself. I have done this for comparison with R's mersenne twister algorithm using my own MT implementation and it works like a charm. Gabriel From aisaac at american.edu Wed Nov 5 10:13:37 2008 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 05 Nov 2008 10:13:37 -0500 Subject: [Numpy-discussion] New ufuncs In-Reply-To: <491141CA.3060901@ar.media.kyoto-u.ac.jp> References: <491141CA.3060901@ar.media.kyoto-u.ac.jp> Message-ID: <4911B821.4030409@american.edu> > Charles R Harris wrote: >> Hmm... but I'm thinking one has to be clever here because the main >> reason I heard for using logs was that normal floating point numbers >> had insufficient range. So maybe something like >> >> logadd(a,b) = a + log(1 + exp(b - a)) >> >> where a > b ? On 11/5/2008 1:48 AM David Cournapeau apparently wrote: > Yes, that's the idea. AFAIK, that's generally known as logsumexp > algorithm, at least in the machine learning community, I opened a task > ticket on it, but I have not done any work on it: > > http://projects.scipy.org/scipy/numpy/ticket/765 Of possible relevance (BSD license): http://code.google.com/p/pyspkrec/source/browse/pyspkrec/gmm.py?r=109 (Search on logsumexp.) 
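To make the recipe discussed in this thread concrete, here is a minimal pure-NumPy sketch of the three operations under discussion; these are illustrative reference implementations, not the ufuncs actually being proposed:

    import numpy as np

    def logaddexp(a, b):
        # log(exp(a) + exp(b)) without overflow: factor out the larger
        # term, and use log1p for accuracy as suggested above
        big = np.maximum(a, b)
        small = np.minimum(a, b)
        return big + np.log1p(np.exp(small - big))

    def logsumexp(x):
        # log(sum(exp(x))) along axis 0, shifting by the maximum so the
        # exponentials cannot overflow
        off = x.max(axis=0)
        return off + np.log(np.sum(np.exp(x - off), axis=0))

    def logdotexp(A, B):
        # log(dot(exp(A), exp(B))) for 2-d arrays, using the same shift;
        # a simple, memory-hungry reference version, not a fast one
        S = A[:, :, np.newaxis] + B[np.newaxis, :, :]   # (m, n, p)
        off = S.max(axis=1)                             # (m, p)
        return off + np.log(np.exp(S - off[:, np.newaxis, :]).sum(axis=1))

For example, logaddexp(1000.0, 1000.0) returns roughly 1000.6931 (i.e. 1000 + log(2)), whereas evaluating log(exp(1000) + exp(1000)) directly overflows in double precision.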
Alan Isaac From robert.kern at gmail.com Wed Nov 5 13:01:03 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 5 Nov 2008 12:01:03 -0600 Subject: [Numpy-discussion] random number generation in python compared to gsl In-Reply-To: <6D71EFCC-79B0-435B-B731-611C0D6A5EE1@cs.kuleuven.be> References: <30E2F219-B27A-4872-8BC7-6194537A6F8C@cs.kuleuven.be> <6D71EFCC-79B0-435B-B731-611C0D6A5EE1@cs.kuleuven.be> Message-ID: <3d375d730811051001r4fc45a1et49996f56ab20bdf4@mail.gmail.com> On Wed, Nov 5, 2008 at 08:05, Giovanni Samaey wrote: >> >> Hi, >> how about other seed values ? I thought seed=0, is (often) used to >> mean a "random", i.e. current time or alike, seed value ... !? > > Not in this case: I always get the same sequence with seed=0 > (different for both implementation, but the same each time I run it.) > I got around it by installing pygsl and taking random numbers from > there instead of from numpy. > > But I still find it strange to get two different sequences from two > implementation that claim to be the same algorithm... GSL has this bit of code: if (s == 0) s = 4357; /* the default seed is 4357 */ We don't. Otherwise, I believe the two seeding algorithms are identical. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ndbecker2 at gmail.com Wed Nov 5 14:01:11 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Wed, 05 Nov 2008 14:01:11 -0500 Subject: [Numpy-discussion] New ufuncs References: Message-ID: Anne Archibald wrote: > 2008/11/5 Charles R Harris : >> Hi All, >> >> I'm thinking of adding some new ufuncs. Some possibilities are >> >> expadd(a,b) = exp(a) + exp(b) -- For numbers stored as logs: > > Surely this should be log(exp(a)+exp(b))? That would be extremely useful, > yes. > I could probably use this also. What about log (exp(a)+exp(b)+exp(c)...)? From charlesr.harris at gmail.com Wed Nov 5 15:00:08 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 5 Nov 2008 13:00:08 -0700 Subject: [Numpy-discussion] New ufuncs In-Reply-To: References: Message-ID: On Wed, Nov 5, 2008 at 12:01 PM, Neal Becker wrote: > Anne Archibald wrote: > > > 2008/11/5 Charles R Harris : > >> Hi All, > >> > >> I'm thinking of adding some new ufuncs. Some possibilities are > >> > >> expadd(a,b) = exp(a) + exp(b) -- For numbers stored as logs: > > > > Surely this should be log(exp(a)+exp(b))? That would be extremely useful, > > yes. > > > I could probably use this also. What about log (exp(a)+exp(b)+exp(c)...)? > I added the ufunc logsumexp. The extended add should be done with recursive adds to preserve precision, so: In [3]: logsumexp.reduce(ones(10)) Out[3]: 3.3025850929940459 In [5]: logsumexp.reduce(eye(3), axis=0) Out[5]: array([ 1.55144471, 1.55144471, 1.55144471]) It looks like this is a good way to compute L_p norms for large p, i.e., exp(logsumexp.reduce(log(abs(x))*p)/p). Adding a logabs ufunc would be helpful here. Hmm.... I wonder if the base function should be renamed logaddexp, then logsumexp would apply to the reduce method. Thoughts? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjhnson at gmail.com Wed Nov 5 15:45:58 2008 From: tjhnson at gmail.com (T J) Date: Wed, 5 Nov 2008 12:45:58 -0800 Subject: [Numpy-discussion] New ufuncs In-Reply-To: References: Message-ID: On Wed, Nov 5, 2008 at 12:00 PM, Charles R Harris wrote: > Hmm.... 
I wonder if the base function should be renamed logaddexp, then
> logsumexp would apply to the reduce method. Thoughts?
>

As David mentioned, logsumexp is probably the traditional name, but as the earlier link shows, it also goes by logadd. Given the distinction between add (a ufunc) and sum (something done over an axis) within numpy, it seems that logadd or logaddexp is probably a more fitting name. So long as it is documented, I doubt it matters much though...

From ckkart at hoc.net Wed Nov 5 15:55:28 2008
From: ckkart at hoc.net (Christian K.)
Date: Wed, 05 Nov 2008 21:55:28 +0100
Subject: Re: [Numpy-discussion] nina.riegel at osram-os.com
In-Reply-To:
References:
Message-ID:

darn! How could I be that stupid... Please ignore the last message.

Christian

From stefan at sun.ac.za Wed Nov 5 16:41:37 2008
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Wed, 5 Nov 2008 23:41:37 +0200
Subject: Re: [Numpy-discussion] New ufuncs
In-Reply-To:
References:
Message-ID: <9457e7c80811051341r5ba8b327id881841f196b67ac@mail.gmail.com>

2008/11/5 T J :
> numpy, it seems that logadd or logaddexp is probably a more fitting
> name. So long as it is documented, I doubt it matters much though...

Please don't call it logadd. `logaddexp` or `logsumexp` are both fine, but the `exp` part is essential in emphasising that you are not calculating a+b using logs.

Cheers
Stéfan

From charlesr.harris at gmail.com Wed Nov 5 17:09:17 2008
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Wed, 5 Nov 2008 15:09:17 -0700
Subject: Re: [Numpy-discussion] New ufuncs
In-Reply-To: <9457e7c80811051341r5ba8b327id881841f196b67ac@mail.gmail.com>
References: <9457e7c80811051341r5ba8b327id881841f196b67ac@mail.gmail.com>
Message-ID:

On Wed, Nov 5, 2008 at 2:41 PM, Stéfan van der Walt wrote:

> 2008/11/5 T J :
> > numpy, it seems that logadd or logaddexp is probably a more fitting
> > name. So long as it is documented, I doubt it matters much though...
>
> Please don't call it logadd. `logaddexp` or `logsumexp` are both
> fine, but the `exp` part is essential in emphasising that you are not
> calculating a+b using logs.
>

I'm inclined to go with logaddexp and add logsumexp as an alias for logaddexp.reduce. But I'll wait until tomorrow to see if there are more comments.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From charlesr.harris at gmail.com Wed Nov 5 23:12:32 2008
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Wed, 5 Nov 2008 21:12:32 -0700
Subject: Re: [Numpy-discussion] New ufuncs
In-Reply-To:
References: <9457e7c80811051341r5ba8b327id881841f196b67ac@mail.gmail.com>
Message-ID:

On Wed, Nov 5, 2008 at 3:09 PM, Charles R Harris wrote:

> On Wed, Nov 5, 2008 at 2:41 PM, Stéfan van der Walt wrote:
>
>> 2008/11/5 T J :
>> > numpy, it seems that logadd or logaddexp is probably a more fitting
>> > name. So long as it is documented, I doubt it matters much though...
>>
>> Please don't call it logadd. `logaddexp` or `logsumexp` are both
>> fine, but the `exp` part is essential in emphasising that you are not
>> calculating a+b using logs.
>>
>
> I'm inclined to go with logaddexp and add logsumexp as an alias for
> logaddexp.reduce. But I'll wait until tomorrow to see if there are more
> comments.
>

Some timings of ufunc vs implementation with currently available functions. I've done the ufunc as logaddexp and defined currently corresponding functions as logadd and logsum just for quick convenience.
Results: In [15]: def logsum(x) : ....: off = x.max(axis=0) ....: return off + log(sum(exp(x - off), axis=0)) ....: In [57]: def logadd(x,y) : max1 = maximum(x,y) min1 = minimum(x,y) return max1 + log1p(exp(min1 - max1)) ....: In [61]: a = np.random.random(size=(1000,1000)) In [62]: b = np.random.random(size=(1000,1000)) In [63]: time x = logadd(a,b) CPU times: user 0.15 s, sys: 0.02 s, total: 0.17 s Wall time: 0.17 s In [65]: time x = logaddexp(a,b) CPU times: user 0.12 s, sys: 0.00 s, total: 0.13 s Wall time: 0.13 s In [67]: time x = logsum(a) CPU times: user 0.10 s, sys: 0.01 s, total: 0.11 s Wall time: 0.11 s In [69]: time x = logaddexp.reduce(a, axis=0) CPU times: user 0.14 s, sys: 0.00 s, total: 0.14 s Wall time: 0.14 s It looks like a ufunc implementation is just a bit faster for adding two arrays but for summing along axis logsum is a bit faster. This isn't unexpected because repeated calls to logaddexp isn't the most efficient way to sum. For smaller arrays, say 10x10 the ufunc wins in both cases by significant margins (like 2x) because of function call overhead. What sort of numbers do folks typically use? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From crleblanc at gmail.com Wed Nov 5 23:18:44 2008 From: crleblanc at gmail.com (Chris LeBlanc) Date: Thu, 6 Nov 2008 17:18:44 +1300 Subject: [Numpy-discussion] passing a C array to embedded Python fromC code In-Reply-To: References: <7EFBEC7FA86C1141B59B59EEAEE3294FBC38AE@EMAIL2.exchange.electric.net> Message-ID: Hi Matthieu, Thanks very much, thats exactly the sort of information I was looking for. I'm heading to a conference this weekend, but hope to get started on this very soon. Cheers, Chris On Tue, Nov 4, 2008 at 10:34 PM, Matthieu Brucher wrote: > Hi, > > I've translated it on my blog (http://matt.eifelle.com/) and published > it this morning. > > Matthieu > > 2008/10/30 Anthony Floyd : >> Hi Chris, >> >>> Matthieu Brucher wrote: >>> > If you can >>> > follow a French tutorial, you can go on >>> > >>> http://matthieu-brucher.developpez.com/tutoriels/python/swig-numpy/#LV >>> > to have a skeletton for your issue. >>> >>> That looks very useful -- any chance of an English >>> translation? My one >>> year of high school French is proving useless. Otherwise, the code >>> itself is still quite helpful. >>> >> >> Google does a pretty good job on this one: >> >> http://translate.google.com/translate?u=http%3A%2F%2Fmatthieu-brucher.developpez.com%2Ftutoriels%2Fpython%2Fswig-numpy%2F%23LV&sl=fr&tl=en&hl=en&ie=UTF-8 >> >> Anthony. >> >> -- >> Anthony Floyd, PhD >> Convergent Manufacturing Technologies Inc. >> 6190 Agronomy Rd, Suite 403 >> Vancouver BC V6T 1Z3 >> CANADA >> >> Email: Anthony.Floyd at convergent.ca | Tel: 604-822-9682 x102 >> WWW: http://www.convergent.ca | Fax: 604-822-9659 >> >> CMT is hiring: See http://www.convergent.ca for details >> >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at scipy.org >> http://projects.scipy.org/mailman/listinfo/numpy-discussion >> > > > > -- > Information System Engineer, Ph.D. 
> Website: http://matthieu-brucher.developpez.com/ > Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 > LinkedIn: http://www.linkedin.com/in/matthieubrucher > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > From nwagner at iam.uni-stuttgart.de Thu Nov 6 02:20:35 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 06 Nov 2008 08:20:35 +0100 Subject: [Numpy-discussion] I/O options Message-ID: Hi all, How can I save an array to a file with the following so called small field format (NASTRAN). Each row consists of ten fields of eight characters each. Field 10 is used only for optional continuation information when applicable. 12345678123456781234567812345678123456781234567812345678123456781234567812345678 xxxxxxxxyyyyyyyyzzzzzzzz........ An example would be appreciated. Nils From matthieu.brucher at gmail.com Thu Nov 6 03:30:17 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 6 Nov 2008 09:30:17 +0100 Subject: [Numpy-discussion] passing a C array to embedded Python fromC code In-Reply-To: References: <7EFBEC7FA86C1141B59B59EEAEE3294FBC38AE@EMAIL2.exchange.electric.net> Message-ID: Hi Chris, No problem, the Internet is made for sharing ideas and solutions ;) Matthieu 2008/11/6 Chris LeBlanc : > Hi Matthieu, > > Thanks very much, thats exactly the sort of information I was looking > for. I'm heading to a conference this weekend, but hope to get > started on this very soon. > > Cheers, > Chris > > On Tue, Nov 4, 2008 at 10:34 PM, Matthieu Brucher > wrote: >> Hi, >> >> I've translated it on my blog (http://matt.eifelle.com/) and published >> it this morning. >> >> Matthieu >> >> 2008/10/30 Anthony Floyd : >>> Hi Chris, >>> >>>> Matthieu Brucher wrote: >>>> > If you can >>>> > follow a French tutorial, you can go on >>>> > >>>> http://matthieu-brucher.developpez.com/tutoriels/python/swig-numpy/#LV >>>> > to have a skeletton for your issue. >>>> >>>> That looks very useful -- any chance of an English >>>> translation? My one >>>> year of high school French is proving useless. Otherwise, the code >>>> itself is still quite helpful. >>>> >>> >>> Google does a pretty good job on this one: >>> >>> http://translate.google.com/translate?u=http%3A%2F%2Fmatthieu-brucher.developpez.com%2Ftutoriels%2Fpython%2Fswig-numpy%2F%23LV&sl=fr&tl=en&hl=en&ie=UTF-8 >>> >>> Anthony. >>> >>> -- >>> Anthony Floyd, PhD >>> Convergent Manufacturing Technologies Inc. >>> 6190 Agronomy Rd, Suite 403 >>> Vancouver BC V6T 1Z3 >>> CANADA >>> >>> Email: Anthony.Floyd at convergent.ca | Tel: 604-822-9682 x102 >>> WWW: http://www.convergent.ca | Fax: 604-822-9659 >>> >>> CMT is hiring: See http://www.convergent.ca for details >>> >>> _______________________________________________ >>> Numpy-discussion mailing list >>> Numpy-discussion at scipy.org >>> http://projects.scipy.org/mailman/listinfo/numpy-discussion >>> >> >> >> >> -- >> Information System Engineer, Ph.D. 
>> Website: http://matthieu-brucher.developpez.com/
>> Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
>> LinkedIn: http://www.linkedin.com/in/matthieubrucher
>> _______________________________________________
>> Numpy-discussion mailing list
>> Numpy-discussion at scipy.org
>> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>

--
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher

From philbinj at gmail.com Thu Nov 6 09:53:27 2008
From: philbinj at gmail.com (James Philbin)
Date: Thu, 6 Nov 2008 14:53:27 +0000
Subject: [Numpy-discussion] Nasty bug with recarray and cPickle
Message-ID: <2b1c8c4f0811060653u6dd1cc00t6431a97f1d3e0ee0@mail.gmail.com>

Hi,

I might be doing something stupid so I thought I'd check here before filing a bug report. Firstly:

In [8]: np.__version__
Out[8]: '1.3.0.dev5883'

Basically, pickling an element from a recarray seems to break silently:

In [1]: import numpy as np
In [2]: dtype = [('r','f4'),('g','f4'),('b','f4')]
In [3]: arr = np.ones((10,), dtype=dtype)
In [4]: arr
Out[4]:
array([(1.0, 1.0, 1.0), (1.0, 1.0, 1.0), (1.0, 1.0, 1.0),
       (1.0, 1.0, 1.0), (1.0, 1.0, 1.0), (1.0, 1.0, 1.0),
       (1.0, 1.0, 1.0), (1.0, 1.0, 1.0), (1.0, 1.0, 1.0),
       (1.0, 1.0, 1.0)],
      dtype=[('r', '<f4'), ('g', '<f4'), ('b', '<f4')])

Hi all,

What can be done if the new shape is not compatible with the original shape ? The number of columns is fixed and should be 8. One could split the original array C

>>> C
array([[ 0.00000000e+00, 1.00000000e-01],
       [ 4.15000000e+01, 1.00000000e-01],
       [ 4.16000000e+01, 1.00000000e-01],
       [ 6.07500000e+01, 1.00000000e-01],
       [ 6.08500000e+01, 1.00000000e-01],
       [ 8.22550000e+01, 1.00000000e-01],
       [ 8.23550000e+01, 1.00000000e-01],
       [ 9.42550000e+01, 1.00000000e-01],
       [ 9.43550000e+01, 1.00000000e-01],
       [ 9.99100000e+01, 1.00000000e-01],
       [ 1.00010000e+02, 1.00000000e-01],
       [ 1.07660000e+02, 1.00000000e-01],
       [ 1.07760000e+02, 1.00000000e-01],
       [ 1.28000000e+02, 1.00000000e-01],
       [ 1.28100000e+02, 1.00000000e-01]])
>>> shape(C)
(15, 2)
>>> A = C[:8,:]
>>> A
array([[ 0. , 0.1 ],
       [ 41.5 , 0.1 ],
       [ 41.6 , 0.1 ],
       [ 60.75 , 0.1 ],
       [ 60.85 , 0.1 ],
       [ 82.255, 0.1 ],
       [ 82.355, 0.1 ],
       [ 94.255, 0.1 ]])
>>> A = reshape(A,(2,8))
>>> A
array([[ 0. , 0.1 , 41.5 , 0.1 , 41.6 , 0.1 , 60.75 , 0.1 ],
       [ 60.85 , 0.1 , 82.255, 0.1 , 82.355, 0.1 , 94.255, 0.1 ]])
>>> B = C[8:,:]
>>> B
array([[ 9.43550000e+01, 1.00000000e-01],
       [ 9.99100000e+01, 1.00000000e-01],
       [ 1.00010000e+02, 1.00000000e-01],
       [ 1.07660000e+02, 1.00000000e-01],
       [ 1.07760000e+02, 1.00000000e-01],
       [ 1.28000000e+02, 1.00000000e-01],
       [ 1.28100000e+02, 1.00000000e-01]])
>>> B = reshape(B,(2,7))
>>> B
array([[ 9.43550000e+01, 1.00000000e-01, 9.99100000e+01, 1.00000000e-01,
         1.00010000e+02, 1.00000000e-01, 1.07660000e+02],
       [ 1.00000000e-01, 1.07760000e+02, 1.00000000e-01, 1.28000000e+02,
         1.00000000e-01, 1.28100000e+02, 1.00000000e-01]])

Nils

From nadavh at visionsense.com Thu Nov 6 12:47:19 2008
From: nadavh at visionsense.com (Nadav Horesh)
Date: Thu, 6 Nov 2008 19:47:19 +0200
Subject: [Numpy-discussion] reshape
References:
Message-ID: <710F2847B0018641891D9A216027636029C2F8@ex3.envision.co.il>

Can you clarify?

-----Original Message-----
From: numpy-discussion-bounces at scipy.org on behalf of Nils Wagner
Sent: Thu 06-November-08 17:13
To: numpy-discussion at scipy.org
Subject: [Numpy-discussion] reshape

Hi all,

What can be done if the new shape is not compatible with the original shape ? The number of columns is fixed and should be 8. One could split the original array C

>>> C
array([[ 0.00000000e+00, 1.00000000e-01],
       [ 4.15000000e+01, 1.00000000e-01],
       [ 4.16000000e+01, 1.00000000e-01],
       [ 6.07500000e+01, 1.00000000e-01],
       [ 6.08500000e+01, 1.00000000e-01],
       [ 8.22550000e+01, 1.00000000e-01],
       [ 8.23550000e+01, 1.00000000e-01],
       [ 9.42550000e+01, 1.00000000e-01],
       [ 9.43550000e+01, 1.00000000e-01],
       [ 9.99100000e+01, 1.00000000e-01],
       [ 1.00010000e+02, 1.00000000e-01],
       [ 1.07660000e+02, 1.00000000e-01],
       [ 1.07760000e+02, 1.00000000e-01],
       [ 1.28000000e+02, 1.00000000e-01],
       [ 1.28100000e+02, 1.00000000e-01]])
>>> shape(C)
(15, 2)
>>> A = C[:8,:]
>>> A
array([[ 0. , 0.1 ],
       [ 41.5 , 0.1 ],
       [ 41.6 , 0.1 ],
       [ 60.75 , 0.1 ],
       [ 60.85 , 0.1 ],
       [ 82.255, 0.1 ],
       [ 82.355, 0.1 ],
       [ 94.255, 0.1 ]])
>>> A = reshape(A,(2,8))
>>> A
array([[ 0. , 0.1 , 41.5 , 0.1 , 41.6 , 0.1 , 60.75 , 0.1 ],
       [ 60.85 , 0.1 , 82.255, 0.1 , 82.355, 0.1 , 94.255, 0.1 ]])
>>> B = C[8:,:]
>>> B
array([[ 9.43550000e+01, 1.00000000e-01],
       [ 9.99100000e+01, 1.00000000e-01],
       [ 1.00010000e+02, 1.00000000e-01],
       [ 1.07660000e+02, 1.00000000e-01],
       [ 1.07760000e+02, 1.00000000e-01],
       [ 1.28000000e+02, 1.00000000e-01],
       [ 1.28100000e+02, 1.00000000e-01]])
>>> B = reshape(B,(2,7))
>>> B
array([[ 9.43550000e+01, 1.00000000e-01, 9.99100000e+01, 1.00000000e-01,
         1.00010000e+02, 1.00000000e-01, 1.07660000e+02],
       [ 1.00000000e-01, 1.07760000e+02, 1.00000000e-01, 1.28000000e+02,
         1.00000000e-01, 1.28100000e+02, 1.00000000e-01]])

Nils
_______________________________________________
Numpy-discussion mailing list
Numpy-discussion at scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion

-------------- next part --------------
A non-text attachment was scrubbed...
Name: winmail.dat
Type: application/ms-tnef
Size: 3191 bytes
Desc: not available
URL:

From nwagner at iam.uni-stuttgart.de Thu Nov 6 13:59:47 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Thu, 06 Nov 2008 19:59:47 +0100
Subject: Re: [Numpy-discussion] reshape
In-Reply-To: <710F2847B0018641891D9A216027636029C2F8@ex3.envision.co.il>
References: <710F2847B0018641891D9A216027636029C2F8@ex3.envision.co.il>
Message-ID:

On Thu, 6 Nov 2008 19:47:19 +0200 "Nadav Horesh" wrote:
> Can you clarify?

I have an array with a number of rows (nrows) and two columns. The first column entries correspond to x_i, the second column contains the corresponding values y_i = f(x_i). That array should be written to a file where each line consists of 4 pairs (x_i, y_i=f(x_i)) except the last line if nrows/8 is not an integer.

HTH,
Nils

From giovanni.samaey at cs.kuleuven.be Thu Nov 6 14:25:58 2008
From: giovanni.samaey at cs.kuleuven.be (Giovanni Samaey)
Date: Thu, 6 Nov 2008 20:25:58 +0100
Subject: Re: [Numpy-discussion] random number generation in python compared to gsl
In-Reply-To: <3d375d730811051001r4fc45a1et49996f56ab20bdf4@mail.gmail.com>
References: <30E2F219-B27A-4872-8BC7-6194537A6F8C@cs.kuleuven.be> <6D71EFCC-79B0-435B-B731-611C0D6A5EE1@cs.kuleuven.be> <3d375d730811051001r4fc45a1et49996f56ab20bdf4@mail.gmail.com>
Message-ID:

Dear Robert,

indeed, this is the difference ! Thanks ! Seeding numpy with 4357 gives identical sequences...

Giovanni

On 05 Nov 2008, at 19:01, Robert Kern wrote:
> On Wed, Nov 5, 2008 at 08:05, Giovanni Samaey wrote:
>>>
>>> Hi,
>>> how about other seed values ?
>>> I thought seed=0 is (often) used to mean a "random", i.e. current time or alike, seed value ... !?
>>
>> Not in this case: I always get the same sequence with seed=0
>> (different for both implementations, but the same each time I run it.)
>> I got around it by installing pygsl and taking random numbers from
>> there instead of from numpy.
>>
>> But I still find it strange to get two different sequences from two
>> implementations that claim to be the same algorithm...
>
> GSL has this bit of code:
>
> if (s == 0)
> s = 4357; /* the default seed is 4357 */
>
> We don't. Otherwise, I believe the two seeding algorithms are
> identical.
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
> -- Umberto Eco
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion

From nadavh at visionsense.com Thu Nov 6 14:43:30 2008
From: nadavh at visionsense.com (Nadav Horesh)
Date: Thu, 6 Nov 2008 21:43:30 +0200
Subject: [Numpy-discussion] reshape
References: <710F2847B0018641891D9A216027636029C2F8@ex3.envision.co.il>
Message-ID: <710F2847B0018641891D9A216027636029C2FA@ex3.envision.co.il>

cc = C.ravel()
lines_list = cc[i:i+8] for i in range(1, len(cc), 8)]

Nadav

-----Original Message-----
From: numpy-discussion-bounces at scipy.org on behalf of Nils Wagner
Sent: Thu 06-November-08 20:59
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] reshape

On Thu, 6 Nov 2008 19:47:19 +0200 "Nadav Horesh" wrote:
> Can you clarify?

I have an array with a number of rows (nrows) and two columns.
The first column entries correspond to x_i, the second column contains the corresponding values y_i = f(x_i). That array should be written to a file where each line consists of 4 pairs (x_i, y_i=f(x_i)) except the last line if nrows/8 is not an integer.

HTH,
Nils
_______________________________________________
Numpy-discussion mailing list
Numpy-discussion at scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion

-------------- next part --------------
A non-text attachment was scrubbed...
Name: winmail.dat
Type: application/ms-tnef
Size: 3143 bytes
Desc: not available
URL:

From nadavh at visionsense.com Thu Nov 6 14:50:52 2008
From: nadavh at visionsense.com (Nadav Horesh)
Date: Thu, 6 Nov 2008 21:50:52 +0200
Subject: [Numpy-discussion] reshape
References: <710F2847B0018641891D9A216027636029C2F8@ex3.envision.co.il> <710F2847B0018641891D9A216027636029C2FA@ex3.envision.co.il>
Message-ID: <710F2847B0018641891D9A216027636029C2FB@ex3.envision.co.il>

A correction:

lines_list = [cc[i:i+8] for i in range(1, len(cc), 8)]

Nadav

-----Original Message-----
From: numpy-discussion-bounces at scipy.org on behalf of Nadav Horesh
Sent: Thu 06-November-08 21:43
To: Discussion of Numerical Python
Subject: RE: [Numpy-discussion] reshape

cc = C.ravel()
lines_list = cc[i:i+8] for i in range(1, len(cc), 8)]

Nadav

-----Original Message-----
From: numpy-discussion-bounces at scipy.org on behalf of Nils Wagner
Sent: Thu 06-November-08 20:59
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] reshape

On Thu, 6 Nov 2008 19:47:19 +0200 "Nadav Horesh" wrote:
> Can you clarify?

I have an array with a number of rows (nrows) and two columns. The first column entries correspond to x_i, the second column contains the corresponding values y_i = f(x_i). That array should be written to a file where each line consists of 4 pairs (x_i, y_i=f(x_i)) except the last line if nrows/8 is not an integer.

HTH,
Nils
_______________________________________________
Numpy-discussion mailing list
Numpy-discussion at scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion

-------------- next part --------------
A non-text attachment was scrubbed...
Name: winmail.dat
Type: application/ms-tnef
Size: 3199 bytes
Desc: not available
URL:

From barrywark at gmail.com Thu Nov 6 15:05:07 2008
From: barrywark at gmail.com (Barry Wark)
Date: Thu, 6 Nov 2008 12:05:07 -0800
Subject: [Numpy-discussion] stability of numpy.random.RandomState API?
Message-ID:

I'm just about to embark on a long-term research project and was planning to use numpy.random to generate stimuli for our experiments. We plan to store only the parameters and RandomState seed for each stimulus and I'm concerned about stability of the API in the long term: will the parameters and random seed we store now work with future versions of numpy.random? I think I recall that there was a change in the random seed format some time around numpy 1.0.

Thanks,
Barry

From nwagner at iam.uni-stuttgart.de Thu Nov 6 15:07:58 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Thu, 06 Nov 2008 21:07:58 +0100
Subject: Re: [Numpy-discussion] reshape
In-Reply-To: <710F2847B0018641891D9A216027636029C2FB@ex3.envision.co.il>
References: <710F2847B0018641891D9A216027636029C2F8@ex3.envision.co.il> <710F2847B0018641891D9A216027636029C2FA@ex3.envision.co.il> <710F2847B0018641891D9A216027636029C2FB@ex3.envision.co.il>
Message-ID:

On Thu, 6 Nov 2008 21:50:52 +0200 "Nadav Horesh" wrote:
> A correction:
>
> lines_list = [cc[i:i+8] for i in range(1, len(cc), 8)]
>
> Nadav

Hi Nadav,

Thank you very much. My next question: how can I save lines_list to a file with the following so-called small field format (NASTRAN)? Each row consists of ten fields of eight characters each. Field 1 should be empty in my application. Field 10 is used only for optional continuation information when applicable.

12345678123456781234567812345678123456781234567812345678123456781234567812345678
xxxxxxxxyyyyyyyyzzzzzzzz........

Cheers,
Nils

From robert.kern at gmail.com Thu Nov 6 15:09:19 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 6 Nov 2008 14:09:19 -0600
Subject: Re: [Numpy-discussion] stability of numpy.random.RandomState API?
In-Reply-To:
References:
Message-ID: <3d375d730811061209h7149db66w747a653881ba249d@mail.gmail.com>

On Thu, Nov 6, 2008 at 14:05, Barry Wark wrote:
> I'm just about to embark on a long-term research project and was
> planning to use numpy.random to generate stimuli for our experiments.
> We plan to store only the parameters and RandomState seed for each
> stimulus and I'm concerned about stability of the API in the long
> term: will the parameters and random seed we store now work with
> future versions of numpy.random?

It should. But just in case, make sure you explicitly instantiate RandomState objects instead of using the functions in numpy.random. That way, should we need to fix some bug that might change the results, you can always pull out the current mtrand code and use it independently.

> I think I recall that there was a
> change in the random seed format some time around numpy 1.0.

I don't think I changed it after 1.0. Before 1.0, we explicitly warned people about API instability.
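Returning to Nils's small-field question above, here is a minimal sketch of writing the (x_i, y_i) pairs in fixed eight-character fields, four pairs per data line; the '%8g' format code and the function name are assumptions made for illustration, not part of the original posts:

    import numpy as np

    def write_small_field(filename, data):
        # data: (nrows, 2) array of (x_i, y_i) pairs; emit 8 values per
        # line, each in a field of exactly 8 characters, with field 1
        # left blank as described above (fields 2-9 carry the data)
        flat = np.asarray(data).ravel()          # x1, y1, x2, y2, ...
        with open(filename, 'w') as f:
            for start in range(0, len(flat), 8):
                chunk = flat[start:start + 8]    # last chunk may be short
                fields = ''.join('%8g' % v for v in chunk)
                f.write(' ' * 8 + fields + '\n')

Note that the grouping here starts at index 0, so the first pair is not dropped, and an incomplete final group naturally becomes the shorter last line.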
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From tjhnson at gmail.com Thu Nov 6 15:23:24 2008 From: tjhnson at gmail.com (T J) Date: Thu, 6 Nov 2008 12:23:24 -0800 Subject: [Numpy-discussion] New ufuncs In-Reply-To: References: <9457e7c80811051341r5ba8b327id881841f196b67ac@mail.gmail.com> Message-ID: On Wed, Nov 5, 2008 at 2:09 PM, Charles R Harris wrote: > I'm inclined to go with logaddexp and add logsumexp as an alias for > logaddexp.reduce. But I'll wait until tomorrow to see if there are more > comments. When working in other bases, it seems like it would be good to avoid having to convert to base e and then back to base 2 with each function call (for example). Is there any desire add similar functions for other standard bases? logaddexp2 logaddexp10 logdotexp2 logdotexp10 From barrywark at gmail.com Thu Nov 6 16:12:34 2008 From: barrywark at gmail.com (Barry Wark) Date: Thu, 6 Nov 2008 13:12:34 -0800 Subject: [Numpy-discussion] stability of numpy.random.RandomState API? In-Reply-To: <3d375d730811061209h7149db66w747a653881ba249d@mail.gmail.com> References: <3d375d730811061209h7149db66w747a653881ba249d@mail.gmail.com> Message-ID: On Thu, Nov 6, 2008 at 12:09 PM, Robert Kern wrote: > On Thu, Nov 6, 2008 at 14:05, Barry Wark wrote: >> I'm just about to embark on a long-term research project and was >> planning to use numpy.random to generate stimuli for our experiments. >> We plan to store only the parameters and RandomState seed for each >> stimulus and I'm concerned about stability of the API in the long >> term: will the parameters and random seed we store now work with >> future versions of numpy.random? > > It should. But just in case, make sure you explicitly instantiate > RandomState objects instead of using the functions in numpy.random. > That way, should we need to fix some bug that might change the > results, you can always pull out the current mtrand code and use it > independently. That is our working plan, as well as to record the numpy.__version__ which was used to generate the original stimulus. Thanks for the confirmation. On a side note, this seems like a potentially big issue for many scientific users. Perhaps making a policy of keeping incompatible revisions to RandomState noted in its documentation (if they ever come up) would be useful. Even better, a module function or class method that returns an instance of RandomState as it was at a particular numpy version: r = numpy.random.RandomState.from_version(my_numpy_version, seed=None) Hmm. Sounds like a bit of work. I'll give it a go, if you think this is a valuable approach. > >> I think I recall that there was a >> change in the random seed format some time around numpy 1.0. > > I don't think I changed it after 1.0. Before 1.0, we explicitly warned > people about API instability. I believe you. We've been developing this app since before numpy 1.0, so I'm sure the issue cropped up from data generated pre-1.0. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." 
> -- Umberto Eco > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > From charlesr.harris at gmail.com Thu Nov 6 16:48:49 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 6 Nov 2008 14:48:49 -0700 Subject: [Numpy-discussion] New ufuncs In-Reply-To: References: <9457e7c80811051341r5ba8b327id881841f196b67ac@mail.gmail.com> Message-ID: On Thu, Nov 6, 2008 at 1:23 PM, T J wrote: > On Wed, Nov 5, 2008 at 2:09 PM, Charles R Harris > wrote: > > I'm inclined to go with logaddexp and add logsumexp as an alias for > > logaddexp.reduce. But I'll wait until tomorrow to see if there are more > > comments. > > When working in other bases, it seems like it would be good to avoid > having to convert to base e and then back to base 2 with each function > call (for example). Is there any desire add similar functions for > other standard bases? > I suppose that depends on who you ask ;) What is your particular interest in these other bases and why would they be better than working in base e and converting at the end? The only one I could see really having a fast implementation is log2. In fact, I think the standard log starts in log2 by pulling in the floating point exponent and then using some sort of rational approximation of log2 over the range [1,2) on the mantissa. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu Nov 6 16:55:20 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 6 Nov 2008 15:55:20 -0600 Subject: [Numpy-discussion] stability of numpy.random.RandomState API? In-Reply-To: References: <3d375d730811061209h7149db66w747a653881ba249d@mail.gmail.com> Message-ID: <3d375d730811061355w7ceea439k526fcdc14b3fec40@mail.gmail.com> On Thu, Nov 6, 2008 at 15:12, Barry Wark wrote: > On Thu, Nov 6, 2008 at 12:09 PM, Robert Kern wrote: >> On Thu, Nov 6, 2008 at 14:05, Barry Wark wrote: >>> I'm just about to embark on a long-term research project and was >>> planning to use numpy.random to generate stimuli for our experiments. >>> We plan to store only the parameters and RandomState seed for each >>> stimulus and I'm concerned about stability of the API in the long >>> term: will the parameters and random seed we store now work with >>> future versions of numpy.random? >> >> It should. But just in case, make sure you explicitly instantiate >> RandomState objects instead of using the functions in numpy.random. >> That way, should we need to fix some bug that might change the >> results, you can always pull out the current mtrand code and use it >> independently. > > That is our working plan, as well as to record the numpy.__version__ > which was used to generate the original stimulus. Thanks for the > confirmation. > > On a side note, this seems like a potentially big issue for many > scientific users. Perhaps making a policy of keeping incompatible > revisions to RandomState noted in its documentation (if they ever > come up) would be useful. Even better, a module function or class > method that returns an instance of RandomState as it was at a > particular numpy version: > > r = numpy.random.RandomState.from_version(my_numpy_version, seed=None) > > Hmm. Sounds like a bit of work. I'll give it a go, if you think this > is a valuable approach. > >> >>> I think I recall that there was a >>> change in the random seed format some time around numpy 1.0. 
>> >> I don't think I changed it after 1.0. Before 1.0, we explicitly warned >> people about API instability. > > I believe you. We've been developing this app since before numpy 1.0, > so I'm sure the issue cropped up from data generated pre-1.0. Okay. Actually, now that I think about it, there have been changes that would affect results using the nonuniform distributions. These should only have arisen from fixing bugs (i.e. the previous results were wrong, not just different). Do you have any thoughts on how you would want us to handle that case? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From tjhnson at gmail.com Thu Nov 6 17:17:37 2008 From: tjhnson at gmail.com (T J) Date: Thu, 6 Nov 2008 14:17:37 -0800 Subject: [Numpy-discussion] New ufuncs In-Reply-To: References: <9457e7c80811051341r5ba8b327id881841f196b67ac@mail.gmail.com> Message-ID: On Thu, Nov 6, 2008 at 1:48 PM, Charles R Harris wrote: > What is your particular interest in these other bases and why would > they be better than working in base e and converting at the end? The interest is in information theory, where quantities are (standardly) represented in bits. So log2 quantities are often stored by the user and then passed into functions or classes. The main reason I'd like to shy away from conversions is that I also make use of generators/iterators and having next() convert to bits before each yield is not ideal (as these things are often slow enough and will be called many times). >The only one I could see really having a fast implementation is log2. No disagreement here :) From tjhnson at gmail.com Thu Nov 6 17:23:22 2008 From: tjhnson at gmail.com (T J) Date: Thu, 6 Nov 2008 14:23:22 -0800 Subject: [Numpy-discussion] New ufuncs In-Reply-To: References: <9457e7c80811051341r5ba8b327id881841f196b67ac@mail.gmail.com> Message-ID: On Thu, Nov 6, 2008 at 2:17 PM, T J wrote: > > The interest is in information theory, where quantities are > (standardly) represented in bits. I think this is also true in the machine learning community. From charlesr.harris at gmail.com Thu Nov 6 17:36:25 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 6 Nov 2008 15:36:25 -0700 Subject: [Numpy-discussion] New ufuncs In-Reply-To: References: <9457e7c80811051341r5ba8b327id881841f196b67ac@mail.gmail.com> Message-ID: On Thu, Nov 6, 2008 at 3:17 PM, T J wrote: > On Thu, Nov 6, 2008 at 1:48 PM, Charles R Harris > wrote: > > What is your particular interest in these other bases and why would > > they be better than working in base e and converting at the end? > > The interest is in information theory, where quantities are > (standardly) represented in bits. So log2 quantities are often stored > by the user and then passed into functions or classes. The main > reason I'd like to shy away from conversions is that I also make use > of generators/iterators and having next() convert to bits before each > yield is not ideal (as these things are often slow enough and will be > called many times). > I could add exp2, log2, and logaddexp2 pretty easily. Almost too easily, I don't want to clutter up numpy with a lot of functions. However, if there is a community for these functions I will put them in. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From barrywark at gmail.com Thu Nov 6 17:58:46 2008 From: barrywark at gmail.com (Barry Wark) Date: Thu, 6 Nov 2008 14:58:46 -0800 Subject: [Numpy-discussion] stability of numpy.random.RandomState API? In-Reply-To: <3d375d730811061355w7ceea439k526fcdc14b3fec40@mail.gmail.com> References: <3d375d730811061209h7149db66w747a653881ba249d@mail.gmail.com> <3d375d730811061355w7ceea439k526fcdc14b3fec40@mail.gmail.com> Message-ID: On Thu, Nov 6, 2008 at 1:55 PM, Robert Kern wrote: > On Thu, Nov 6, 2008 at 15:12, Barry Wark wrote: >> On Thu, Nov 6, 2008 at 12:09 PM, Robert Kern wrote: >>> On Thu, Nov 6, 2008 at 14:05, Barry Wark wrote: >>>> I'm just about to embark on a long-term research project and was >>>> planning to use numpy.random to generate stimuli for our experiments. >>>> We plan to store only the parameters and RandomState seed for each >>>> stimulus and I'm concerned about stability of the API in the long >>>> term: will the parameters and random seed we store now work with >>>> future versions of numpy.random? >>> >>> It should. But just in case, make sure you explicitly instantiate >>> RandomState objects instead of using the functions in numpy.random. >>> That way, should we need to fix some bug that might change the >>> results, you can always pull out the current mtrand code and use it >>> independently. >> >> That is our working plan, as well as to record the numpy.__version__ >> which was used to generate the original stimulus. Thanks for the >> confirmation. >> >> On a side note, this seems like a potentially big issue for many >> scientific users. Perhaps making a policy of keeping incompatible >> revisions to RandomState noted in its documentation (if they ever >> come up) would be useful. Even better, a module function or class >> method that returns an instance of RandomState as it was at a >> particular numpy version: >> >> r = numpy.random.RandomState.from_version(my_numpy_version, seed=None) >> >> Hmm. Sounds like a bit of work. I'll give it a go, if you think this >> is a valuable approach. >> >>> >>>> I think I recall that there was a >>>> change in the random seed format some time around numpy 1.0. >>> >>> I don't think I changed it after 1.0. Before 1.0, we explicitly warned >>> people about API instability. >> >> I believe you. We've been developing this app since before numpy 1.0, >> so I'm sure the issue cropped up from data generated pre-1.0. > > Okay. Actually, now that I think about it, there have been changes > that would affect results using the nonuniform distributions. These > should only have arisen from fixing bugs (i.e. the previous results > were wrong, not just different). Do you have any thoughts on how you > would want us to handle that case? In our usage (neural physiology), we've recorded the physiological response to a given stimulus. So being able to recover the _exact_ original stimulus that produced the recorded data is critical. This is why I suggested an API which would let us get an instance of the RandomState as it was at a particular revision (including bugs) so that we could regenerate the exact original sequence. Obviously, we're happy to have the bug fixes in, and continue to use the current RandomState for new experiments. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." 
> -- Umberto Eco > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > From tjhnson at gmail.com Thu Nov 6 18:01:58 2008 From: tjhnson at gmail.com (T J) Date: Thu, 6 Nov 2008 15:01:58 -0800 Subject: [Numpy-discussion] New ufuncs In-Reply-To: References: <9457e7c80811051341r5ba8b327id881841f196b67ac@mail.gmail.com> Message-ID: On Thu, Nov 6, 2008 at 2:36 PM, Charles R Harris wrote: > I could add exp2, log2, and logaddexp2 pretty easily. Almost too easily, I > don't want to clutter up numpy with a lot of functions. However, if there is > a community for these functions I will put them in. > I worry about clutter as well. Note that scipy provides log2 and exp2 already (scipy.special). So I think only logaddexp2 would be needed and (eventually) logdotexp2. Maybe scipy.special is a better place than in numpy? Then perhaps the clutter could be avoided....though I'm probably not the best one to ask for advice on this. I will definitely use the functions and I suspect many others will as well---where ever they are placed. From robert.kern at gmail.com Thu Nov 6 18:30:02 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 6 Nov 2008 17:30:02 -0600 Subject: [Numpy-discussion] stability of numpy.random.RandomState API? In-Reply-To: References: <3d375d730811061209h7149db66w747a653881ba249d@mail.gmail.com> <3d375d730811061355w7ceea439k526fcdc14b3fec40@mail.gmail.com> Message-ID: <3d375d730811061530s757cffb1sa288709e2d899c6d@mail.gmail.com> On Thu, Nov 6, 2008 at 16:58, Barry Wark wrote: > On Thu, Nov 6, 2008 at 1:55 PM, Robert Kern wrote: >> On Thu, Nov 6, 2008 at 15:12, Barry Wark wrote: >>> On Thu, Nov 6, 2008 at 12:09 PM, Robert Kern wrote: >>>> On Thu, Nov 6, 2008 at 14:05, Barry Wark wrote: >>>>> I'm just about to embark on a long-term research project and was >>>>> planning to use numpy.random to generate stimuli for our experiments. >>>>> We plan to store only the parameters and RandomState seed for each >>>>> stimulus and I'm concerned about stability of the API in the long >>>>> term: will the parameters and random seed we store now work with >>>>> future versions of numpy.random? >>>> >>>> It should. But just in case, make sure you explicitly instantiate >>>> RandomState objects instead of using the functions in numpy.random. >>>> That way, should we need to fix some bug that might change the >>>> results, you can always pull out the current mtrand code and use it >>>> independently. >>> >>> That is our working plan, as well as to record the numpy.__version__ >>> which was used to generate the original stimulus. Thanks for the >>> confirmation. >>> >>> On a side note, this seems like a potentially big issue for many >>> scientific users. Perhaps making a policy of keeping incompatible >>> revisions to RandomState noted in its documentation (if they ever >>> come up) would be useful. Even better, a module function or class >>> method that returns an instance of RandomState as it was at a >>> particular numpy version: >>> >>> r = numpy.random.RandomState.from_version(my_numpy_version, seed=None) >>> >>> Hmm. Sounds like a bit of work. I'll give it a go, if you think this >>> is a valuable approach. >>> >>>> >>>>> I think I recall that there was a >>>>> change in the random seed format some time around numpy 1.0. >>>> >>>> I don't think I changed it after 1.0. Before 1.0, we explicitly warned >>>> people about API instability. >>> >>> I believe you. 
We've been developing this app since before numpy 1.0, >>> so I'm sure the issue cropped up from data generated pre-1.0. >> >> Okay. Actually, now that I think about it, there have been changes >> that would affect results using the nonuniform distributions. These >> should only have arisen from fixing bugs (i.e. the previous results >> were wrong, not just different). Do you have any thoughts on how you >> would want us to handle that case? > > In our usage (neural physiology), we've recorded the physiological > response to a given stimulus. So being able to recover the _exact_ > original stimulus that produced the recorded data is critical. This is > why I suggested an API which would let us get an instance of the > RandomState as it was at a particular revision (including bugs) so > that we could regenerate the exact original sequence. Obviously, we're > happy to have the bug fixes in, and continue to use the current > RandomState for new experiments. How big are these stimuli? I'd just store them in an HDF file. Some of those bugs were 32-bit/64-bit differences. Just because your future self can get the same-versioned source code doesn't mean that the results you get will be identical. I think that the versioned API would be a lot of work for a false sense of security. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From Chris.Barker at noaa.gov Thu Nov 6 18:41:40 2008 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 06 Nov 2008 15:41:40 -0800 Subject: [Numpy-discussion] stability of numpy.random.RandomState API? In-Reply-To: References: <3d375d730811061209h7149db66w747a653881ba249d@mail.gmail.com> <3d375d730811061355w7ceea439k526fcdc14b3fec40@mail.gmail.com> Message-ID: <491380B4.3040802@noaa.gov> Barry Wark wrote: > In our usage (neural physiology), we've recorded the physiological > response to a given stimulus. So being able to recover the _exact_ > original stimulus that produced the recorded data is critical. I'd be inclined to say that if you really want the exact same string of psuedo-random numbers years into the future, you'd probably be safest to generate a huge string now and save it, rather than expecting a future version of a library, perhaps running on different hardware, etc, to re-generate the exact same thing in the future. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From amcmorl at gmail.com Thu Nov 6 22:54:32 2008 From: amcmorl at gmail.com (Angus McMorland) Date: Thu, 6 Nov 2008 22:54:32 -0500 Subject: [Numpy-discussion] import 16-bit tiff - byte-order problem? Message-ID: Hi all, I'm trying to import a 16-bit tiff image into a numpy array. 
I have found, using google, suggestions to do the following: After starting with: i = Image.open('16bitGreyscaleImage.tif') Stéfan van der Walt suggested: a = np.array(i.getdata()).reshape(i.size) # a 1d numpy array and adapted from Nadav Horesh's suggestion: a = np.fromstring(i.tostring(), dtype=np.uint16).reshape(256, 256) Both give me the same answer as: a = np.array(i, dtype=np.uint16) In all cases it looks like the resulting byte order is wrong: pixels with 0 values correctly are 0 in a, in the correct places, but all non-zero values are wrong compared to the same image opened in ImageJ (in which the image looks correct). What's the conversion magic I need to invoke to correctly interpret this image type? Thanks, Angus. -- AJC McMorland Post-doctoral research fellow Neurobiology, University of Pittsburgh From robert.kern at gmail.com Thu Nov 6 22:58:42 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 6 Nov 2008 21:58:42 -0600 Subject: [Numpy-discussion] import 16-bit tiff - byte-order problem? In-Reply-To: References: Message-ID: <3d375d730811061958j4069cbb0h20e12f02e48fc860@mail.gmail.com> On Thu, Nov 6, 2008 at 21:54, Angus McMorland wrote: > Hi all, > > I'm trying to import a 16-bit tiff image into a numpy array. I have > found, using google, suggestions to do the following: > > After starting with: > i = Image.open('16bitGreyscaleImage.tif') > > Stéfan van der Walt suggested: > a = np.array(i.getdata()).reshape(i.size) # a 1d numpy array > > and adapted from Nadav Horesh's suggestion: > a = np.fromstring(i.tostring(), dtype=np.uint16).reshape(256, 256) > > Both give me the same answer as: > a = np.array(i, dtype=np.uint16) This is the preferred way to do it these days. > In all cases it looks like the resulting byte order is wrong: pixels > with 0 values correctly are 0 in a, in the correct places, but all > non-zero values are wrong compared to the same image opened in ImageJ > (in which the image looks correct). > > What's the conversion magic I need to invoke to correctly interpret > this image type? Hmm, it is possible that the __array_interface__ is giving you the wrong endian information. Anyways, use a.byteswap() to get a view of the array with the other endianness. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From amcmorl at gmail.com Thu Nov 6 23:15:14 2008 From: amcmorl at gmail.com (Angus McMorland) Date: Thu, 6 Nov 2008 23:15:14 -0500 Subject: [Numpy-discussion] import 16-bit tiff - byte-order problem? In-Reply-To: <3d375d730811061958j4069cbb0h20e12f02e48fc860@mail.gmail.com> References: <3d375d730811061958j4069cbb0h20e12f02e48fc860@mail.gmail.com> Message-ID: 2008/11/6 Robert Kern : > On Thu, Nov 6, 2008 at 21:54, Angus McMorland wrote: >> Hi all, >> >> I'm trying to import a 16-bit tiff image into a numpy array. I have >> found, using google, suggestions to do the following: >> >> After starting with: >> i = Image.open('16bitGreyscaleImage.tif') >> >> Stéfan van der Walt suggested: >> a = np.array(i.getdata()).reshape(i.size) # a 1d numpy array >> >> and adapted from Nadav Horesh's suggestion: >> a = np.fromstring(i.tostring(), dtype=np.uint16).reshape(256, 256) >> >> Both give me the same answer as: >> a = np.array(i, dtype=np.uint16) > > This is the preferred way to do it these days.
> >> In all cases it looks like the resulting byte order is wrong: pixels >> with 0 values correctly are 0 in a, in the correct places, but all >> non-zero values are wrong compared to the same image opened in ImageJ >> (in which the image looks correct). >> >> What's the conversion magic I need to invoke to correctly interpret >> this image type? > > Hmm, it is possible that the __array_interface__ is giving you the > wrong endian information. Anyways, use a.byteswap() to get a view of > the array with the other endianness. Many thanks Robert, that did the trick. I looked through the new numpy reference for something like the byteswap function, but I see now it's less obvious since it's only an array instance method and not a function, for which the documentation is more systematic. Should there be a change made somewhere to document or accommodate the other endianness of the images? Angus. > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion -- AJC McMorland Post-doctoral research fellow Neurobiology, University of Pittsburgh From f.yw at hotmail.com Fri Nov 7 01:25:43 2008 From: f.yw at hotmail.com (frank wang) Date: Thu, 6 Nov 2008 23:25:43 -0700 Subject: [Numpy-discussion] signal processing filter operation in numpy In-Reply-To: References: <3d375d730811061958j4069cbb0h20e12f02e48fc860@mail.gmail.com> Message-ID: Hi, I need to perform an iir filter operation using numpy and could not google any useful info for this. Is there a filter operation similar to the matlab filter function in Numpy? Thanks Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Fri Nov 7 01:29:46 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 7 Nov 2008 00:29:46 -0600 Subject: [Numpy-discussion] signal processing filter operation in numpy In-Reply-To: References: <3d375d730811061958j4069cbb0h20e12f02e48fc860@mail.gmail.com> Message-ID: <3d375d730811062229r57b1987bvdcf3e51d19a72c22@mail.gmail.com> On Fri, Nov 7, 2008 at 00:25, frank wang wrote: > Hi, > > I need to perform an iir filter operation using numpy and could not google any > useful info for this. Is there a filter operation similar to the matlab filter > function in Numpy? Not in numpy. scipy.signal.lfilter() does, though. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cournape at gmail.com Fri Nov 7 02:48:50 2008 From: cournape at gmail.com (David Cournapeau) Date: Fri, 7 Nov 2008 16:48:50 +0900 Subject: [Numpy-discussion] atlas not found, why? In-Reply-To: References: Message-ID: <5b8d13220811062348w65214618i9b7e4a29ac0faa4b@mail.gmail.com> On Tue, Nov 4, 2008 at 7:59 AM, T J wrote: > > So can someone explain why I *must* define ATLAS. I tried a number of > variations on site.cfg and could not get numpy to find atlas with any > of them.
Ok, I took a brief look at this: I forgot that Ubuntu and Debian added an additional library suffix to libraries depending on gfortran ABI. I added support for this in numpy.distutils - which was looking for libraries explicitly; could you retry *without* a site.cfg ? It should work, now, David From dwf at cs.toronto.edu Fri Nov 7 04:23:19 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Fri, 7 Nov 2008 04:23:19 -0500 Subject: [Numpy-discussion] import 16-bit tiff - byte-order problem? In-Reply-To: References: <3d375d730811061958j4069cbb0h20e12f02e48fc860@mail.gmail.com> Message-ID: <3F70E207-0212-49FA-A884-76160AE9DD81@cs.toronto.edu> On 6-Nov-08, at 11:15 PM, Angus McMorland wrote: > 2008/11/6 Robert Kern : >> On Thu, Nov 6, 2008 at 21:54, Angus McMorland >> wrote: >>> Hi all, >>> >>> I'm trying to import a 16-bit tiff image into a numpy array. I have >>> found, using google, suggestions to do the following: >>> >>> After starting with: >>> i = Image.open('16bitGreyscaleImage.tif') >>> >>> Stéfan van der Walt suggested: >>> a = np.array(i.getdata()).reshape(i.size) # a 1d numpy array As an aside, if you have matplotlib installed, you might be able to sidestep this problem completely with matplotlib.image.pil_to_array. Cheers, David From david at ar.media.kyoto-u.ac.jp Fri Nov 7 04:26:02 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 07 Nov 2008 18:26:02 +0900 Subject: [Numpy-discussion] atlas not found, why? In-Reply-To: <5b8d13220811062348w65214618i9b7e4a29ac0faa4b@mail.gmail.com> References: <5b8d13220811062348w65214618i9b7e4a29ac0faa4b@mail.gmail.com> Message-ID: <491409AA.1000802@ar.media.kyoto-u.ac.jp> David Cournapeau wrote: > > Ok, I took a brief look at this: I forgot that Ubuntu and Debian added > an additional library suffix to libraries depending on gfortran ABI. I > added support for this in numpy.distutils - which was looking for > libraries explicitly; could you retry *without* a site.cfg ? It > should work, now, > And it won't, because I am afraid the Ubuntu atlas package is broken in 8.10... They use the gfortran ABI, but they built the fortran wrappers with g77, according to ATL_buildinfo. https://bugs.launchpad.net/ubuntu/+source/atlas/+bug/295051 I would strongly advise using atlas on Ubuntu 8.10 until this bug is solved: it means any numpy/scipy code using linear algebra is potentially broken (segfault, wrong results). cheers, David From tjhnson at gmail.com Fri Nov 7 04:58:32 2008 From: tjhnson at gmail.com (T J) Date: Fri, 7 Nov 2008 01:58:32 -0800 Subject: [Numpy-discussion] atlas not found, why? In-Reply-To: <491409AA.1000802@ar.media.kyoto-u.ac.jp> References: <5b8d13220811062348w65214618i9b7e4a29ac0faa4b@mail.gmail.com> <491409AA.1000802@ar.media.kyoto-u.ac.jp> Message-ID: On Fri, Nov 7, 2008 at 1:26 AM, David Cournapeau wrote: > David Cournapeau wrote: >> >> Ok, I took a brief look at this: I forgot that Ubuntu and Debian added >> an additional library suffix to libraries depending on gfortran ABI. I >> added support for this in numpy.distutils - which was looking for >> libraries explicitly; could you retry *without* a site.cfg ? It >> should work, now, >> > > And it won't, because I am afraid the Ubuntu atlas package is broken in > 8.10... They use the gfortran ABI, but they built the fortran wrappers > with g77, according to ATL_buildinfo.
> > https://bugs.launchpad.net/ubuntu/+source/atlas/+bug/295051 > > I would strongly advise using atlas on Ubuntu 8.10 until this bug is > solved: it means any numpy/scipy code using linear algebra is > potentially broken (segfault, wrong results). > Intended: '''I would strongly advise *against* using atlas on Ubuntu 8.10.''' :-) That the fortran wrappers were compiled using g77 is also apparent via what is printed out during setup when ATLAS is detected: gcc -pthread _configtest.o -L/usr/lib/atlas -llapack -lblas -o _configtest ATLAS version 3.6.0 built by root on Fri Jan 9 15:57:20 UTC 2004: UNAME : Linux intech67 2.4.20 #1 SMP Fri Jan 10 18:29:51 EST 2003 i686 GNU/Linux INSTFLG : MMDEF : /fix/g/camm/atlas3-3.6.0/CONFIG/ARCHS/P4SSE2/gcc/gemm ARCHDEF : /fix/g/camm/atlas3-3.6.0/CONFIG/ARCHS/P4SSE2/gcc/misc F2CDEFS : -DAdd__ -DStringSunStyle CACHEEDGE: 1048576 F77 : /usr/bin/g77, version GNU Fortran (GCC) 3.3.3 20031229 (prerelease) (Debian) F77FLAGS : -fomit-frame-pointer -O CC : /usr/bin/gcc, version gcc (GCC) 3.3.3 20031229 (prerelease) (Debian) CC FLAGS : -fomit-frame-pointer -O3 -funroll-all-loops MCC : /usr/bin/gcc, version gcc (GCC) 3.3.3 20031229 (prerelease) (Debian) MCCFLAGS : -fomit-frame-pointer -O success! And the problem with this, as you've mentioned before, is that g77 and gfortran are not abi compatible. But isn't this issue separate from the autodetection of atlas without a site.cfg? With r5986, atlas is still only detected if I declare ATLAS: $ ATLAS=/usr/lib python setup.py build versus $ unset ATLAS; python setup.py build From tjhnson at gmail.com Fri Nov 7 05:00:01 2008 From: tjhnson at gmail.com (T J) Date: Fri, 7 Nov 2008 02:00:01 -0800 Subject: [Numpy-discussion] atlas not found, why? In-Reply-To: References: <5b8d13220811062348w65214618i9b7e4a29ac0faa4b@mail.gmail.com> <491409AA.1000802@ar.media.kyoto-u.ac.jp> Message-ID: On Fri, Nov 7, 2008 at 1:58 AM, T J wrote: > > That the fortran wrappers were compiled using g77 is also apparent via > what is printed out during setup when ATLAS is detected: > > gcc -pthread _configtest.o -L/usr/lib/atlas -llapack -lblas -o _configtest > ATLAS version 3.6.0 built by root on Fri Jan 9 15:57:20 UTC 2004: > UNAME : Linux intech67 2.4.20 #1 SMP Fri Jan 10 18:29:51 EST > 2003 i686 GNU/Linux > INSTFLG : > MMDEF : /fix/g/camm/atlas3-3.6.0/CONFIG/ARCHS/P4SSE2/gcc/gemm > ARCHDEF : /fix/g/camm/atlas3-3.6.0/CONFIG/ARCHS/P4SSE2/gcc/misc > F2CDEFS : -DAdd__ -DStringSunStyle > CACHEEDGE: 1048576 > F77 : /usr/bin/g77, version GNU Fortran (GCC) 3.3.3 20031229 > (prerelease) (Debian) > F77FLAGS : -fomit-frame-pointer -O > CC : /usr/bin/gcc, version gcc (GCC) 3.3.3 20031229 > (prerelease) (Debian) > CC FLAGS : -fomit-frame-pointer -O3 -funroll-all-loops > MCC : /usr/bin/gcc, version gcc (GCC) 3.3.3 20031229 > (prerelease) (Debian) > MCCFLAGS : -fomit-frame-pointer -O > success! > That was intended to be a question. From david at ar.media.kyoto-u.ac.jp Fri Nov 7 04:48:12 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 07 Nov 2008 18:48:12 +0900 Subject: [Numpy-discussion] atlas not found, why? In-Reply-To: References: <5b8d13220811062348w65214618i9b7e4a29ac0faa4b@mail.gmail.com> <491409AA.1000802@ar.media.kyoto-u.ac.jp> Message-ID: <49140EDC.7010401@ar.media.kyoto-u.ac.jp> T J wrote: > With r5986, atlas is > still only detected if I declare ATLAS: > > $ ATLAS=/usr/lib python setup.py build > > versus > > $ unset ATLAS; python setup.py build > It works for me on Intrepid (64 bits). Did you install libatlas3gf-base-dev ? 
(the names changed in intrepid). cheers, David From david at ar.media.kyoto-u.ac.jp Fri Nov 7 04:49:03 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 07 Nov 2008 18:49:03 +0900 Subject: [Numpy-discussion] atlas not found, why? In-Reply-To: References: <5b8d13220811062348w65214618i9b7e4a29ac0faa4b@mail.gmail.com> <491409AA.1000802@ar.media.kyoto-u.ac.jp> Message-ID: <49140F0F.9000504@ar.media.kyoto-u.ac.jp> T J wrote: > On Fri, Nov 7, 2008 at 1:58 AM, T J wrote: > >> That the fortran wrappers were compiled using g77 is also apparent via >> what is printed out during setup when ATLAS is detected: >> >> gcc -pthread _configtest.o -L/usr/lib/atlas -llapack -lblas -o _configtest >> ATLAS version 3.6.0 built by root on Fri Jan 9 15:57:20 UTC 2004: >> UNAME : Linux intech67 2.4.20 #1 SMP Fri Jan 10 18:29:51 EST >> 2003 i686 GNU/Linux >> INSTFLG : >> MMDEF : /fix/g/camm/atlas3-3.6.0/CONFIG/ARCHS/P4SSE2/gcc/gemm >> ARCHDEF : /fix/g/camm/atlas3-3.6.0/CONFIG/ARCHS/P4SSE2/gcc/misc >> F2CDEFS : -DAdd__ -DStringSunStyle >> CACHEEDGE: 1048576 >> F77 : /usr/bin/g77, version GNU Fortran (GCC) 3.3.3 20031229 >> (prerelease) (Debian) >> F77FLAGS : -fomit-frame-pointer -O >> CC : /usr/bin/gcc, version gcc (GCC) 3.3.3 20031229 >> (prerelease) (Debian) >> CC FLAGS : -fomit-frame-pointer -O3 -funroll-all-loops >> MCC : /usr/bin/gcc, version gcc (GCC) 3.3.3 20031229 >> (prerelease) (Debian) >> MCCFLAGS : -fomit-frame-pointer -O >> success! >> >> > > That was intended to be a question. > Yes, this output is generated by a call to atlas function ATL_buildinfo, the exact same I used for the bug report, David From tjhnson at gmail.com Fri Nov 7 05:21:36 2008 From: tjhnson at gmail.com (T J) Date: Fri, 7 Nov 2008 02:21:36 -0800 Subject: [Numpy-discussion] atlas not found, why? In-Reply-To: <49140EDC.7010401@ar.media.kyoto-u.ac.jp> References: <5b8d13220811062348w65214618i9b7e4a29ac0faa4b@mail.gmail.com> <491409AA.1000802@ar.media.kyoto-u.ac.jp> <49140EDC.7010401@ar.media.kyoto-u.ac.jp> Message-ID: On Fri, Nov 7, 2008 at 1:48 AM, David Cournapeau wrote: > > It works for me on Intrepid (64 bits). Did you install > libatlas3gf-base-dev ? (the names changed in intrepid). > I fear I am overlooking something obvious. 
$ sudo aptitude search libatlas p libatlas-3dnow-dev - Automatically Tuned Linear Algebra Software,3dnow static v libatlas-3gf.so - i libatlas-base-dev - Automatically Tuned Linear Algebra Software,generic static p libatlas-cpp-0.6-1 - The protocol library of the World Forge project - runtime libs p libatlas-cpp-0.6-1-dbg - The protocol library of the World Forge project - debugging libs p libatlas-cpp-0.6-dev - The protocol library of the World Forge project - header files p libatlas-cpp-doc - The protocol library of the World Forge project - documentation p libatlas-doc - Automatically Tuned Linear Algebra Software,documentation i A libatlas-headers - Automatically Tuned Linear Algebra Software,C header files p libatlas-sse-dev - Automatically Tuned Linear Algebra Software,SSE1 static i libatlas-sse2-dev - Automatically Tuned Linear Algebra Software,SSE2 static p libatlas-test - Automatically Tuned Linear Algebra Software,test programs v libatlas.so.3 - v libatlas.so.3gf - p libatlas3gf-3dnow - Automatically Tuned Linear Algebra Software,3dnow shared i A libatlas3gf-base - Automatically Tuned Linear Algebra Software,generic shared p libatlas3gf-sse - Automatically Tuned Linear Algebra Software,SSE1 shared i libatlas3gf-sse2 - Automatically Tuned Linear Algebra Software,SSE2 shared It looks like I have the important ones: libatlas-base-dev libatlas-headers libatlas-sse2-dev libatlas3gf-base libatlas3gf-sse2 But I don't see libatlas3gf-base-dev anywhere. Is this on a special repository? My sources.list is: # Intrepid Final Release Repository deb http://archive.ubuntu.com/ubuntu intrepid main restricted universe multiverse deb-src http://archive.ubuntu.com/ubuntu intrepid main restricted universe multiverse #Added by software-properties # Intrepid Security Updates deb http://archive.ubuntu.com/ubuntu intrepid-security main restricted universe multiverse deb-src http://archive.ubuntu.com/ubuntu intrepid-security main restricted universe multiverse #Added by software-properties # Intrepid Bugfix Updates deb http://archive.ubuntu.com/ubuntu intrepid-updates main restricted universe multiverse deb-src http://archive.ubuntu.com/ubuntu intrepid-updates main restricted universe multiverse #Added by software-properties I suppose it is possible that something is messed up on my end since I upgraded from 8.04. From david at ar.media.kyoto-u.ac.jp Fri Nov 7 05:16:52 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 07 Nov 2008 19:16:52 +0900 Subject: [Numpy-discussion] atlas not found, why? In-Reply-To: References: <5b8d13220811062348w65214618i9b7e4a29ac0faa4b@mail.gmail.com> <491409AA.1000802@ar.media.kyoto-u.ac.jp> <49140EDC.7010401@ar.media.kyoto-u.ac.jp> Message-ID: <49141594.8000909@ar.media.kyoto-u.ac.jp> T J wrote: > > I fear I am overlooking something obvious. > No, I mixed up the names, I meant libatlas-base-dev. > It looks like I have the important ones: > libatlas-base-dev > libatlas-headers > libatlas-sse2-dev > libatlas3gf-base > libatlas3gf-sse2 > I have only libatlas-headers, libatlas-base-dev and libatlas3gf-base installed. This is strange. You are building numpy in the top source tree, right ? And you have no site.cfg at all ? cheers, David From tjhnson at gmail.com Fri Nov 7 06:10:35 2008 From: tjhnson at gmail.com (T J) Date: Fri, 7 Nov 2008 03:10:35 -0800 Subject: [Numpy-discussion] atlas not found, why? 
In-Reply-To: <49141594.8000909@ar.media.kyoto-u.ac.jp> References: <5b8d13220811062348w65214618i9b7e4a29ac0faa4b@mail.gmail.com> <491409AA.1000802@ar.media.kyoto-u.ac.jp> <49140EDC.7010401@ar.media.kyoto-u.ac.jp> <49141594.8000909@ar.media.kyoto-u.ac.jp> Message-ID: On Fri, Nov 7, 2008 at 2:16 AM, David Cournapeau wrote: > > And you have no site.cfg at all ? > Wow. I was too focused on the current directory and didn't realize I had an old site.cfg in ~/. Two points: 1) Others (myself included) might catch such silliness sooner if the location of the cfg file were printed to screen during setup.py. As of now, I get: Running from numpy source directory. non-existing path in 'numpy/distutils': 'site.cfg' F2PY Version 2_5986 ... 2) The current system_info.py says: """ The file 'site.cfg' is looked for in 1) Directory of main setup.py file being run. 2) Home directory of user running the setup.py file as ~/.numpy-site.cfg 3) System wide directory (location of this file...) """ Doesn't this mean that it should *not* have picked up my ~/site.cfg? Anyway, I can now report that ATLAS is detected without defining any environment variables. Thanks for all the help! From cournape at gmail.com Fri Nov 7 06:23:35 2008 From: cournape at gmail.com (David Cournapeau) Date: Fri, 7 Nov 2008 20:23:35 +0900 Subject: [Numpy-discussion] atlas not found, why? In-Reply-To: References: <5b8d13220811062348w65214618i9b7e4a29ac0faa4b@mail.gmail.com> <491409AA.1000802@ar.media.kyoto-u.ac.jp> <49140EDC.7010401@ar.media.kyoto-u.ac.jp> <49141594.8000909@ar.media.kyoto-u.ac.jp> Message-ID: <5b8d13220811070323k5bb294d2l9069c9452be9761@mail.gmail.com> On Fri, Nov 7, 2008 at 8:10 PM, T J wrote: > On Fri, Nov 7, 2008 at 2:16 AM, David Cournapeau > wrote: >> >> And you have no site.cfg at all ? >> > > Wow. I was too focused on the current directory and didn't realize I > had an old site.cfg in ~/. > > Two points: > > 1) Others (myself included) might catch such silliness sooner if the > location of the cfg file were printed to screen during setup.py. As > of now, I get I agree this is confusing. Our build process generally produces way too much output, and not always useful information. For the site.cfg, there are a lot of possibilities, I think the whole process is a bit too smart for its own good, but it is hard to change without breaking someone else's workflow. I will take a look at the code to see if it is doable to print the actual site.cfg used if it is indeed used. > > Running from numpy source directory. > non-existing path in 'numpy/distutils': 'site.cfg' > F2PY Version 2_5986 > ... > > 2) The current system_info.py says: > > """ > The file 'site.cfg' is looked for in > > 1) Directory of main setup.py file being run. > 2) Home directory of user running the setup.py file as ~/.numpy-site.cfg > 3) System wide directory (location of this file...) > """ > > Doesn't this mean that it should *not* have picked up my ~/site.cfg? Yes, it seems. To be honest, I never used this feature, I am surprised myself it picked up $HOME/site.cfg. > > > Anyway, I can now report that ATLAS is detected without defining any > environment variables.
Since the fortran wrappers of the ATLAS are broken, I am not sure it is such great news :) David From meine at informatik.uni-hamburg.de Fri Nov 7 07:11:18 2008 From: meine at informatik.uni-hamburg.de (Hans Meine) Date: Fri, 7 Nov 2008 13:11:18 +0100 Subject: [Numpy-discussion] Masked arrays and pickle/unpickle In-Reply-To: <7EFBEC7FA86C1141B59B59EEAEE3294F5A3A12@EMAIL2.exchange.electric.net> References: <7EFBEC7FA86C1141B59B59EEAEE3294F5A37BB@EMAIL2.exchange.electric.net> <9457e7c80807170954q49247e5cpd665db3c5d0b9a5c@mail.gmail.com> <7EFBEC7FA86C1141B59B59EEAEE3294F5A3A12@EMAIL2.exchange.electric.net> Message-ID: <200811071311.18943.meine@informatik.uni-hamburg.de> On Thursday 17 July 2008 19:41:51 Anthony Floyd wrote: > > > What I need to know is how I can trick pickle or Numpy to > > > > put the old class into the new class. > > > > If you have an example data-file, send it to me off-list and I'll > > figure out what to do. Maybe it is as simple as > > > > np.core.ma = np.oldnumeric.ma > > Yes, pretty much. We've put ma.py into numpy.core where ma.py is > nothing more than: > > import numpy.oldnumeric.ma as ma > > class MaskedArray(ma.MaskedArray): > pass > > It works, but becomes a bit of a headache because we now have to > maintain our own numpy package so that all the developers get these > three lines when they install numpy. Did you try changing the pickled data? IIRC you could simply search&replace, since the class name is at the beginning of the representation in clear text. (I see that this is a hack, too, but it saves you from having to maintain your own numpy.) HTH, Hans From nadavh at visionsense.com Fri Nov 7 07:46:09 2008 From: nadavh at visionsense.com (Nadav Horesh) Date: Fri, 7 Nov 2008 14:46:09 +0200 Subject: [Numpy-discussion] reshape References: <710F2847B0018641891D9A216027636029C2F8@ex3.envision.co.il><710F2847B0018641891D9A216027636029C2FA@ex3.envision.co.il><710F2847B0018641891D9A216027636029C2FB@ex3.envision.co.il> Message-ID: <710F2847B0018641891D9A216027636029C2FE@ex3.envision.co.il> I cannot figure out the format specifications, but the following function might be a good starting point: def lst_to_file(lst, filename, fmt='%08d'): ''' lst: list of arrays to write filename: output file name fmt: The format of single item ''' str_lst = [] for l in lst: format = len(l)*fmt + '\n' str_lst.append(format % tuple(l)) open(filename, 'w').write(''.join(str_lst)) -----Original Message----- From: numpy-discussion-bounces at scipy.org on behalf of Nils Wagner Sent: Thursday, 06-November-08 22:07 To: Discussion of Numerical Python Subject: Re: [Numpy-discussion] reshape On Thu, 6 Nov 2008 21:50:52 +0200 "Nadav Horesh" wrote: > A correction: > > lines_list = [cc[i:i+8] for i in range(1, len(cc), 8)] > > Nadav Hi Nadav, Thank you very much. My next question: how can I save lines_list to a file with the following so-called small field format (NASTRAN)? Each row consists of ten fields of eight characters each. Field 1 should be empty in my application. Field 10 is used only for optional continuation information when applicable. 12345678123456781234567812345678123456781234567812345678123456781234567812345678 xxxxxxxxyyyyyyyyzzzzzzzz........ Cheers, Nils _______________________________________________ Numpy-discussion mailing list Numpy-discussion at scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
From zachary.pincus at yale.edu Fri Nov 7 08:33:31 2008 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Fri, 7 Nov 2008 08:33:31 -0500 Subject: [Numpy-discussion] import 16-bit tiff - byte-order problem? In-Reply-To: References: <3d375d730811061958j4069cbb0h20e12f02e48fc860@mail.gmail.com> Message-ID: <5DB21B61-F766-4AC5-94C5-37AEEF481102@yale.edu> Hi, The PIL has some fundamental architectural problems that prevent it from dealing easily with 16-bit TIFFs, which are exacerbated on little-endian platforms. Add to this a thin sheen of various byte-order bugs and other problems in the __array_interface__, and it's really hard to get consistent loading of 16-bit tiffs across platforms. (I and many others have submitted patches for these issues to no avail.) A while ago, I tried to see if I could graft the image file format parsers from the PIL onto a byte-loading backend that used numpy. Unfortunately, I really couldn't -- the parsers assume too much about the problematic architecture of the PIL. I do have a private fork of the PIL that I made which remedies the bugs and above-mentioned architectural issues, and works cross-platform, and with any endian system. (It's very restricted compared to the regular PIL -- basically it just does image IO and then converts to numpy arrays.) I haven't released this because I don't really want to make trouble -- and we're promised that a major revision of the PIL is in the offing which will fix these troubles -- but I'm happy to send the code out to those who actually need reliable 16-bit image IO. Zach From philbinj at gmail.com Fri Nov 7 09:15:48 2008 From: philbinj at gmail.com (James Philbin) Date: Fri, 7 Nov 2008 14:15:48 +0000 Subject: [Numpy-discussion] Nasty bug with recarray and cPickle In-Reply-To: <2b1c8c4f0811060653u6dd1cc00t6431a97f1d3e0ee0@mail.gmail.com> References: <2b1c8c4f0811060653u6dd1cc00t6431a97f1d3e0ee0@mail.gmail.com> Message-ID: <2b1c8c4f0811070615y3c98b32am83768c2c814bdb12@mail.gmail.com> Anyone? James On Thu, Nov 6, 2008 at 2:53 PM, James Philbin wrote: > Hi, > > I might be doing something stupid so I thought I'd check here before > filing a bug report. > Firstly: > In [8]: np.__version__ > Out[8]: '1.3.0.dev5883' > > Basically, pickling an element from a recarray seems to break silently: > In [1]: import numpy as np > > In [2]: dtype = [('r','f4'),('g','f4'),('b','f4')] > > In [3]: arr = np.ones((10,), dtype=dtype) > > In [4]: arr > Out[4]: > array([(1.0, 1.0, 1.0), (1.0, 1.0, 1.0), (1.0, 1.0, 1.0), (1.0, 1.0, 1.0), > (1.0, 1.0, 1.0), (1.0, 1.0, 1.0), (1.0, 1.0, 1.0), (1.0, 1.0, 1.0), > (1.0, 1.0, 1.0), (1.0, 1.0, 1.0)], > dtype=[('r', '<f4'), ('g', '<f4'), ('b', '<f4')]) > > In [5]: arr[0] > Out[5]: (1.0, 1.0, 1.0) > > In [6]: import cPickle; cPickle.loads(cPickle.dumps(arr[0])) > Out[6]: (0.0, 0.0, 1.8643547392640242e-38) > > > Thanks, > James > From bsouthey at gmail.com Fri Nov 7 09:48:38 2008 From: bsouthey at gmail.com (Bruce Southey) Date: Fri, 07 Nov 2008 08:48:38 -0600 Subject: [Numpy-discussion] Nasty bug with recarray and cPickle In-Reply-To: <2b1c8c4f0811070615y3c98b32am83768c2c814bdb12@mail.gmail.com> References: <2b1c8c4f0811060653u6dd1cc00t6431a97f1d3e0ee0@mail.gmail.com> <2b1c8c4f0811070615y3c98b32am83768c2c814bdb12@mail.gmail.com> Message-ID: <49145546.4000605@gmail.com> James Philbin wrote: > Anyone?
> > James > > On Thu, Nov 6, 2008 at 2:53 PM, James Philbin wrote: > >> Hi, >> >> I might be doing something stupid so I thought I'd check here before >> filing a bug report. >> Firstly: >> In [8]: np.__version__ >> Out[8]: '1.3.0.dev5883' >> >> Basically, pickling an element from a recarray seems to break silently: >> In [1]: import numpy as np >> >> In [2]: dtype = [('r','f4'),('g','f4'),('b','f4')] >> >> In [3]: arr = np.ones((10,), dtype=dtype) >> >> In [4]: arr >> Out[4]: >> array([(1.0, 1.0, 1.0), (1.0, 1.0, 1.0), (1.0, 1.0, 1.0), (1.0, 1.0, 1.0), >> (1.0, 1.0, 1.0), (1.0, 1.0, 1.0), (1.0, 1.0, 1.0), (1.0, 1.0, 1.0), >> (1.0, 1.0, 1.0), (1.0, 1.0, 1.0)], >> dtype=[('r', '<f4'), ('g', '<f4'), ('b', '<f4')]) >> >> In [5]: arr[0] >> Out[5]: (1.0, 1.0, 1.0) >> >> In [6]: import cPickle; cPickle.loads(cPickle.dumps(arr[0])) >> Out[6]: (0.0, 0.0, 1.8643547392640242e-38) >> >> >> Thanks, >> James >> >> > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > > I also get the same on my 64-bit linux Fedora rawhide with 1) Python 2.5.2 and '1.2.0' 2) Python 2.4.5 and numpy '1.1.1' 3) Python 2.6 and numpy '1.3.0.dev5935' Further, cPickle.dumps(arr) looks fine as does using cPickle.dumps(arr[0].tolist()). Bruce Using numpy 1.2.0 >>> dtype = [('r','f4'),('g','f4'),('b','f4')] >>> dtype [('r', 'f4'), ('g', 'f4'), ('b', 'f4')] >>> arr = np.ones((10,), dtype=dtype) >>> arr array([(1.0, 1.0, 1.0), (1.0, 1.0, 1.0), (1.0, 1.0, 1.0), (1.0, 1.0, 1.0), (1.0, 1.0, 1.0), (1.0, 1.0, 1.0), (1.0, 1.0, 1.0), (1.0, 1.0, 1.0), (1.0, 1.0, 1.0), (1.0, 1.0, 1.0)], dtype=[('r', '<f4'), ('g', '<f4'), ('b', '<f4')]) >>> arr[0] (1.0, 1.0, 1.0) >>> import cPickle; cPickle.loads(cPickle.dumps(arr[0])) (1.704050679100351e-37, 0.0, 1.704050679100351e-37) >>> import cPickle; cPickle.loads(cPickle.dumps(arr)) array([(1.0, 1.0, 1.0), (1.0, 1.0, 1.0), (1.0, 1.0, 1.0), (1.0, 1.0, 1.0), (1.0, 1.0, 1.0), (1.0, 1.0, 1.0), (1.0, 1.0, 1.0), (1.0, 1.0, 1.0), (1.0, 1.0, 1.0), (1.0, 1.0, 1.0)], dtype=[('r', '<f4'), ('g', '<f4'), ('b', '<f4')]) >>> import cPickle; cPickle.loads(cPickle.dumps(arr[0].tolist())) (1.0, 1.0, 1.0) From philbinj at gmail.com Fri Nov 7 12:03:45 2008 From: philbinj at gmail.com (James Philbin) Date: Fri, 7 Nov 2008 17:03:45 +0000 Subject: [Numpy-discussion] Nasty bug with recarray and cPickle In-Reply-To: <49145546.4000605@gmail.com> References: <2b1c8c4f0811060653u6dd1cc00t6431a97f1d3e0ee0@mail.gmail.com> <2b1c8c4f0811070615y3c98b32am83768c2c814bdb12@mail.gmail.com> <49145546.4000605@gmail.com> Message-ID: <2b1c8c4f0811070903n65e466e3h3b8992a02b93826f@mail.gmail.com> > I also get the same on my 64-bit linux Fedora rawhide with > ... Thanks, I've submitted this as ticket #952. James From wojciechowski_m at o2.pl Fri Nov 7 14:34:40 2008 From: wojciechowski_m at o2.pl (Marek Wojciechowski) Date: Fri, 7 Nov 2008 20:34:40 +0100 Subject: [Numpy-discussion] f2py error with ifort Message-ID: <200811072034.41010.wojciechowski_m@o2.pl> Hi! I'm trying to compile fortran code with f2py using the --fcompiler=intel flag but the following weird error occurs: Found executable /opt/intel/fce/10.0.026/bin/ifort warning: build_ext: f77_compiler=intel is not available. building '_beameb' extension error: extension '_beameb' has Fortran sources but no Fortran compiler found This is in numpy 1.2.0. In previous versions the compilation worked fine. What happened?
Greetings, Marek -- Marek Wojciechowski From robert.kern at gmail.com Fri Nov 7 14:38:13 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 7 Nov 2008 13:38:13 -0600 Subject: [Numpy-discussion] f2py error with ifort In-Reply-To: <200811072034.41010.wojciechowski_m@o2.pl> References: <200811072034.41010.wojciechowski_m@o2.pl> Message-ID: <3d375d730811071138r2b3e281p3fa18052d2ca883d@mail.gmail.com> On Fri, Nov 7, 2008 at 13:34, Marek Wojciechowski wrote: > Hi! > I'm trying to compile fortran code with f2py using the --fcompiler=intel flag > but the follwoing weird error occurs: > > Found executable /opt/intel/fce/10.0.026/bin/ifort > warning: build_ext: f77_compiler=intel is not available. > building '_beameb' extension > error: extension '_beameb' has Fortran sources but no Fortran compiler found > > This is in numpy 1.2.0. In previous verisions the compilation worked fine. > What happened? Are you on a 64-bit platform? The /opt/intel/fce/ directory suggests that you are. You need to use --fcompiler=intelem instead. You can see the available Fortran compilers with $ python setup.py config_fc --help-fcompiler -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From w.kejia at gmail.com Fri Nov 7 15:47:36 2008 From: w.kejia at gmail.com (Wu, Kejia) Date: Fri, 07 Nov 2008 12:47:36 -0800 Subject: [Numpy-discussion] About Random Number Generation In-Reply-To: <3d375d730810311049s350c010fk30572cc93636ca8a@mail.gmail.com> References: <1225473641.7737.1.camel@localhost> <3d375d730810311049s350c010fk30572cc93636ca8a@mail.gmail.com> Message-ID: <1226090856.5176.0.camel@localhost> Hi all, Thank you for your replies. Thanks On Fri, 2008-10-31 at 12:49 -0500, Robert Kern wrote: > On Fri, Oct 31, 2008 at 12:20, Wu, Kejia wrote: > > Hi all, > > > > I tried the example code here: > > http://numpy.scipy.org/numpydoc/numpy-20.html#71863 > > But failed: > > -------------------------------------- > > rng.py, line 5, in > > import RNG > > ImportError: No module named RNG > > -------------------------------------- > > > > Any suggestion? Thanks at first. > > Despite the confusing URL, that is actually documentation for Numeric, > numpy's predecessor. You can see documentation for the current version > of numpy here: > > http://docs.scipy.org/doc/ > > > Also, can any body tell me whether the random number algorithm in RNG > > package is a pseudorandom one or a real-random one? > > Pseudorandom. The Mersenne Twister, to be precise. > > > And is there an > > available implementation for Monte Carlo method in NumPy? > > "Monte Carlo" is more a general description than a specification of a > particular algorithm. There are many such methods. Which one are you > thinking of? > From fperez.net at gmail.com Fri Nov 7 23:08:33 2008 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 7 Nov 2008 20:08:33 -0800 Subject: [Numpy-discussion] Feedback on PEP 225 sent to python-dev Message-ID: Hi folks, I sent the collected feedback on this issue to python-dev: http://mail.python.org/pipermail/python-dev/2008-November/083493.html If you are interested, at this point please follow up any further discussion directly on python-dev. I'll do my best to answer any questions there, but I'd greatly appreciate if those who had an opinion on the matter could keep an eye on the python-dev discussion (if there is one) and pitch in as well. 
Regards, f From wojciechowski_m at o2.pl Sat Nov 8 15:55:01 2008 From: wojciechowski_m at o2.pl (Marek Wojciechowski) Date: Sat, 8 Nov 2008 21:55:01 +0100 Subject: [Numpy-discussion] f2py error with ifort Message-ID: <200811082155.01148.wojciechowski_m@o2.pl> On Saturday 08 November 2008, numpy-discussion-request at scipy.org wrote: > > Hi! > > I'm trying to compile fortran code with f2py using the --fcompiler=intel > > flag but the following weird error occurs: > > > > Found executable /opt/intel/fce/10.0.026/bin/ifort > > warning: build_ext: f77_compiler=intel is not available. > > building '_beameb' extension > > error: extension '_beameb' has Fortran sources but no Fortran compiler > > found > > > > This is in numpy 1.2.0. In previous versions the compilation worked > > fine. What happened? > > Are you on a 64-bit platform? The /opt/intel/fce/ directory suggests > that you are. You need to use --fcompiler=intelem instead. You can see > the available Fortran compilers with > > $ python setup.py config_fc --help-fcompiler Yes, you're right, I'm on a 64-bit platform. Setting --fcompiler=intelem solved the problem. Thanks! -- Marek Wojciechowski From dmitrey.kroshko at scipy.org Sat Nov 8 17:35:20 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Sun, 09 Nov 2008 00:35:20 +0200 Subject: [Numpy-discussion] n-dimensional array indexing question Message-ID: <49161428.6000900@scipy.org> hi all, I have array A, A.ndim = n, and 1-dimensional array B of length n. How can I get element of A with coords B[0],...,B[n-1]? i.e. A[B[0], B[1], ..., B[n-1]] A, B, n are not known till execution time, and can have unpredictable lengths (still n is usually small, no more than 4-5). I have tried via ix_ but haven't succeeded yet. Thx in advance, D. From wnbell at gmail.com Sat Nov 8 18:44:23 2008 From: wnbell at gmail.com (Nathan Bell) Date: Sat, 8 Nov 2008 18:44:23 -0500 Subject: [Numpy-discussion] n-dimensional array indexing question In-Reply-To: <49161428.6000900@scipy.org> References: <49161428.6000900@scipy.org> Message-ID: On Sat, Nov 8, 2008 at 5:35 PM, dmitrey wrote: > hi all, > I have array A, A.ndim = n, and 1-dimensional array B of length n. > How can I get element of A with coords B[0],...,B[n-1]? > i.e. A[B[0], B[1], ..., B[n-1]] > > A, B, n are not known till execution time, and can have unpredictable > lengths (still n is usually small, no more than 4-5). > > I have tried via ix_ but haven't succeeded yet. > There's probably a better way, but tuple(B) works: In [1]: from numpy import * In [2]: A = arange(8).reshape(2,2,2) In [3]: A Out[3]: array([[[0, 1], [2, 3]], [[4, 5], [6, 7]]]) In [4]: B = array([0,1,0]) In [5]: A[tuple(B)] Out[5]: 2 -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From lists.20.chth at xoxy.net Sat Nov 8 19:35:41 2008 From: lists.20.chth at xoxy.net (ctw) Date: Sat, 8 Nov 2008 19:35:41 -0500 Subject: [Numpy-discussion] tried to set void-array with object members using buffer error message Message-ID: Hi! Can someone here shed some light on this behavior: > tst = np.zeros(2,[('a',np.int32),('b','S04')]) > np.random.shuffle(tst) # this works fine > tst2 = np.zeros(2,[('a',np.int32),('b',list)]) > np.random.shuffle(tst2) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) /home/ctw/<ipython console> in <module>() /home/ctw/mtrand.pyx in mtrand.RandomState.shuffle() ValueError: tried to set void-array with object members using buffer. I get the same error if I use np.ndarray instead of list.
Also, when I put in a class I locally define I get a "data type not understood" error: > class tstcls: > pass > tst4 = np.zeros(2,[('a',np.int32),('b',tstcls)]) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) /home/ctw/<ipython console> in <module>() TypeError: data type not understood From robert.kern at gmail.com Sat Nov 8 23:14:05 2008 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 8 Nov 2008 22:14:05 -0600 Subject: [Numpy-discussion] tried to set void-array with object members using buffer error message In-Reply-To: References: Message-ID: <3d375d730811082014o76561717j1a752861d39db145@mail.gmail.com> On Sat, Nov 8, 2008 at 18:35, ctw wrote: > Hi! Can someone here shed some light on this behavior: > >> tst = np.zeros(2,[('a',np.int32),('b','S04')]) >> np.random.shuffle(tst) # this works fine > >> tst2 = np.zeros(2,[('a',np.int32),('b',list)]) >> np.random.shuffle(tst2) > --------------------------------------------------------------------------- > ValueError Traceback (most recent call last) > > /home/ctw/<ipython console> in <module>() > > /home/ctw/mtrand.pyx in mtrand.RandomState.shuffle() > > ValueError: tried to set void-array with object members using buffer. Hmm. Odd. Looks like you can't set an element of a structured array with a structured scalar if one of the fields is an object field. Seems like a bug. In [11]: from numpy import * In [12]: a = zeros(2, [('a', float), ('b', object)]) In [13]: a Out[13]: array([(0.0, 0), (0.0, 0)], dtype=[('a', '<f8'), ('b', '|O8')]) In [14]: a[0] = a[1] --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython console> in <module>() ValueError: tried to set void-array with object members using buffer. > I get the same error if I use np.ndarray instead of list. Also, when I > put in a class I locally define I get a "data type not understood" > error: Yeah, don't do that. Always use 'object'. numpy does no type-checking for object arrays/fields. It should probably give you an error for 'list', too. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Sun Nov 9 01:55:56 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sun, 09 Nov 2008 15:55:56 +0900 Subject: [Numpy-discussion] numpy, Py_ssize_t, cython and 64 bits python 2.4 Message-ID: <4916897C.7090901@ar.media.kyoto-u.ac.jp> Hi, I took a quick look at two bugs in scipy.spatial and scipy.special, linked to cython and 64 bits on python 2.4: http://scipy.org/scipy/scipy/ticket/785 At first, I was confused by the (runtime) error message given by cython; Py_ssize_t is a feature added in 2.5. The definition when built against python 2.4 is coming from numpy, which defines Py_ssize_t in numpy/ndarrayobject.h, and defines it to an int in that case (as recommended in PEP 353). Is this a cython limitation (in which case I will move this discussion to cython-dev), or is there a solution to support 64 bits python 2.4 in numpy/scipy ?
cheers, David From charlesr.harris at gmail.com Sun Nov 9 12:40:56 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 9 Nov 2008 10:40:56 -0700 Subject: [Numpy-discussion] numpy, Py_ssize_t, cython and 64 bits python 2.4 In-Reply-To: <4916897C.7090901@ar.media.kyoto-u.ac.jp> References: <4916897C.7090901@ar.media.kyoto-u.ac.jp> Message-ID: On Sat, Nov 8, 2008 at 11:55 PM, David Cournapeau < david at ar.media.kyoto-u.ac.jp> wrote: > Hi, > > I took a quick look at two bugs in scipy.spatial and scipy.special, > linked to cython and 64 bits on python 2.4: > > http://scipy.org/scipy/scipy/ticket/785 > > At first, I was confused by the (runtime) error message given by cython; > Py_ssize_t is a feature added in 2.5. The definition when built against > python 2.4 is coming from numpy, which defines Py_ssize_t in > numpy/ndarrayobject.h, and defines it to an int in that case (as > recommended in PEP 353). > Let me see if I understand this correctly. For Python < 2.5 the list indices and such are ints, while for later versions they are Py_ssize_t, which is larger on 64 bit systems. Meanwhile, Py_intptr_t is large enough to hold a pointer. So why are these two numbers being mixed? They aren't expected to have the same size for earlier Python. The quick fix is to just use ints with a fixme note ;) Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Sun Nov 9 13:44:30 2008 From: cournape at gmail.com (David Cournapeau) Date: Mon, 10 Nov 2008 03:44:30 +0900 Subject: [Numpy-discussion] numpy, Py_ssize_t, cython and 64 bits python 2.4 In-Reply-To: References: <4916897C.7090901@ar.media.kyoto-u.ac.jp> Message-ID: <5b8d13220811091044w13a07bfdhc6a0b1e207425e4e@mail.gmail.com> On Mon, Nov 10, 2008 at 2:40 AM, Charles R Harris wrote: > > > Let me see if I understand this correctly. For Python < 2.5 the list indices > and such are ints, while for later versions they are Py_ssize_t, which is > larger on 64 bit systems. Meanwhile, Py_intptr_t is large enough to hold a > pointer. yes > So why are these two numbers being mixed? It is not that they are being mixed, but that cython does not support this configuration: it has an internal check which raises an exception in such a case.
See around line 55: > > http://hg.cython.org/cython/file/764f1578df40/Cython/Includes/numpy.pxd > > As I understand, this means you can't use cython for such a > configuration, but I just wanted to confirm whether there were known > workarounds. > Lessee, cdef extern from "Python.h": ctypedef int Py_intptr_t cdef extern from "numpy/arrayobject.h": ctypedef Py_intptr_t npy_intp So they are screwing with the npy_intp type. They should hang. Numpy is numpy, Python is python, and never the two should meet. Note that none of this crap is in the c_numpy.pxd included with numpy, BTW. I'd send the cython folks a note and tell them to knock it off, the Py_* values are irrelevant to numpy. In any case, for Python < 2.5, this should be something like cdef extern from "Python.h": ctypedef int Py_ssize_t cdef extern from "numpy/arrayobject.h": ctypedef npy_intp Py_intptr_t But for Python >= 2.5 this can be a problem because cython doesn't actually read the include files and there will be a conflict with Py_ssize_t. There needs to be an #ifndef somewhere and it probably needs to be in a *.h file. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sun Nov 9 15:16:42 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 9 Nov 2008 13:16:42 -0700 Subject: [Numpy-discussion] numpy, Py_ssize_t, cython and 64 bits python 2.4 In-Reply-To: References: <4916897C.7090901@ar.media.kyoto-u.ac.jp> <5b8d13220811091044w13a07bfdhc6a0b1e207425e4e@mail.gmail.com> Message-ID: On Sun, Nov 9, 2008 at 1:01 PM, Charles R Harris wrote: > > > On Sun, Nov 9, 2008 at 11:44 AM, David Cournapeau wrote: > >> On Mon, Nov 10, 2008 at 2:40 AM, Charles R Harris >> wrote: >> > >> > >> > Let me see if I understand this correctly. For Python < 2.5 the list >> indices >> > and such are ints, while for later versions they are Py_ssize_t, which >> is >> > larger on 64 bit systems. Meanwhile, Py_intptr_t is large enough to hold >> a >> > pointer. >> >> yes >> >> > So why are these two numbers being mixed? >> >> It is note that they are being mixed, but that cython does not support >> this configuration: it has a internal check which raise an exception >> in such a case. See around line 55: >> >> http://hg.cython.org/cython/file/764f1578df40/Cython/Includes/numpy.pxd >> >> As I understand, this means you can't use cython for such a >> configuration, but I just wanted to confirm whether there were known >> workarounds. >> > > Lessee, > > cdef extern from "Python.h": > ctypedef int Py_intptr_t > > cdef extern from "numpy/arrayobject.h": > ctypedef Py_intptr_t npy_intp > > So they are screwing with the npy_intp type. They should hang. Numpy is > numpy, Python is python, and never the two should meet. Note that none of > this crap is in the c_numpy.pxd included with numpy, BTW. I'd send the > cython folks a note and tell them to knock it off, the Py_* values are > irrelevant to numpy. > > In any case, for Python < 2.5, this should be something like > > cdef extern from "Python.h": > ctypedef int Py_ssize_t > Actually, I think > > cdef extern from "numpy/arrayobject.h": > ctypedef npy_intp Py_intptr_t > should be deleted. Although the types are probably equal, one is a numpy thing and the other a Python thing. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
From robert.kern at gmail.com Sun Nov 9 16:17:17 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Sun, 9 Nov 2008 15:17:17 -0600
Subject: [Numpy-discussion] numpy, Py_ssize_t, cython and 64 bits python 2.4
In-Reply-To: 
References: <4916897C.7090901@ar.media.kyoto-u.ac.jp> <5b8d13220811091044w13a07bfdhc6a0b1e207425e4e@mail.gmail.com>
Message-ID: <3d375d730811091317y6f26bef1ubfcb7c01b78c2959@mail.gmail.com>

On Sun, Nov 9, 2008 at 14:01, Charles R Harris wrote:
>
> On Sun, Nov 9, 2008 at 11:44 AM, David Cournapeau wrote:
>>
>> On Mon, Nov 10, 2008 at 2:40 AM, Charles R Harris wrote:
>> >
>> > Let me see if I understand this correctly. For Python < 2.5 the list
>> > indices and such are ints, while for later versions they are
>> > Py_ssize_t, which is larger on 64 bit systems. Meanwhile, Py_intptr_t
>> > is large enough to hold a pointer.
>>
>> yes
>>
>> > So why are these two numbers being mixed?
>>
>> It is not that they are being mixed, but that cython does not support
>> this configuration: it has an internal check which raises an exception
>> in such a case. See around line 55:
>>
>> http://hg.cython.org/cython/file/764f1578df40/Cython/Includes/numpy.pxd
>>
>> As I understand, this means you can't use cython for such a
>> configuration, but I just wanted to confirm whether there were known
>> workarounds.
>
> Lessee,
>
> cdef extern from "Python.h":
>     ctypedef int Py_intptr_t
>
> cdef extern from "numpy/arrayobject.h":
>     ctypedef Py_intptr_t npy_intp
>
> So they are screwing with the npy_intp type. They should hang.

Chuck, you're being rude. This would be somewhat mitigated (at least from my perspective) if you knew what you were talking about. However, it appears that you don't.

For integer types, the exact ctypedefs don't actually matter. The same code would probably be generated if you used "short", even. All these do is tell Cython to convert them to/from a Python int if they cross the Python boundary.

> Numpy is numpy, Python is python, and never the two should meet. Note
> that none of this crap is in the c_numpy.pxd included with numpy, BTW.
> I'd send the cython folks a note and tell them to knock it off, the Py_*
> values are irrelevant to numpy.

And this is entirely irrelevant to the problem at hand.

David, this exception is explicitly raised in __getbuffer__() defined in numpy.pxd. It is a Cython limitation. Cython's buffer support cannot be used with numpy in Python < 2.5 currently. Ask Dag about it.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From dagss at student.matnat.uio.no Sun Nov 9 18:01:16 2008
From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn)
Date: Mon, 10 Nov 2008 00:01:16 +0100
Subject: [Numpy-discussion] Cython 0.10 released, more NumPy features
Message-ID: <49176BBC.8020103@student.matnat.uio.no>

As the latest Cython 0.10 release has some important news for Cython/NumPy users, I decided to post an announcement (hope that's ok). Download at 

What's new for NumPy users:

1) Support for access to complex float buffers.
Cython does not have native complex float syntax support [1], so this is done by using structs:

    cdef struct cdouble:
        np.double_t real
        np.double_t imag

    def f():
        cdef np.ndarray[cdouble, ndim=2] arr = np.zeros((3,3), np.cdouble)
        arr[0,1].real = 3

2) Also support for record arrays:

    cdef struct MyStruct:
        np.int32_t one
        np.int8_t two
        np.int8_t three

    def f():
        cdef MyStruct rec
        cdef np.ndarray[MyStruct] arr = np.zeros((3,), np.dtype('i4,i1,i1'))
        rec.one, rec.two, rec.three = range(1,4)
        arr[0] = rec
        print arr[0].two # prints 2

There are some restrictions, though -- in general the data buffer must be directly cast-able to the given struct type; there's no automatic endian-conversion or anything like that.

3) Support for contiguous access, which makes for quicker array access along the last/first dimension if you know you have contiguous data:

    cdef np.ndarray[int, mode="c"] arr = ...

I am likely not going to work to add features beyond this, at least this time around (but there might be minor tweaks and fixes). If anybody would like to use this but there are still showstoppers I'm unaware of then please let me know.

[1] It has been discussed and native complex support would be added to Cython if someone found time to implement it, but I can't, unfortunately.

-- 
Dag Sverre

From charlesr.harris at gmail.com Sun Nov 9 18:20:52 2008
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 9 Nov 2008 16:20:52 -0700
Subject: [Numpy-discussion] numpy, Py_ssize_t, cython and 64 bits python 2.4
In-Reply-To: <3d375d730811091317y6f26bef1ubfcb7c01b78c2959@mail.gmail.com>
References: <4916897C.7090901@ar.media.kyoto-u.ac.jp> <5b8d13220811091044w13a07bfdhc6a0b1e207425e4e@mail.gmail.com> <3d375d730811091317y6f26bef1ubfcb7c01b78c2959@mail.gmail.com>
Message-ID: 

On Sun, Nov 9, 2008 at 2:17 PM, Robert Kern wrote:
> On Sun, Nov 9, 2008 at 14:01, Charles R Harris wrote:
> >
> > On Sun, Nov 9, 2008 at 11:44 AM, David Cournapeau wrote:
> >>
> >> On Mon, Nov 10, 2008 at 2:40 AM, Charles R Harris wrote:
> >> >
> >> > Let me see if I understand this correctly. For Python < 2.5 the list
> >> > indices and such are ints, while for later versions they are
> >> > Py_ssize_t, which is larger on 64 bit systems. Meanwhile, Py_intptr_t
> >> > is large enough to hold a pointer.
> >>
> >> yes
> >>
> >> > So why are these two numbers being mixed?
> >>
> >> It is not that they are being mixed, but that cython does not support
> >> this configuration: it has an internal check which raises an exception
> >> in such a case. See around line 55:
> >>
> >> http://hg.cython.org/cython/file/764f1578df40/Cython/Includes/numpy.pxd
> >>
> >> As I understand, this means you can't use cython for such a
> >> configuration, but I just wanted to confirm whether there were known
> >> workarounds.
> >
> > Lessee,
> >
> > cdef extern from "Python.h":
> >     ctypedef int Py_intptr_t
> >
> > cdef extern from "numpy/arrayobject.h":
> >     ctypedef Py_intptr_t npy_intp
> >
> > So they are screwing with the npy_intp type. They should hang.
>
> Chuck, you're being rude. This would be somewhat mitigated (at least
> from my perspective) if you knew what you were talking about. However,
> it appears that you don't.
>
> For integer types, the exact ctypedefs don't actually matter. The same
> code would probably be generated if you used "short", even. All these
> do is tell Cython to convert them to/from a Python int if they cross
> the Python boundary.
Crossing the boundary is one of the main reasons to use Cython, especially for converting arguments in calling C functions. Numpy has the same problem and, IIRC, gets around it by using PyObject_AsLong and casting the result when needed. It isn't perfect, but there you go. To get cython to do this, you can do:

    ctypedef long npy_intp

And then

    def hello(m):
        cdef npy_intp m_ = m
        ....

Does the right thing. But mixing python types and numpy types is not a good idea, they exist separately and apply to different software. This may be different for the buffer interface, which is likely to cross the boundary, but that is an argument for being very careful on how the buffer interface is dealt with in cython.

> > Numpy is numpy, Python is python, and never the two should meet. Note
> > that none of this crap is in the c_numpy.pxd included with numpy, BTW.
> > I'd send the cython folks a note and tell them to knock it off, the
> > Py_* values are irrelevant to numpy.
>
> And this is entirely irrelevant to the problem at hand.

Perhaps the immediate problem, but it is introducing an ugly dependency if they aren't kept separate.

> David, this exception is explicitly raised in __getbuffer__() defined
> in numpy.pxd. It is a Cython limitation. Cython's buffer support
> cannot be used with numpy in Python < 2.5 currently. Ask Dag about it.

Yes, David referenced that line in numpy.pxd.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dagss at student.matnat.uio.no Sun Nov 9 18:29:52 2008
From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn)
Date: Mon, 10 Nov 2008 00:29:52 +0100
Subject: [Numpy-discussion] numpy, Py_ssize_t, cython and 64 bits python 2.4
In-Reply-To: 
References: <4916897C.7090901@ar.media.kyoto-u.ac.jp> <5b8d13220811091044w13a07bfdhc6a0b1e207425e4e@mail.gmail.com>
Message-ID: <49177270.5020405@student.matnat.uio.no>

Charles R Harris wrote:
>
> On Sun, Nov 9, 2008 at 11:44 AM, David Cournapeau wrote:
>
>     On Mon, Nov 10, 2008 at 2:40 AM, Charles R Harris wrote:
>     >
>     > Let me see if I understand this correctly. For Python < 2.5 the
>     > list indices and such are ints, while for later versions they are
>     > Py_ssize_t, which is larger on 64 bit systems. Meanwhile,
>     > Py_intptr_t is large enough to hold a pointer.
>
>     yes
>
>     > So why are these two numbers being mixed?
>
>     It is not that they are being mixed, but that cython does not support
>     this configuration: it has an internal check which raises an exception
>     in such a case. See around line 55:
>
>     http://hg.cython.org/cython/file/764f1578df40/Cython/Includes/numpy.pxd
>
>     As I understand, this means you can't use cython for such a
>     configuration, but I just wanted to confirm whether there were known
>     workarounds.
>
> Lessee,
>
> cdef extern from "Python.h":
>     ctypedef int Py_intptr_t
>
> cdef extern from "numpy/arrayobject.h":
>     ctypedef Py_intptr_t npy_intp
>
> So they are screwing with the npy_intp type. They should hang. Numpy is
> numpy, Python is python, and never the two should meet. Note that none
> of this crap is in the c_numpy.pxd included with numpy, BTW. I'd send
> the cython folks a note and tell them to knock it off, the Py_* values
> are irrelevant to numpy.

I do not want to hang...

Robert is right, it could just as well say "ctypedef int npy_intp". Perhaps it should (but it would not fix the problem).
I didn't think too much about it, just copied the definition I found in the particular NumPy headers on my drive, knowing it wouldn't make a difference.

Some comments on the real problem:

What the Cython numpy.pxd file does is implement PEP 3118 [1], which is supported by Cython in all Python versions (i.e. backported, not following any standard). And, in Py_buffer, the strides and shapes are Py_ssize_t* (which is also backported as David mentions). So, in order to cast the shape and stride arrays and return them in the Py_buffer struct, they need to have the datatype defined by the backported PEP 3118, i.e. the backported Py_ssize_t, i.e. int.

At the time I didn't know whether this case ever arose in practice, so that's why this is not supported (I have limited knowledge about the C side of NumPy). The fix is easy:

a) Rather than raise an exception on line 56, one can instead create new arrays (using malloc) and copy the contents of shape and strides to arrays with elements of the right size.
b) These must then be freed in __releasebuffer__ (under the same conditions).
c) Also "info.obj" must then be set to "self". Note that it is set to None on line 91; that line should then be moved.

OTOH, one could also opt for changing how PEP 3118 is backported and say that "for Python 2.4 we say that Py_buffer has Py_intptr_t* fields instead". This would be more work to get exactly right, and would be more contrived as well, but is doable if one really wants to get rid of the extra mallocs.

I'd love it if someone else could take a look and submit a patch for either of these approaches, as I would have to set up the right combinations of libraries to test things properly etc. and just plain don't have the time right now.

[1] http://www.python.org/dev/peps/pep-3118/

-- 
Dag Sverre

From dagss at student.matnat.uio.no Sun Nov 9 18:37:43 2008
From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn)
Date: Mon, 10 Nov 2008 00:37:43 +0100
Subject: [Numpy-discussion] numpy, Py_ssize_t, cython and 64 bits python 2.4
In-Reply-To: 
References: <4916897C.7090901@ar.media.kyoto-u.ac.jp> <5b8d13220811091044w13a07bfdhc6a0b1e207425e4e@mail.gmail.com> <3d375d730811091317y6f26bef1ubfcb7c01b78c2959@mail.gmail.com>
Message-ID: <49177447.6030405@student.matnat.uio.no>

Charles R Harris wrote:

> But mixing python types and numpy types is not a good idea, they exist
> separately and apply to different software. This may be different for
> the buffer interface, which is likely to cross the boundary, but that is
> an argument for being very careful on how the buffer interface is dealt
> with in cython.
>

Which is why an exception is thrown in this mismatch circumstance -- if we weren't careful, the cast would happen, which would be very bad.

-- 
Dag Sverre

From charlesr.harris at gmail.com Sun Nov 9 18:59:34 2008
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 9 Nov 2008 16:59:34 -0700
Subject: [Numpy-discussion] numpy, Py_ssize_t, cython and 64 bits python 2.4
In-Reply-To: <49177447.6030405@student.matnat.uio.no>
References: <4916897C.7090901@ar.media.kyoto-u.ac.jp> <5b8d13220811091044w13a07bfdhc6a0b1e207425e4e@mail.gmail.com> <3d375d730811091317y6f26bef1ubfcb7c01b78c2959@mail.gmail.com> <49177447.6030405@student.matnat.uio.no>
Message-ID: 

On Sun, Nov 9, 2008 at 4:37 PM, Dag Sverre Seljebotn < dagss at student.matnat.uio.no> wrote:

> Charles R Harris wrote:
>
> > But mixing python types and numpy types is not a good idea, they exist
> > separately and apply to different software.
> > This may be different for the buffer interface, which is likely to
> > cross the boundary, but that is an argument for being very careful on
> > how the buffer interface is dealt with in cython.
>
> Which is why an exception is thrown in this mismatch circumstance -- if
> we weren't careful, the cast would happen, which would be very bad.

I think the arguments should be copied and explicitly cast. It's nasty, but crossing boundaries always is. There are some old compatibility functions in numpy that do the same thing.

Isn't this a bit conservative? It looks to me like long and double would be good bets for a lot of these. The boolean type, '?', is also missing.

    ctypedef signed int npy_byte
    ctypedef signed int npy_short
    ctypedef signed int npy_int
    ctypedef signed int npy_long
    ctypedef signed int npy_longlong

    ctypedef unsigned int npy_ubyte
    ctypedef unsigned int npy_ushort
    ctypedef unsigned int npy_uint
    ctypedef unsigned int npy_ulong
    ctypedef unsigned int npy_ulonglong

    ctypedef float npy_float
    ctypedef float npy_double
    ctypedef float npy_longdouble

    ctypedef signed int npy_int8
    ctypedef signed int npy_int16
    ctypedef signed int npy_int32
    ctypedef signed int npy_int64
    ctypedef signed int npy_int96
    ctypedef signed int npy_int128

    ctypedef unsigned int npy_uint8
    ctypedef unsigned int npy_uint16
    ctypedef unsigned int npy_uint32
    ctypedef unsigned int npy_uint64
    ctypedef unsigned int npy_uint96
    ctypedef unsigned int npy_uint128

    ctypedef float npy_float32
    ctypedef float npy_float64
    ctypedef float npy_float80
    ctypedef float npy_float96
    ctypedef float npy_float128

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From charlesr.harris at gmail.com Sun Nov 9 19:35:41 2008
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 9 Nov 2008 17:35:41 -0700
Subject: [Numpy-discussion] numpy, Py_ssize_t, cython and 64 bits python 2.4
In-Reply-To: <49177270.5020405@student.matnat.uio.no>
References: <4916897C.7090901@ar.media.kyoto-u.ac.jp> <5b8d13220811091044w13a07bfdhc6a0b1e207425e4e@mail.gmail.com> <49177270.5020405@student.matnat.uio.no>
Message-ID: 

On Sun, Nov 9, 2008 at 4:29 PM, Dag Sverre Seljebotn < dagss at student.matnat.uio.no> wrote:

> Charles R Harris wrote:
> >
> > On Sun, Nov 9, 2008 at 11:44 AM, David Cournapeau wrote:
> >
> >     On Mon, Nov 10, 2008 at 2:40 AM, Charles R Harris wrote:
> >     >
> >     > Let me see if I understand this correctly. For Python < 2.5 the
> >     > list indices and such are ints, while for later versions they are
> >     > Py_ssize_t, which is larger on 64 bit systems. Meanwhile,
> >     > Py_intptr_t is large enough to hold a pointer.
> >
> >     yes
> >
> >     > So why are these two numbers being mixed?
> >
> >     It is not that they are being mixed, but that cython does not
> >     support this configuration: it has an internal check which raises
> >     an exception in such a case. See around line 55:
> >
> >     http://hg.cython.org/cython/file/764f1578df40/Cython/Includes/numpy.pxd
> >
> >     As I understand, this means you can't use cython for such a
> >     configuration, but I just wanted to confirm whether there were known
> >     workarounds.
> >
> > Lessee,
> >
> > cdef extern from "Python.h":
> >     ctypedef int Py_intptr_t
> >
> > cdef extern from "numpy/arrayobject.h":
> >     ctypedef Py_intptr_t npy_intp
> >
> > So they are screwing with the npy_intp type. They should hang.
> > Numpy is numpy, Python is python, and never the two should meet. Note
> > that none of this crap is in the c_numpy.pxd included with numpy, BTW.
> > I'd send the cython folks a note and tell them to knock it off, the
> > Py_* values are irrelevant to numpy.
>
> I do not want to hang...
>
> Robert is right, it could just as well say "ctypedef int npy_intp".
> Perhaps it should (but it would not fix the problem). I didn't think too
> much about it, just copied the definition I found in the particular
> NumPy headers on my drive, knowing it wouldn't make a difference.
>
> Some comments on the real problem:
>
> What the Cython numpy.pxd file does is implement PEP 3118 [1], which
> is supported by Cython in all Python versions (i.e. backported, not
> following any standard). And, in Py_buffer, the strides and shapes are
> Py_ssize_t* (which is also backported as David mentions). So, in order
> to cast the shape and stride arrays and return them in the Py_buffer
> struct, they need to have the datatype defined by the backported PEP
> 3118, i.e. the backported Py_ssize_t, i.e. int.
>

So the backported version is pretty much a cython standard?

> At the time I didn't know whether this case ever arose in practice, so
> that's why this is not supported (I have limited knowledge about the C
> side of NumPy). The fix is easy:
>
> a) Rather than raise an exception on line 56, one can instead create new
> arrays (using malloc) and copy the contents of shape and strides to
> arrays with elements of the right size.
> b) These must then be freed in __releasebuffer__ (under the same
> conditions).
> c) Also "info.obj" must then be set to "self". Note that it is set to
> None on line 91; that line should then be moved.
>
> OTOH, one could also opt for changing how PEP 3118 is backported and say
> that "for Python 2.4 we say that Py_buffer has Py_intptr_t* fields
> instead". This would be more work to get exactly right, and would be
> more contrived as well, but is doable if one really wants to get rid of
> the extra mallocs.
>

This would be the direct way. The check could then be if sizeof(npy_intp) != sizeof(Py_intptr_t). That is more reasonable as they are supposed to serve the same purpose. If numpy is the only user of this interface that is the route I would go. Is there an official description of how PEP 3118 is to be backported? I don't know who else uses it at the moment.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From stefan at sun.ac.za Sun Nov 9 19:46:32 2008
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Mon, 10 Nov 2008 02:46:32 +0200
Subject: [Numpy-discussion] Generalised ufuncs
Message-ID: <9457e7c80811091646t39f76912n22e6925f01e44b85@mail.gmail.com>

Hi everyone,

I finally merged the generalised ufunc patches that awaited comment in the gen_ufuncs branch. Please review the changes on trunk, and let me know if problems occur.

One of the tests fails on FreeBSD 64:

ERROR: Failure: ImportError
(/tmp/numpy-buildbot/b12/numpy-install/lib/python2.4/site-packages/numpy/core/umath_tests.so: Undefined symbol "cblas_dgemm")

Is there any reason why blasdot should work fine, but that this should be broken?
Regards
Stéfan

From charlesr.harris at gmail.com Sun Nov 9 20:06:00 2008
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 9 Nov 2008 18:06:00 -0700
Subject: [Numpy-discussion] Generalised ufuncs
In-Reply-To: <9457e7c80811091646t39f76912n22e6925f01e44b85@mail.gmail.com>
References: <9457e7c80811091646t39f76912n22e6925f01e44b85@mail.gmail.com>
Message-ID: 

On Sun, Nov 9, 2008 at 5:46 PM, Stéfan van der Walt wrote:

> Hi everyone,
>
> I finally merged the generalised ufunc patches that awaited comment in
> the gen_ufuncs branch. Please review the changes on trunk, and let me
> know if problems occur.
>

PyUFunc_FromFuncAndDataAndSignature needs to be moved to the end of the list.

>
> One of the tests fails on FreeBSD 64:
>
> ERROR: Failure: ImportError
> (/tmp/numpy-buildbot/b12/numpy-install/lib/python2.4/site-packages/numpy/core/umath_tests.so:
> Undefined symbol "cblas_dgemm")
>
> Is there any reason why blasdot should work fine, but that this should
> be broken?
>

Hmm.....

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hagberg at lanl.gov Sun Nov 9 21:20:07 2008
From: hagberg at lanl.gov (Aric Hagberg)
Date: Sun, 9 Nov 2008 19:20:07 -0700
Subject: [Numpy-discussion] numpy Sphinx extensions, non-ascii encoding?
Message-ID: <20081110022007.GV1718@bigjim2.lanl.gov>

I like the numpy docstrings format and am exploring using Sphinx with the numpy extensions to generate documentation. So far it's really great. But I have some non-ascii (UTF-8) docstrings that break the autosummary directive code and I can't quite figure out how to fix it. Can someone give me some tips here? The problem seems to be in generating the summary table.

Sorry if this is the wrong forum for this...
Aric

From cournape at gmail.com Sun Nov 9 22:38:30 2008
From: cournape at gmail.com (David Cournapeau)
Date: Mon, 10 Nov 2008 12:38:30 +0900
Subject: [Numpy-discussion] numpy, Py_ssize_t, cython and 64 bits python 2.4
In-Reply-To: <49177270.5020405@student.matnat.uio.no>
References: <4916897C.7090901@ar.media.kyoto-u.ac.jp> <5b8d13220811091044w13a07bfdhc6a0b1e207425e4e@mail.gmail.com> <49177270.5020405@student.matnat.uio.no>
Message-ID: <5b8d13220811091938m59c07b4l49e9db36b3c19bb6@mail.gmail.com>

Hi Dag,

On Mon, Nov 10, 2008 at 8:29 AM, Dag Sverre Seljebotn wrote:
>
> Robert is right, it could just as well say "ctypedef int npy_intp".
> Perhaps it should (but it would not fix the problem). I didn't think too
> much about it, just copied the definition I found in the particular
> NumPy headers on my drive, knowing it wouldn't make a difference.
>
> Some comments on the real problem:
>
> What the Cython numpy.pxd file does is implement PEP 3118 [1], which
> is supported by Cython in all Python versions (i.e. backported, not
> following any standard). And, in Py_buffer, the strides and shapes are
> Py_ssize_t* (which is also backported as David mentions). So, in order
> to cast the shape and stride arrays and return them in the Py_buffer
> struct, they need to have the datatype defined by the backported PEP
> 3118, i.e. the backported Py_ssize_t, i.e. int.
>

Thanks for those explanations.

> a) Rather than raise an exception on line 56, one can instead create new
> arrays (using malloc) and copy the contents of shape and strides to
> arrays with elements of the right size.
> b) These must then be freed in __releasebuffer__ (under the same
> conditions).
> c) Also "info.obj" must then be set to "self".
> Note that it is set to None on line 91; that line should then be moved.

Ok, sounds easy enough.

> OTOH, one could also opt for changing how PEP 3118 is backported and say
> that "for Python 2.4 we say that Py_buffer has Py_intptr_t* fields
> instead". This would be more work to get exactly right, and would be
> more contrived as well, but is doable if one really wants to get rid of
> the extra mallocs.

This sounds like the right solution, but OTOH, since I am not familiar with the cython codebase at all, I will prepare a patch following the first solution, and I guess you can always improve it later.

thanks,

David

From charlesr.harris at gmail.com Sun Nov 9 23:23:28 2008
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 9 Nov 2008 21:23:28 -0700
Subject: [Numpy-discussion] numpy, Py_ssize_t, cython and 64 bits python 2.4
In-Reply-To: <5b8d13220811091938m59c07b4l49e9db36b3c19bb6@mail.gmail.com>
References: <4916897C.7090901@ar.media.kyoto-u.ac.jp> <5b8d13220811091044w13a07bfdhc6a0b1e207425e4e@mail.gmail.com> <49177270.5020405@student.matnat.uio.no> <5b8d13220811091938m59c07b4l49e9db36b3c19bb6@mail.gmail.com>
Message-ID: 

On Sun, Nov 9, 2008 at 8:38 PM, David Cournapeau wrote:

> Hi Dag,
>
> On Mon, Nov 10, 2008 at 8:29 AM, Dag Sverre Seljebotn wrote:
> >
> > Robert is right, it could just as well say "ctypedef int npy_intp".
> > Perhaps it should (but it would not fix the problem). I didn't think too
> > much about it, just copied the definition I found in the particular
> > NumPy headers on my drive, knowing it wouldn't make a difference.
> >
> > Some comments on the real problem:
> >
> > What the Cython numpy.pxd file does is implement PEP 3118 [1], which
> > is supported by Cython in all Python versions (i.e. backported, not
> > following any standard). And, in Py_buffer, the strides and shapes are
> > Py_ssize_t* (which is also backported as David mentions). So, in order
> > to cast the shape and stride arrays and return them in the Py_buffer
> > struct, they need to have the datatype defined by the backported PEP
> > 3118, i.e. the backported Py_ssize_t, i.e. int.
> >
>
> Thanks for those explanations.
>
> > a) Rather than raise an exception on line 56, one can instead create new
> > arrays (using malloc) and copy the contents of shape and strides to
> > arrays with elements of the right size.
> > b) These must then be freed in __releasebuffer__ (under the same
> > conditions).
> > c) Also "info.obj" must then be set to "self". Note that it is set to
> > None on line 91; that line should then be moved.
>
> Ok, sounds easy enough.
>
> > OTOH, one could also opt for changing how PEP 3118 is backported and say
> > that "for Python 2.4 we say that Py_buffer has Py_intptr_t* fields
> > instead". This would be more work to get exactly right, and would be
> > more contrived as well, but is doable if one really wants to get rid of
> > the extra mallocs.
>
> This sounds like the right solution, but OTOH, since I am not familiar
> with the cython codebase at all, I will prepare a patch following the
> first solution, and I guess you can always improve it later.
>

I think this is the way to go. However, I think the relevant variables in Py_buffer (there are three) should have their own type that is set earlier when Cython does all its Python version voodoo. This makes it easy to add various exceptions if needed. I can't seem to find that spot, though.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
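To make the copy-based fix concrete, here is a toy Cython sketch of the pattern described above (illustrative only, not the actual numpy.pxd code; the class and field layout are invented): shape and strides are copied into a malloc'ed Py_ssize_t array in __getbuffer__, info.obj is set to self so that __releasebuffer__ is guaranteed to run, and the copy is freed there.

    cdef extern from "stdlib.h":
        void* malloc(size_t)
        void free(void*)

    cdef class ToyArray:
        # Pretend 2x3 C-contiguous array; long stands in for npy_intp.
        cdef double data[6]
        cdef long shape[2]
        cdef long strides[2]

        def __init__(self):
            self.shape[0] = 2; self.shape[1] = 3
            self.strides[0] = 3 * sizeof(double)
            self.strides[1] = sizeof(double)

        def __getbuffer__(self, Py_buffer* info, int flags):
            # Copy into Py_ssize_t storage instead of casting the pointers,
            # so a size mismatch cannot corrupt the values.
            cdef Py_ssize_t* tmp = <Py_ssize_t*>malloc(4 * sizeof(Py_ssize_t))
            cdef int i
            for i in range(2):
                tmp[i] = self.shape[i]
                tmp[2 + i] = self.strides[i]
            info.buf = <void*>&self.data[0]
            info.ndim = 2
            info.shape = tmp
            info.strides = tmp + 2
            info.suboffsets = NULL
            info.format = NULL        # format string omitted for brevity
            info.itemsize = sizeof(double)
            info.len = 6 * sizeof(double)
            info.readonly = 0
            info.obj = self           # guarantees __releasebuffer__ runs

        def __releasebuffer__(self, Py_buffer* info):
            free(info.shape)          # one allocation covers both copies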
From cournape at gmail.com Mon Nov 10 00:15:37 2008
From: cournape at gmail.com (David Cournapeau)
Date: Mon, 10 Nov 2008 14:15:37 +0900
Subject: [Numpy-discussion] Generalised ufuncs
In-Reply-To: <9457e7c80811091646t39f76912n22e6925f01e44b85@mail.gmail.com>
References: <9457e7c80811091646t39f76912n22e6925f01e44b85@mail.gmail.com>
Message-ID: <5b8d13220811092115y797e71cdxacc31644d2213ea0@mail.gmail.com>

On Mon, Nov 10, 2008 at 9:46 AM, Stéfan van der Walt wrote:

> Hi everyone,
>
> I finally merged the generalised ufunc patches that awaited comment in
> the gen_ufuncs branch. Please review the changes on trunk, and let me
> know if problems occur.
>
> One of the tests fails on FreeBSD 64:
>
> ERROR: Failure: ImportError
> (/tmp/numpy-buildbot/b12/numpy-install/lib/python2.4/site-packages/numpy/core/umath_tests.so:
> Undefined symbol "cblas_dgemm")
>
> Is there any reason why blasdot should work fine, but that this should
> be broken?

The obvious question is what is umath_tests.so :) We don't generate this file AFAIK, and I certainly cannot see it mentioned anywhere in numpy source code (nor in my Mac OS X or Linux installation). FreeBSD would also require some minor modifications to numpy.distutils, I believe.

I have just noticed a bug which may be related in numpy.distutils on a fresh ubuntu install, where numpy uses the C compiler to link atlas when no fortran compiler is available (it should fail instead, since we use the fortran interface in that case, and that cannot work without some fortran support, obviously).

David

From tjhnson at gmail.com Mon Nov 10 01:29:20 2008
From: tjhnson at gmail.com (T J)
Date: Sun, 9 Nov 2008 22:29:20 -0800
Subject: [Numpy-discussion] New ufuncs
In-Reply-To: 
References: <9457e7c80811051341r5ba8b327id881841f196b67ac@mail.gmail.com>
Message-ID: 

On Thu, Nov 6, 2008 at 3:01 PM, T J wrote:
> On Thu, Nov 6, 2008 at 2:36 PM, Charles R Harris wrote:
>> I could add exp2, log2, and logaddexp2 pretty easily. Almost too easily,
>> I don't want to clutter up numpy with a lot of functions. However, if
>> there is a community for these functions I will put them in.
>>
>
> I worry about clutter as well. Note that scipy provides log2 and exp2
> already (scipy.special). So I think only logaddexp2 would be needed
> and (eventually) logdotexp2. Maybe scipy.special is a better place
> than in numpy? Then perhaps the clutter could be avoided....though
> I'm probably not the best one to ask for advice on this. I will
> definitely use the functions and I suspect many others will as
> well---where ever they are placed.

Since no one commented further on this, can we go ahead and add logaddexp2? Once in svn, we can always deal with 'location' later---I just don't want it to get forgotten.

From nwagner at iam.uni-stuttgart.de Mon Nov 10 02:42:16 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Mon, 10 Nov 2008 08:42:16 +0100
Subject: [Numpy-discussion] numpy-docs and sphinx
Message-ID: 

Hi all,

I tried to build the NumPy Reference Guide.

svn/numpy-docs > make html
mkdir -p build
./ext/autosummary_generate.py source/reference/*.rst \
    -p dump.xml -o source/reference/generated
Traceback (most recent call last):
  File "./ext/autosummary_generate.py", line 18, in ?
    from autosummary import import_by_name
  File "/data/home/nwagner/svn/numpy-docs/ext/autosummary.py", line 59, in ?
    import sphinx.addnodes, sphinx.roles, sphinx.builder
  File "/data/home/nwagner/local/lib/python2.5/site-packages/Sphinx-0.5dev_20081110-py2.5.egg/sphinx/__init__.py", line 70
    '-c' not in (opt[0] for opt in opts):
       ^
SyntaxError: invalid syntax
make: *** [build/generate-stamp] Fehler 1

How can I fix that problem?

Nils

From matthieu.brucher at gmail.com Mon Nov 10 03:18:48 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Mon, 10 Nov 2008 09:18:48 +0100
Subject: [Numpy-discussion] site.cfg and libraries in several folders
Message-ID: 

Hi,

I'm still trying to get the MKL to work with Numpy, but I've tried the latest MKL (10.1) and ran into a problem. With the MKL, I have to link against mkl, guide and iomp5. The problem is that the last two libraries are not in the MKL anymore, but only in the compiler folder. I have thus two folders to put in library_dirs. My issue is that python setup.py config searches for mkl, guide and iomp5 in only one folder at a time, when it should look into the two folders in library_dirs. Is there something I'm missing?

Matthieu
-- 
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher

From david at ar.media.kyoto-u.ac.jp Mon Nov 10 04:15:25 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Mon, 10 Nov 2008 18:15:25 +0900
Subject: [Numpy-discussion] site.cfg and libraries in several folders
In-Reply-To: 
References: 
Message-ID: <4917FBAD.4070407@ar.media.kyoto-u.ac.jp>

Matthieu Brucher wrote:
> Hi,
>
> I'm still trying to get the MKL to work with Numpy, but I've tried the
> latest MKL (10.1) and ran into a problem.
> With the MKL, I have to link against mkl, guide and iomp5. The problem
> is that the last two libraries are not in the MKL anymore, but only in
> the compiler folder. I have thus two folders to put in library_dirs.
> My issue is that python setup.py config searches for mkl, guide and
> iomp5 in only one folder at a time, when it should look into the two
> folders in library_dirs.
>

You could make some symlinks into one fake dir which you would use only for numpy, as a workaround (if you are on unix, that is).

David

From matthieu.brucher at gmail.com Mon Nov 10 04:35:46 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Mon, 10 Nov 2008 10:35:46 +0100
Subject: [Numpy-discussion] site.cfg and libraries in several folders
In-Reply-To: <4917FBAD.4070407@ar.media.kyoto-u.ac.jp>
References: <4917FBAD.4070407@ar.media.kyoto-u.ac.jp>
Message-ID: 

2008/11/10 David Cournapeau :
> Matthieu Brucher wrote:
>> Hi,
>>
>> I'm still trying to get the MKL to work with Numpy, but I've tried the
>> latest MKL (10.1) and ran into a problem.
>> With the MKL, I have to link against mkl, guide and iomp5. The problem
>> is that the last two libraries are not in the MKL anymore, but only in
>> the compiler folder. I have thus two folders to put in library_dirs.
>> My issue is that python setup.py config searches for mkl, guide and
>> iomp5 in only one folder at a time, when it should look into the two
>> folders in library_dirs.
>>
>
> You could make some symlinks into one fake dir which you would use only
> for numpy, as a workaround (if you are on unix, that is).

Yes, that is what I did in the MKL folder ;) But I suppose that you should be able to tell numpy that the libraries are in separate folders, shouldn't you?

Matthieu
-- 
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher
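For anyone scripting the symlink workaround, a rough sketch in Python (the Intel directory names below are invented; substitute the real MKL and compiler library paths):

    import os

    # Hypothetical locations of libmkl*, libguide and libiomp5.
    src_dirs = ['/opt/intel/mkl/10.1/lib/em64t',
                '/opt/intel/cc/lib/intel64']
    dest = os.path.expanduser('~/mkl-for-numpy')
    if not os.path.isdir(dest):
        os.makedirs(dest)
    for d in src_dirs:
        for lib in os.listdir(d):
            # Pick up only the libraries the MKL link line needs.
            if lib.startswith('libmkl') or lib.startswith('libguide') \
                   or lib.startswith('libiomp5'):
                link = os.path.join(dest, lib)
                if not os.path.exists(link):
                    os.symlink(os.path.join(d, lib), link)

site.cfg can then point library_dirs at the single ~/mkl-for-numpy directory.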
From david at ar.media.kyoto-u.ac.jp Mon Nov 10 04:31:51 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Mon, 10 Nov 2008 18:31:51 +0900
Subject: [Numpy-discussion] site.cfg and libraries in several folders
In-Reply-To: 
References: <4917FBAD.4070407@ar.media.kyoto-u.ac.jp>
Message-ID: <4917FF87.50109@ar.media.kyoto-u.ac.jp>

Matthieu Brucher wrote:
>
> Yes, that is what I did in the MKL folder ;) But I suppose that you
> should be able to tell numpy that the libraries are in separate
> folders, shouldn't you?

Yes, you could :) I have little interest in working on numpy.distutils, but feel free if you want to add this feature,

David

From dagss at student.matnat.uio.no Mon Nov 10 05:22:47 2008
From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn)
Date: Mon, 10 Nov 2008 11:22:47 +0100
Subject: [Numpy-discussion] numpy, Py_ssize_t, cython and 64 bits python 2.4
In-Reply-To: 
References: <4916897C.7090901@ar.media.kyoto-u.ac.jp> <5b8d13220811091044w13a07bfdhc6a0b1e207425e4e@mail.gmail.com> <3d375d730811091317y6f26bef1ubfcb7c01b78c2959@mail.gmail.com> <49177447.6030405@student.matnat.uio.no>
Message-ID: <49180B77.2020109@student.matnat.uio.no>

Charles R Harris wrote:
>
> On Sun, Nov 9, 2008 at 4:37 PM, Dag Sverre Seljebotn wrote:
>
>     Charles R Harris wrote:
>
>     > But mixing python types and numpy types is not a good idea, they
>     > exist separately and apply to different software. This may be
>     > different for the buffer interface, which is likely to cross the
>     > boundary, but that is an argument for being very careful on how
>     > the buffer interface is dealt with in cython.
>     >
>
>     Which is why an exception is thrown in this mismatch circumstance --
>     if we weren't careful, the cast would happen, which would be very bad.
>
> I think the arguments should be copied and explicitly cast. It's
> nasty, but crossing boundaries always is. There are some old
> compatibility functions in numpy that do the same thing.
>
> Isn't this a bit conservative? It looks to me like long and double
> would be good bets for a lot of these. The boolean type, '?', is also
> missing.
>
> ctypedef signed int npy_byte
> ctypedef signed int npy_short
> ctypedef signed int npy_int

As Robert mentioned, Cython is in many ways type size agnostic -- it simply needs to know that npy_longlong is a signed int for the purposes of coercion etc, but it does not need to know the size. The whole point of this is to as far as possible delegate this to the C compiler. By the time one has to be "conservative" or "optimistic" rather than exact one has already failed in this area IMO.

(This is because the ctypedefs are "external", so Cython just uses the name directly for the C code (i.e. "npy_byte" is inserted as a string in the C source). The syntax is unfortunately a bit confusing though; suggestions for improvements welcome.)

The one area you do get problems with this is if you try to assign from an npy_byte* to an npy_short*, which Cython will allow. But the C compiler will still raise an error in that case and so it is only an issue of user-friendliness (though it should probably be fixed somehow, perhaps by making all external typedefs size-incompatible from a Cython perspective and require casts, but that must be run by the Cython community).
This is done so that one doesn't have to duplicate all the #ifdef-definitions (for which there wouldn't even be any Cython mechanism, though we could add one).

Dag Sverre

From dagss at student.matnat.uio.no Mon Nov 10 05:32:49 2008
From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn)
Date: Mon, 10 Nov 2008 11:32:49 +0100
Subject: [Numpy-discussion] numpy, Py_ssize_t, cython and 64 bits python 2.4
In-Reply-To: 
References: <4916897C.7090901@ar.media.kyoto-u.ac.jp> <5b8d13220811091044w13a07bfdhc6a0b1e207425e4e@mail.gmail.com> <49177270.5020405@student.matnat.uio.no>
Message-ID: <49180DD1.7070208@student.matnat.uio.no>

Charles R Harris wrote:
>
> On Sun, Nov 9, 2008 at 4:29 PM, Dag Sverre Seljebotn wrote:
>
>     What the Cython numpy.pxd file does is implement PEP 3118 [1], which
>     is supported by Cython in all Python versions (i.e. backported, not
>     following any standard). And, in Py_buffer, the strides and shapes
>     are Py_ssize_t* (which is also backported as David mentions). So, in
>     order to cast the shape and stride arrays and return them in the
>     Py_buffer struct, they need to have the datatype defined by the
>     backported PEP 3118, i.e. the backported Py_ssize_t, i.e. int.
>
> So the backported version is pretty much a cython standard?

Yes. The whole thing is described in http://wiki.cython.org/enhancements/buffer , under the section "Buffer acquisition and Python versions". Basically the PEP is followed for Python 2.6+, then some tricks are done to make a similar interface work for older Python versions.

Note especially that if you NumPy people set Py_TPFLAGS_HAVE_NEWBUFFER and provide a bf_getbuffer implementation following the exact (non-backported) PEP, the __getbuffer__ in numpy.pxd will not come into play on Python 2.6+.

>     OTOH, one could also opt for changing how PEP 3118 is backported
>     and say that "for Python 2.4 we say that Py_buffer has Py_intptr_t*
>     fields instead". This would be more work to get exactly right, and
>     would be more contrived as well, but is doable if one really wants
>     to get rid of the extra mallocs.
>
> This would be the direct way. The check could then be if
> sizeof(npy_intp) != sizeof(Py_intptr_t). That is more reasonable as
> they are supposed to serve the same purpose. If numpy is the only user
> of this interface that is the route I would go. Is there an official
> description of how PEP 3118 is to be backported? I don't know who else
> uses it at the moment.

NumPy is currently the only user that I know of, and is likely to remain so I think, so making this change is ok.

The backport is very Cython-specific because of the Cython-specific calling convention (essentially, objects lack the needed slot in earlier Python versions, and we do not want to pay the price of a dict lookup) and is about as official as things get with Cython (i.e. a wikipage).

Dag Sverre
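As an aside, on Python 2.6+ it is possible to check from pure Python whether a type advertises the new buffer protocol. This sketch assumes the flag value from CPython 2.6's object.h, where Py_TPFLAGS_HAVE_NEWBUFFER is defined as 1 << 21:

    import numpy as np

    Py_TPFLAGS_HAVE_NEWBUFFER = 1 << 21  # from CPython 2.6 object.h
    print bool(np.ndarray.__flags__ & Py_TPFLAGS_HAVE_NEWBUFFER)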
From cournape at gmail.com Mon Nov 10 05:38:17 2008
From: cournape at gmail.com (David Cournapeau)
Date: Mon, 10 Nov 2008 19:38:17 +0900
Subject: [Numpy-discussion] numpy, Py_ssize_t, cython and 64 bits python 2.4
In-Reply-To: <5b8d13220811091938m59c07b4l49e9db36b3c19bb6@mail.gmail.com>
References: <4916897C.7090901@ar.media.kyoto-u.ac.jp> <5b8d13220811091044w13a07bfdhc6a0b1e207425e4e@mail.gmail.com> <49177270.5020405@student.matnat.uio.no> <5b8d13220811091938m59c07b4l49e9db36b3c19bb6@mail.gmail.com>
Message-ID: <5b8d13220811100238v1fb48d6eq68a0c7954f3ced8f@mail.gmail.com>

On Mon, Nov 10, 2008 at 12:38 PM, David Cournapeau wrote:
>
> This sounds like the right solution, but OTOH, since I am not familiar
> with the cython codebase at all, I will prepare a patch following the
> first solution, and I guess you can always improve it later.

Ok, I made a patch which makes strides/shape Py_ssize_t arrays instead of npy_intp, and creates temporary buffers when sizeof(Py_ssize_t) != sizeof(npy_intp). I have not tested it thoroughly, but it made the tests which failed work on python 2.4 on RHEL 64 bits. Can I have an account to put the mercurial bundle somewhere?

cheers,

David

From simon.palmer at gmail.com Mon Nov 10 12:25:31 2008
From: simon.palmer at gmail.com (Simon Palmer)
Date: Mon, 10 Nov 2008 17:25:31 +0000
Subject: [Numpy-discussion] numpy array serialization with JSON
Message-ID: 

Does anyone have a recommendation of a library/method for serialization of numpy arrays to and from text (specifically for the purposes of embedding in XML)? I don't want to use pickle or tostring() because my XML has to be consumable across a variety of programming environments. I'm currently using simplejson for other array types (list) but it does not handle numpy arrays. I could go in and out via a list, but that feels a bit resource intensive and wasteful just for serialization.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From robert.kern at gmail.com Mon Nov 10 12:36:34 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 10 Nov 2008 11:36:34 -0600
Subject: [Numpy-discussion] numpy array serialization with JSON
In-Reply-To: 
References: 
Message-ID: <3d375d730811100936r2508584oa9848e377f5e5125@mail.gmail.com>

On Mon, Nov 10, 2008 at 11:25, Simon Palmer wrote:
> Does anyone have a recommendation of a library/method for serialization
> of numpy arrays to and from text (specifically for the purposes of
> embedding in XML)? I don't want to use pickle or tostring() because my
> XML has to be consumable across a variety of programming environments.

If you only need to support typical dtypes, just use tostring() (and base64 encoding) and a little bit of XML to provide the shape and dtype information.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
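A minimal sketch of that suggestion (the element and attribute names here are invented, not any standard): keep dtype and shape as XML attributes and the raw bytes as base64 text, which any language with a base64 decoder can reconstruct. dtype.str carries byte order, kind and item size, e.g. '<f8'.

    import base64
    import numpy as np

    def array_to_xml(a):
        data = base64.b64encode(a.tostring())
        shape = ','.join(str(n) for n in a.shape)
        return '<array dtype="%s" shape="%s">%s</array>' % (a.dtype.str, shape, data)

    def xml_to_array(dtype, shape, text):
        a = np.fromstring(base64.b64decode(text), dtype=np.dtype(str(dtype)))
        return a.reshape([int(n) for n in shape.split(',')])

Round-tripping through base64 keeps the XML ASCII-safe at roughly a 4/3 size overhead.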
> from autosummary import import_by_name > File > "/data/home/nwagner/svn/numpy-docs/ext/autosummary.py", line 59, in ? > import sphinx.addnodes, sphinx.roles, sphinx.builder > File > "/data/home/nwagner/local/lib/python2.5/site-packages/ Sphinx-0.5dev_20081110-py2.5.egg/sphinx/__init__.py", > line 70 > '-c' not in (opt[0] for opt in opts): > ^ > SyntaxError: invalid syntax > make: *** [build/generate-stamp] Fehler 1 > > How can I fix that problem ? I don't get this error. But I'll try to guess -- the native Python version installed on your machine is 2.3, which does not support generator expressions, which causes a syntax error. I believe you set PYTHONPATH manually, so Python 2.3 programs will try to look modules in your local/ 2.5 site-packages. autosummary_generate.py above probably runs under Python 2.3 -- you should be able to fix this by changing "python" to "python2.5" on the first line of autosummary_generate.py, or editing the makefile so that python2.5 is explicitly used. -- Pauli Virtanen From simon.palmer at gmail.com Mon Nov 10 12:49:53 2008 From: simon.palmer at gmail.com (Simon Palmer) Date: Mon, 10 Nov 2008 17:49:53 +0000 Subject: [Numpy-discussion] numpy array serialization with JSON In-Reply-To: <3d375d730811100936r2508584oa9848e377f5e5125@mail.gmail.com> References: <3d375d730811100936r2508584oa9848e377f5e5125@mail.gmail.com> Message-ID: What, if any, header information from numarray gets put in the bytes by tostring(), especially as I have n dimensions? I am very likely to be deserializing into a Java Array object (or maybe a double[]) and it is not clear to me how I would do that from the bytes in the tostring() representation. On Mon, Nov 10, 2008 at 5:36 PM, Robert Kern wrote: > On Mon, Nov 10, 2008 at 11:25, Simon Palmer > wrote: > > Does anyone have a recommendation of a library/method for serialization > of > > numpy arrays to and from text (specifically for the purposes of embedding > in > > XML)? I don't want to use pickle or tostring() because my XML has to be > > consumable across a variety of programming environments. > > If you only need to support typical dtypes, just use tostring() (and > base64 encoding) and a little bit of XML to provide the shape and > dtype information. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Chris.Barker at noaa.gov Mon Nov 10 12:58:48 2008 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Mon, 10 Nov 2008 09:58:48 -0800 Subject: [Numpy-discussion] numpy array serialization with JSON In-Reply-To: References: <3d375d730811100936r2508584oa9848e377f5e5125@mail.gmail.com> Message-ID: <49187658.9050609@noaa.gov> Simon Palmer wrote: > What, if any, header information from numarray gets put in the bytes by > tostring(), especially as I have n dimensions? none, you'd have to do that separately. > I am very likely to be deserializing into a Java Array object (or maybe > a double[]) and it is not clear to me how I would do that from the bytes > in the tostring() representation. If you're thinking JSON, then I think you'd want text, not binary. Maybe you can make use of the repr()? 
It wouldn't be hard to just right your own text renderer from scratch, unless you need C speed. Python being python, I'd tend to see what your Java (or Javascript) code can easily consume, then write the python code to generate that. Does JSON have a representation for n-d arrays? In my little work with it, it looked pretty lame for arrays of number, so I'd be surprised. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From matthieu.brucher at gmail.com Mon Nov 10 13:06:06 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 10 Nov 2008 19:06:06 +0100 Subject: [Numpy-discussion] numpy array serialization with JSON In-Reply-To: <49187658.9050609@noaa.gov> References: <3d375d730811100936r2508584oa9848e377f5e5125@mail.gmail.com> <49187658.9050609@noaa.gov> Message-ID: > If you're thinking JSON, then I think you'd want text, not binary. Maybe > you can make use of the repr()? Last time I checked, repr() does the same thing as str(): the middle of the array may not be displayed... Matthieu -- Information System Engineer, Ph.D. Website: http://matthieu-brucher.developpez.com/ Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn: http://www.linkedin.com/in/matthieubrucher From Chris.Barker at noaa.gov Mon Nov 10 13:18:21 2008 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Mon, 10 Nov 2008 10:18:21 -0800 Subject: [Numpy-discussion] numpy array serialization with JSON In-Reply-To: References: <3d375d730811100936r2508584oa9848e377f5e5125@mail.gmail.com> <49187658.9050609@noaa.gov> Message-ID: <49187AED.4050206@noaa.gov> Matthieu Brucher wrote: > Last time I checked, repr() does the same thing as str(): the middle > of the array may not be displayed... right. darn -- is that controllable? It also breaks the axum: eval(repr(x)) == x but I guess with big arrays, this in one of those times that: "Practicality beats Purity" is there a pretty print option that could be used? I can't help thinking there must be an array=>text function somewhere in numpy already. If not, there probably should be! -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From robert.kern at gmail.com Mon Nov 10 14:14:51 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 10 Nov 2008 13:14:51 -0600 Subject: [Numpy-discussion] numpy array serialization with JSON In-Reply-To: <49187AED.4050206@noaa.gov> References: <3d375d730811100936r2508584oa9848e377f5e5125@mail.gmail.com> <49187658.9050609@noaa.gov> <49187AED.4050206@noaa.gov> Message-ID: <3d375d730811101114o2e6e1ff1nac1bdba50798a54d@mail.gmail.com> On Mon, Nov 10, 2008 at 12:18, Christopher Barker wrote: > Matthieu Brucher wrote: >> Last time I checked, repr() does the same thing as str(): the middle >> of the array may not be displayed... > > right. darn -- is that controllable? set_printoptions() > It also breaks the axum: > > eval(repr(x)) == x > > but I guess with big arrays, this in one of those times that: > > "Practicality beats Purity" Yup. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From oliphant at enthought.com Mon Nov 10 14:15:39 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Mon, 10 Nov 2008 13:15:39 -0600 Subject: [Numpy-discussion] numpy array serialization with JSON In-Reply-To: <49187AED.4050206@noaa.gov> References: <3d375d730811100936r2508584oa9848e377f5e5125@mail.gmail.com> <49187658.9050609@noaa.gov> <49187AED.4050206@noaa.gov> Message-ID: <4918885B.1000601@enthought.com> Christopher Barker wrote: > Matthieu Brucher wrote: > >> Last time I checked, repr() does the same thing as str(): the middle >> of the array may not be displayed... >> > > right. darn -- is that controllable? > numpy.set_printoptions(threshold=100000000000) -Travis From Chris.Barker at noaa.gov Mon Nov 10 14:33:42 2008 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Mon, 10 Nov 2008 11:33:42 -0800 Subject: [Numpy-discussion] numpy array serialization with JSON In-Reply-To: <4918885B.1000601@enthought.com> References: <3d375d730811100936r2508584oa9848e377f5e5125@mail.gmail.com> <49187658.9050609@noaa.gov> <49187AED.4050206@noaa.gov> <4918885B.1000601@enthought.com> Message-ID: <49188C96.9050402@noaa.gov> Travis E. Oliphant wrote: > numpy.set_printoptions(threshold=100000000000) just to be clear, will: numpy.set_printoptions(threshold=None) restore the default? -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From charlesr.harris at gmail.com Mon Nov 10 14:52:27 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 10 Nov 2008 12:52:27 -0700 Subject: [Numpy-discussion] New ufuncs In-Reply-To: References: <9457e7c80811051341r5ba8b327id881841f196b67ac@mail.gmail.com> Message-ID: On Sun, Nov 9, 2008 at 11:29 PM, T J wrote: > On Thu, Nov 6, 2008 at 3:01 PM, T J wrote: > > On Thu, Nov 6, 2008 at 2:36 PM, Charles R Harris > > wrote: > >> I could add exp2, log2, and logaddexp2 pretty easily. Almost too easily, > I > >> don't want to clutter up numpy with a lot of functions. However, if > there is > >> a community for these functions I will put them in. > >> > > > > I worry about clutter as well. Note that scipy provides log2 and exp2 > > already (scipy.special). So I think only logaddexp2 would be needed > > and (eventually) logdotexp2. Maybe scipy.special is a better place > > than in numpy? Then perhaps the clutter could be avoided....though > > I'm probably not the best one to ask for advice on this. I will > > definitely use the functions and I suspect many others will as > > well---where ever they are placed. > > Since no one commented further on this, can we go ahead and add > logaddexp2? Once in svn, we can always deal with 'location' later---I > just don't want it to get forgotten. > __ > The functions exp2 and log2 are part of the C99 standard, so I'll add those two along with log21p, exp21m, and logaddexp2. The names log21p and exp21p look a bit creepy so I'm open to suggestions. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant at enthought.com Mon Nov 10 15:17:54 2008 From: oliphant at enthought.com (Travis E. 
From oliphant at enthought.com Mon Nov 10 15:17:54 2008
From: oliphant at enthought.com (Travis E. Oliphant)
Date: Mon, 10 Nov 2008 14:17:54 -0600
Subject: [Numpy-discussion] New ufuncs
In-Reply-To: 
References: <9457e7c80811051341r5ba8b327id881841f196b67ac@mail.gmail.com>
Message-ID: <491896F2.5050403@enthought.com>

Charles R Harris wrote:
>
> On Sun, Nov 9, 2008 at 11:29 PM, T J wrote:
>
>     On Thu, Nov 6, 2008 at 3:01 PM, T J wrote:
>     > On Thu, Nov 6, 2008 at 2:36 PM, Charles R Harris wrote:
>     >> I could add exp2, log2, and logaddexp2 pretty easily. Almost too
>     >> easily, I don't want to clutter up numpy with a lot of functions.
>     >> However, if there is a community for these functions I will put
>     >> them in.
>     >>
>     >
>     > I worry about clutter as well. Note that scipy provides log2 and
>     > exp2 already (scipy.special). So I think only logaddexp2 would be
>     > needed and (eventually) logdotexp2. Maybe scipy.special is a
>     > better place than in numpy? Then perhaps the clutter could be
>     > avoided....though I'm probably not the best one to ask for advice
>     > on this. I will definitely use the functions and I suspect many
>     > others will as well---where ever they are placed.
>
>     Since no one commented further on this, can we go ahead and add
>     logaddexp2? Once in svn, we can always deal with 'location'
>     later---I just don't want it to get forgotten.
>
> The functions exp2 and log2 are part of the C99 standard, so I'll add
> those two along with log21p, exp21m, and logaddexp2. The names log21p
> and exp21m look a bit creepy so I'm open to suggestions.

I think the C99 standard is a good place to draw the line. We can put other ufuncs in scipy.special

-Travis

From charlesr.harris at gmail.com Mon Nov 10 19:05:03 2008
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Mon, 10 Nov 2008 17:05:03 -0700
Subject: [Numpy-discussion] New ufuncs
In-Reply-To: <491896F2.5050403@enthought.com>
References: <491896F2.5050403@enthought.com>
Message-ID: 

On Mon, Nov 10, 2008 at 1:17 PM, Travis E. Oliphant wrote:

> Charles R Harris wrote:
> >
> > On Sun, Nov 9, 2008 at 11:29 PM, T J wrote:
> >
> >     On Thu, Nov 6, 2008 at 3:01 PM, T J wrote:
> >     > On Thu, Nov 6, 2008 at 2:36 PM, Charles R Harris wrote:
> >     >> I could add exp2, log2, and logaddexp2 pretty easily. Almost too
> >     >> easily, I don't want to clutter up numpy with a lot of
> >     >> functions. However, if there is a community for these functions
> >     >> I will put them in.
> >     >>
> >     >
> >     > I worry about clutter as well. Note that scipy provides log2 and
> >     > exp2 already (scipy.special). So I think only logaddexp2 would
> >     > be needed and (eventually) logdotexp2. Maybe scipy.special is a
> >     > better place than in numpy? Then perhaps the clutter could be
> >     > avoided....though I'm probably not the best one to ask for
> >     > advice on this. I will definitely use the functions and I
> >     > suspect many others will as well---where ever they are placed.
> >
> >     Since no one commented further on this, can we go ahead and add
> >     logaddexp2? Once in svn, we can always deal with 'location'
> >     later---I just don't want it to get forgotten.
> >
> > The functions exp2 and log2 are part of the C99 standard, so I'll add
> > those two along with log21p, exp21m, and logaddexp2. The names log21p
> > and exp21m look a bit creepy so I'm open to suggestions.
>
> I think the C99 standard is a good place to draw the line.
>
> We can put other ufuncs in scipy.special
>

I added log2 and exp2. I still need to do the complex versions. I think logaddexp2 should go in also to complement these.
Note that MPL also defines log2 and their version has slightly different properties, i.e., it returns integer values for integer powers of two. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Mon Nov 10 19:13:59 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 10 Nov 2008 17:13:59 -0700 Subject: [Numpy-discussion] Generalised ufuncs In-Reply-To: <9457e7c80811091646t39f76912n22e6925f01e44b85@mail.gmail.com> References: <9457e7c80811091646t39f76912n22e6925f01e44b85@mail.gmail.com> Message-ID: On Sun, Nov 9, 2008 at 5:46 PM, St?fan van der Walt wrote: > Hi everyone, > > I finally merged the generalised ufunc patches that awaited comment as > the gen_ufuncs branch. Please review the changes on trunk, and let me > know if problems occur. > > One of the tests fail on FreeBSD 64: > > ERROR: Failure: ImportError > > (/tmp/numpy-buildbot/b12/numpy-install/lib/python2.4/site-packages/numpy/core/umath_tests.so: > Undefined symbol "cblas_dgemm") > I see this error also when building without Atlas, it goes away if Atlas is used. It is probably a name/configuration problem. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjhnson at gmail.com Mon Nov 10 19:15:14 2008 From: tjhnson at gmail.com (T J) Date: Mon, 10 Nov 2008 16:15:14 -0800 Subject: [Numpy-discussion] New ufuncs In-Reply-To: References: <491896F2.5050403@enthought.com> Message-ID: On Mon, Nov 10, 2008 at 4:05 PM, Charles R Harris wrote: > > I added log2 and exp2. I still need to do the complex versions. I think > logaddexp2 should go in also to compliment these. Same here, especially since logaddexp is present. Or was the idea that both logexpadd and logexpadd2 should be moved to scipy.special? > Note that MPL also defines > log2 and their version has slightly different properties, i.e., it returns > integer values for integer powers of two. > I'm just curious now. Can someone comment on the difference in the implementation just committed versus that in cephes? http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/special/cephes/exp2.c The difference won't matter to me as far as usage goes, but I was curious. From charlesr.harris at gmail.com Mon Nov 10 19:53:14 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 10 Nov 2008 17:53:14 -0700 Subject: [Numpy-discussion] New ufuncs In-Reply-To: References: <491896F2.5050403@enthought.com> Message-ID: On Mon, Nov 10, 2008 at 5:15 PM, T J wrote: > On Mon, Nov 10, 2008 at 4:05 PM, Charles R Harris > wrote: > > > > I added log2 and exp2. I still need to do the complex versions. I think > > logaddexp2 should go in also to compliment these. > > Same here, especially since logaddexp is present. Or was the idea > that both logexpadd and logexpadd2 should be moved to scipy.special? > > > Note that MPL also defines > > log2 and their version has slightly different properties, i.e., it > returns > > integer values for integer powers of two. > > > > I'm just curious now. Can someone comment on the difference in the > implementation just committed versus that in cephes? > > > http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/special/cephes/exp2.c > The difference won't matter to me as far as usage goes, but I was curious. > The version committed uses the distro exp2 if it is available, otherwise it uses exp(log(2)*x). The committed version is also defined for floats and long doubles, while the cephes version is double only. 
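As a quick sanity check of that fallback, the identity 2**x == e**(x*ln 2) is easy to confirm from the interpreter (an illustration only, not the committed test):

>>> import numpy as np
>>> x = np.linspace(-10, 10, 41)
>>> np.allclose(np.exp2(x), np.exp(np.log(2.0)*x))
True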
That said, the cephes version uses a rational approximation and ldexp, so is probably faster than exp(log(2)*x). The rational approximation is available for the other precisions (Nash?), so we could use that if it was desireable. I think we could also do better for log2 using frexp if needed. Probably the same with logaddexp2. But that is for later polishing. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From millman at berkeley.edu Tue Nov 11 03:12:57 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 11 Nov 2008 00:12:57 -0800 Subject: [Numpy-discussion] License review, cont. Message-ID: Now that I have removed all GPL/LGPL code from scipy, I wanted to double check on the licenses of some NumPy code. In particular, 1. FreeBSD license: http://projects.scipy.org/scipy/numpy/browser/trunk/numpy/core/include/numpy/fenv/fenv.c http://projects.scipy.org/scipy/numpy/browser/trunk/numpy/core/include/numpy/fenv/fenv.h 2. Python license: SafeEval class in http://projects.scipy.org/scipy/numpy/browser/trunk/numpy/lib/utils.py Is there any need to look into getting the authors to re-license their code? The license are pretty liberal (and both look like they are compatible with the revised BSD license), but I thought I'd ask anyway. Should we note the additional licenses in: http://projects.scipy.org/scipy/numpy/browser/trunk/LICENSE.txt I was imagining something like: http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/weave/LICENSE.txt Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From robert.kern at gmail.com Tue Nov 11 03:18:03 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 11 Nov 2008 02:18:03 -0600 Subject: [Numpy-discussion] License review, cont. In-Reply-To: References: Message-ID: <3d375d730811110018k4b58196ciab249b76dc26c47b@mail.gmail.com> On Tue, Nov 11, 2008 at 02:12, Jarrod Millman wrote: > Now that I have removed all GPL/LGPL code from scipy, I wanted to > double check on the licenses of some NumPy code. In particular, > > 1. FreeBSD license: > http://projects.scipy.org/scipy/numpy/browser/trunk/numpy/core/include/numpy/fenv/fenv.c > http://projects.scipy.org/scipy/numpy/browser/trunk/numpy/core/include/numpy/fenv/fenv.h Fine. > 2. Python license: > SafeEval class in > http://projects.scipy.org/scipy/numpy/browser/trunk/numpy/lib/utils.py Fine (IMO, but I put that code in; get a second opinion). Users of numpy are already using Python and should be fine with those terms. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From rolf.wester at ilt.fraunhofer.de Tue Nov 11 03:24:01 2008 From: rolf.wester at ilt.fraunhofer.de (Rolf Wester) Date: Tue, 11 Nov 2008 09:24:01 +0100 Subject: [Numpy-discussion] numpy, swig and TNT-Arrays Message-ID: <49194121.6090704@ilt.fraunhofer.de> Hi all, I would like to wrap some C++ classes that use TNT-Arrays. Is it possible to pass numpy arrays to C++ functions that expect TNT-Arrays as function parameter? Does anybody know how the wrappers could be generated using swig? I would be very appreciative for any help. 
With kind regards Rolf From charlesr.harris at gmail.com Tue Nov 11 03:46:52 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 11 Nov 2008 01:46:52 -0700 Subject: [Numpy-discussion] numpy, swig and TNT-Arrays In-Reply-To: <49194121.6090704@ilt.fraunhofer.de> References: <49194121.6090704@ilt.fraunhofer.de> Message-ID: On Tue, Nov 11, 2008 at 1:24 AM, Rolf Wester wrote: > Hi all, > > I would like to wrap some C++ classes that use TNT-Arrays. Is it > possible to pass numpy arrays to C++ functions that expect TNT-Arrays as > function parameter? Does anybody know how the wrappers could be > generated using swig? I would be very appreciative for any help. > > With kind regards > IIRC, TNT does vectors and matrices, they have constructors, and they are contiguous. I think you can make wrappers, but it isn't going to be anything straight forward unless you can reuse the memory from a numpy array and I don't recall that that sort of constructor is available. Is TNT still active? It looked pretty dead last time I looked several years ago. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From rolf.wester at ilt.fraunhofer.de Tue Nov 11 04:19:26 2008 From: rolf.wester at ilt.fraunhofer.de (Rolf Wester) Date: Tue, 11 Nov 2008 10:19:26 +0100 Subject: [Numpy-discussion] numpy, swig and TNT-Arrays In-Reply-To: References: <49194121.6090704@ilt.fraunhofer.de> Message-ID: <49194E1E.9030701@ilt.fraunhofer.de> Charles R Harris wrote: > On Tue, Nov 11, 2008 at 1:24 AM, Rolf Wester > wrote: > >> Hi all, >> >> I would like to wrap some C++ classes that use TNT-Arrays. Is it >> possible to pass numpy arrays to C++ functions that expect TNT-Arrays as >> function parameter? Does anybody know how the wrappers could be >> generated using swig? I would be very appreciative for any help. >> >> With kind regards >> > > IIRC, TNT does vectors and matrices, they have constructors, and they are > contiguous. I think you can make wrappers, but it isn't going to be anything > straight forward unless you can reuse the memory from a numpy array and I > don't recall that that sort of constructor is available. > > Is TNT still active? It looked pretty dead last time I looked several years > ago. > > Chuck > > > > ------------------------------------------------------------------------ > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion TNT has constructors like: TNT::Array1D(int n, double * data) which do not allocate a new C-array but that use "data" as their data-array. I'm not sure whether TNT is still actively maintained, the TNT home page was last modified in 2004, so you are probably right. But the TNT Arrays are just what I need and I know of no alternative. Rolf -- ------------------------------------ # Dr. Rolf Wester # Fraunhofer Institut f. Lasertechnik # Steinbachstrasse 15, D-52074 Aachen, Germany. 
# Tel: + 49 (0) 241 8906 401, Fax: +49 (0) 241 8906 121 # EMail: rolf.wester at ilt.fraunhofer.de # WWW: http://www.ilt.fraunhofer.de From charlesr.harris at gmail.com Tue Nov 11 05:13:47 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 11 Nov 2008 03:13:47 -0700 Subject: [Numpy-discussion] numpy, swig and TNT-Arrays In-Reply-To: <49194E1E.9030701@ilt.fraunhofer.de> References: <49194121.6090704@ilt.fraunhofer.de> <49194E1E.9030701@ilt.fraunhofer.de> Message-ID: On Tue, Nov 11, 2008 at 2:19 AM, Rolf Wester wrote: > Charles R Harris wrote: > > On Tue, Nov 11, 2008 at 1:24 AM, Rolf Wester > > wrote: > > > >> Hi all, > >> > >> I would like to wrap some C++ classes that use TNT-Arrays. Is it > >> possible to pass numpy arrays to C++ functions that expect TNT-Arrays as > >> function parameter? Does anybody know how the wrappers could be > >> generated using swig? I would be very appreciative for any help. > >> > >> With kind regards > >> > > > > IIRC, TNT does vectors and matrices, they have constructors, and they are > > contiguous. I think you can make wrappers, but it isn't going to be > anything > > straight forward unless you can reuse the memory from a numpy array and I > > don't recall that that sort of constructor is available. > > > > Is TNT still active? It looked pretty dead last time I looked several > years > > ago. > > > > Chuck > > > > > > > > ------------------------------------------------------------------------ > > > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at scipy.org > > http://projects.scipy.org/mailman/listinfo/numpy-discussion > > TNT has constructors like: > > TNT::Array1D(int n, double * data) > > which do not allocate a new C-array but that use "data" as their > data-array. > I don't think there is any easy way to do what you want without writing some code somewhere along the line. You can expose the C++ functions and TNT to python, but to use numpy arrays you will need some way to get the data back and forth between TNT arrays and numpy arrays. I suspect you will end up just copying data into TNT arrays, calling your function, and then copying data back out of the result. Cython might be an alternative to swig for that. It would help to have a better idea of what you want to do. Do you just want to wrap an existing bunch of functions that use TNT? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.brucher at gmail.com Tue Nov 11 05:15:05 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 11 Nov 2008 11:15:05 +0100 Subject: [Numpy-discussion] numpy, swig and TNT-Arrays In-Reply-To: <49194E1E.9030701@ilt.fraunhofer.de> References: <49194121.6090704@ilt.fraunhofer.de> <49194E1E.9030701@ilt.fraunhofer.de> Message-ID: Hi, Yes, you can, but it can be tricky. What you may need to do is to check if TNT is capable of accepting an array by pointer without handling the memory (delete when the array is destroyed). If there are tools to do this, then it will be easy. If not, you will have to add a specific handler either in SWIG (copy the array in the in typemap and then copy it back again when the function is finished) or in TNT. If you need to do the latter, you can have some tips on my blog (http://matt.eifelle.com/2007/11/06/wrapping-a-c-container-in-python/). 
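Concretely, the copy-in/copy-out idea looks roughly like this Python-level sketch, where _ext_smooth is a hypothetical stand-in for a wrapped C++ routine that fills a TNT array in place:

import numpy as np

def _ext_smooth(buf):
    buf *= 0.5  # stand-in for work done on the C++/TNT side

def smooth(a):
    buf = np.ascontiguousarray(a, dtype=np.float64)  # copy in (a no-op if a is already a contiguous float64 array)
    _ext_smooth(buf)  # the extension sees a contiguous double buffer
    a[...] = buf      # copy the result back out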
Matthieu 2008/11/11 Rolf Wester : > Charles R Harris wrote: >> On Tue, Nov 11, 2008 at 1:24 AM, Rolf Wester >> wrote: >> >>> Hi all, >>> >>> I would like to wrap some C++ classes that use TNT-Arrays. Is it >>> possible to pass numpy arrays to C++ functions that expect TNT-Arrays as >>> function parameter? Does anybody know how the wrappers could be >>> generated using swig? I would be very appreciative for any help. >>> >>> With kind regards >>> >> >> IIRC, TNT does vectors and matrices, they have constructors, and they are >> contiguous. I think you can make wrappers, but it isn't going to be anything >> straight forward unless you can reuse the memory from a numpy array and I >> don't recall that that sort of constructor is available. >> >> Is TNT still active? It looked pretty dead last time I looked several years >> ago. >> >> Chuck >> >> >> >> ------------------------------------------------------------------------ >> >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at scipy.org >> http://projects.scipy.org/mailman/listinfo/numpy-discussion > > TNT has constructors like: > > TNT::Array1D(int n, double * data) > > which do not allocate a new C-array but that use "data" as their data-array. > > I'm not sure whether TNT is still actively maintained, the TNT home page > was last modified in 2004, so you are probably right. But the TNT Arrays > are just what I need and I know of no alternative. > > Rolf > -- > ------------------------------------ > # Dr. Rolf Wester > # Fraunhofer Institut f. Lasertechnik > # Steinbachstrasse 15, D-52074 Aachen, Germany. > # Tel: + 49 (0) 241 8906 401, Fax: +49 (0) 241 8906 121 > # EMail: rolf.wester at ilt.fraunhofer.de > # WWW: http://www.ilt.fraunhofer.de > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > -- Information System Engineer, Ph.D. Website: http://matthieu-brucher.developpez.com/ Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn: http://www.linkedin.com/in/matthieubrucher From rolf.wester at ilt.fraunhofer.de Tue Nov 11 05:28:21 2008 From: rolf.wester at ilt.fraunhofer.de (Rolf Wester) Date: Tue, 11 Nov 2008 11:28:21 +0100 Subject: [Numpy-discussion] numpy, swig and TNT-Arrays In-Reply-To: References: <49194121.6090704@ilt.fraunhofer.de> <49194E1E.9030701@ilt.fraunhofer.de> Message-ID: <49195E45.4090204@ilt.fraunhofer.de> Charles R Harris wrote: > On Tue, Nov 11, 2008 at 2:19 AM, Rolf Wester > wrote: > >> Charles R Harris wrote: >>> On Tue, Nov 11, 2008 at 1:24 AM, Rolf Wester >>> wrote: >>> >>>> Hi all, >>>> >>>> I would like to wrap some C++ classes that use TNT-Arrays. Is it >>>> possible to pass numpy arrays to C++ functions that expect TNT-Arrays as >>>> function parameter? Does anybody know how the wrappers could be >>>> generated using swig? I would be very appreciative for any help. >>>> >>>> With kind regards >>>> >>> IIRC, TNT does vectors and matrices, they have constructors, and they are >>> contiguous. I think you can make wrappers, but it isn't going to be >> anything >>> straight forward unless you can reuse the memory from a numpy array and I >>> don't recall that that sort of constructor is available. >>> >>> Is TNT still active? It looked pretty dead last time I looked several >> years >>> ago. 
>>> >>> Chuck >>> >>> >>> >>> ------------------------------------------------------------------------ >>> >>> _______________________________________________ >>> Numpy-discussion mailing list >>> Numpy-discussion at scipy.org >>> http://projects.scipy.org/mailman/listinfo/numpy-discussion >> TNT has constructors like: >> >> TNT::Array1D(int n, double * data) >> >> which do not allocate a new C-array but that use "data" as their >> data-array. >> > > I don't think there is any easy way to do what you want without writing some > code somewhere along the line. You can expose the C++ functions and TNT to > python, but to use numpy arrays you will need some way to get the data back > and forth between TNT arrays and numpy arrays. I suspect you will end up > just copying data into TNT arrays, calling your function, and then copying > data back out of the result. Cython might be an alternative to swig for > that. > > It would help to have a better idea of what you want to do. Do you just want > to wrap an existing bunch of functions that use TNT? > > Chuck > > > > ------------------------------------------------------------------------ > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion It's my own code so I have the choice how to do it. Until now I used the typemaps defined in numpy.i, so I had either to use the 1-dimensional arrays even in case of multidimensional data or to copy the data. I wondered wether there is a more elegant way of using numpy arrays on the python side and TNT::Arrays on the C++ side without having to explicitely write extra code. Rolf -- ------------------------------------ # Dr. Rolf Wester # Fraunhofer Institut f. Lasertechnik # Steinbachstrasse 15, D-52074 Aachen, Germany. # Tel: + 49 (0) 241 8906 401, Fax: +49 (0) 241 8906 121 # EMail: rolf.wester at ilt.fraunhofer.de # WWW: http://www.ilt.fraunhofer.de From hoytak at cs.ubc.ca Tue Nov 11 12:12:19 2008 From: hoytak at cs.ubc.ca (Hoyt Koepke) Date: Tue, 11 Nov 2008 09:12:19 -0800 Subject: [Numpy-discussion] numpy, swig and TNT-Arrays In-Reply-To: <49195E45.4090204@ilt.fraunhofer.de> References: <49194121.6090704@ilt.fraunhofer.de> <49194E1E.9030701@ilt.fraunhofer.de> <49195E45.4090204@ilt.fraunhofer.de> Message-ID: <4db580fd0811110912i2832d22cy9136d688ad0fcdf2@mail.gmail.com> Hi Rolf, Just curious -- have you considered using the blitz++ library (http://www.oonumerics.org/blitz/)? There seems to be a lot of overlap in terms of functionality. If you use blitz++, it's largely included in scipy as part of weave. Additionally, I already have code that generates wrappers to functions taking such arrays using weave. If blitz++ would work, I'll send them to you. -- Hoyt On Tue, Nov 11, 2008 at 2:28 AM, Rolf Wester wrote: > Charles R Harris wrote: >> On Tue, Nov 11, 2008 at 2:19 AM, Rolf Wester >> wrote: >> >>> Charles R Harris wrote: >>>> On Tue, Nov 11, 2008 at 1:24 AM, Rolf Wester >>>> wrote: >>>> >>>>> Hi all, >>>>> >>>>> I would like to wrap some C++ classes that use TNT-Arrays. Is it >>>>> possible to pass numpy arrays to C++ functions that expect TNT-Arrays as >>>>> function parameter? Does anybody know how the wrappers could be >>>>> generated using swig? I would be very appreciative for any help. >>>>> >>>>> With kind regards >>>>> >>>> IIRC, TNT does vectors and matrices, they have constructors, and they are >>>> contiguous. 
I think you can make wrappers, but it isn't going to be >>> anything >>>> straight forward unless you can reuse the memory from a numpy array and I >>>> don't recall that that sort of constructor is available. >>>> >>>> Is TNT still active? It looked pretty dead last time I looked several >>> years >>>> ago. >>>> >>>> Chuck >>>> >>>> >>>> >>>> ------------------------------------------------------------------------ >>>> >>>> _______________________________________________ >>>> Numpy-discussion mailing list >>>> Numpy-discussion at scipy.org >>>> http://projects.scipy.org/mailman/listinfo/numpy-discussion >>> TNT has constructors like: >>> >>> TNT::Array1D(int n, double * data) >>> >>> which do not allocate a new C-array but that use "data" as their >>> data-array. >>> >> >> I don't think there is any easy way to do what you want without writing some >> code somewhere along the line. You can expose the C++ functions and TNT to >> python, but to use numpy arrays you will need some way to get the data back >> and forth between TNT arrays and numpy arrays. I suspect you will end up >> just copying data into TNT arrays, calling your function, and then copying >> data back out of the result. Cython might be an alternative to swig for >> that. >> >> It would help to have a better idea of what you want to do. Do you just want >> to wrap an existing bunch of functions that use TNT? >> >> Chuck >> >> >> >> ------------------------------------------------------------------------ >> >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at scipy.org >> http://projects.scipy.org/mailman/listinfo/numpy-discussion > > It's my own code so I have the choice how to do it. Until now I used the > typemaps defined in numpy.i, so I had either to use the 1-dimensional > arrays even in case of multidimensional data or to copy the data. I > wondered wether there is a more elegant way of using numpy arrays on the > python side and TNT::Arrays on the C++ side without having to > explicitely write extra code. > > > Rolf > > -- > ------------------------------------ > # Dr. Rolf Wester > # Fraunhofer Institut f. Lasertechnik > # Steinbachstrasse 15, D-52074 Aachen, Germany. > # Tel: + 49 (0) 241 8906 401, Fax: +49 (0) 241 8906 121 > # EMail: rolf.wester at ilt.fraunhofer.de > # WWW: http://www.ilt.fraunhofer.de > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > -- +++++++++++++++++++++++++++++++++++ Hoyt Koepke UBC Department of Computer Science http://www.cs.ubc.ca/~hoytak/ hoytak at gmail.com +++++++++++++++++++++++++++++++++++ From philbinj at gmail.com Tue Nov 11 12:33:26 2008 From: philbinj at gmail.com (James Philbin) Date: Tue, 11 Nov 2008 17:33:26 +0000 Subject: [Numpy-discussion] Numpy.test() hangs Message-ID: <2b1c8c4f0811110933y1b4f3056i4ed14e6b583777d6@mail.gmail.com> Python 2.5.2 (r252:60911, Oct 5 2008, 19:29:17) [GCC 4.3.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> numpy.__version__ '1.3.0.dev6005' >>> numpy.test(verbosity=2) ... test_umath.TestLogAddExp.test_logaddexp_values ... The test hangs at the last line and never progresses further. 
James From charlesr.harris at gmail.com Tue Nov 11 13:00:42 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 11 Nov 2008 11:00:42 -0700 Subject: [Numpy-discussion] Numpy.test() hangs In-Reply-To: <2b1c8c4f0811110933y1b4f3056i4ed14e6b583777d6@mail.gmail.com> References: <2b1c8c4f0811110933y1b4f3056i4ed14e6b583777d6@mail.gmail.com> Message-ID: On Tue, Nov 11, 2008 at 10:33 AM, James Philbin wrote: > Python 2.5.2 (r252:60911, Oct 5 2008, 19:29:17) > [GCC 4.3.2] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> import numpy > >>> numpy.__version__ > '1.3.0.dev6005' > >>> numpy.test(verbosity=2) > ... > test_umath.TestLogAddExp.test_logaddexp_values ... > > The test hangs at the last line and never progresses further. > Ah good, some of the buildbots are doing this also and I'm trying to track it down. I don't see it on my machine. Could you post your compiler version? I suspect a library problem. Meanwhile I'll comment out the offending tests... and add some that might hang in more informative places. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Tue Nov 11 13:13:21 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 11 Nov 2008 11:13:21 -0700 Subject: [Numpy-discussion] Numpy.test() hangs In-Reply-To: References: <2b1c8c4f0811110933y1b4f3056i4ed14e6b583777d6@mail.gmail.com> Message-ID: On Tue, Nov 11, 2008 at 11:00 AM, Charles R Harris < charlesr.harris at gmail.com> wrote: > > > On Tue, Nov 11, 2008 at 10:33 AM, James Philbin wrote: > >> Python 2.5.2 (r252:60911, Oct 5 2008, 19:29:17) >> [GCC 4.3.2] on linux2 >> Type "help", "copyright", "credits" or "license" for more information. >> >>> import numpy >> >>> numpy.__version__ >> '1.3.0.dev6005' >> >>> numpy.test(verbosity=2) >> ... >> test_umath.TestLogAddExp.test_logaddexp_values ... >> >> The test hangs at the last line and never progresses further. >> > > Ah good, some of the buildbots are doing this also and I'm trying to track > it down. I don't see it on my machine. Could you post your compiler version? > I suspect a library problem. Meanwhile I'll comment out the offending > tests... and add some that might hang in more informative places. > Oops, I see your compiler/python version is already there. Mine are Python 2.5.1 (r251:54863, Jun 15 2008, 18:24:51) [GCC 4.3.0 20080428 (Red Hat 4.3.0-8)] on linux2 Type "help", "copyright", "credits" or "license" for more information. Can you try checking the functions log1p and exp separately for all three floating types? Something like >>> import numpy as np >>> np.log1p(np.ones(1, dtype='f')*3) array([ 1.38629436], dtype=float32) >>> np.log1p(np.ones(1, dtype='d')*3) array([ 1.38629436]) >>> np.log1p(np.ones(1, dtype='g')*3) array([1.3862944], dtype=float96) And the same with the exponential function. Chuck > > Chuck > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From charlesr.harris at gmail.com Tue Nov 11 13:32:13 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 11 Nov 2008 11:32:13 -0700 Subject: [Numpy-discussion] Numpy.test() hangs In-Reply-To: References: <2b1c8c4f0811110933y1b4f3056i4ed14e6b583777d6@mail.gmail.com> Message-ID: On Tue, Nov 11, 2008 at 11:13 AM, Charles R Harris < charlesr.harris at gmail.com> wrote: > > > On Tue, Nov 11, 2008 at 11:00 AM, Charles R Harris < > charlesr.harris at gmail.com> wrote: > >> >> >> On Tue, Nov 11, 2008 at 10:33 AM, James Philbin wrote: >> >>> Python 2.5.2 (r252:60911, Oct 5 2008, 19:29:17) >>> [GCC 4.3.2] on linux2 >>> Type "help", "copyright", "credits" or "license" for more information. >>> >>> import numpy >>> >>> numpy.__version__ >>> '1.3.0.dev6005' >>> >>> numpy.test(verbosity=2) >>> ... >>> test_umath.TestLogAddExp.test_logaddexp_values ... >>> >>> The test hangs at the last line and never progresses further. >>> >> >> Ah good, some of the buildbots are doing this also and I'm trying to track >> it down. I don't see it on my machine. Could you post your compiler version? >> I suspect a library problem. Meanwhile I'll comment out the offending >> tests... and add some that might hang in more informative places. >> > > Oops, I see your compiler/python version is already there. Mine are > > Python 2.5.1 (r251:54863, Jun 15 2008, 18:24:51) > [GCC 4.3.0 20080428 (Red Hat 4.3.0-8)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > > Can you try checking the functions log1p and exp separately for all three > floating types? Something like > > >>> import numpy as np > >>> np.log1p(np.ones(1, dtype='f')*3) > array([ 1.38629436], dtype=float32) > >>> np.log1p(np.ones(1, dtype='d')*3) > array([ 1.38629436]) > >>> np.log1p(np.ones(1, dtype='g')*3) > array([1.3862944], dtype=float96) > > > And the same with the exponential function. > It looks like log1pf is borked on some machines. Do you have any CFLAGS in your environment? Meanwhile, the bsd buildbot has suddenly lost the ability to find csqrt in linalg... It's Halloween all over again. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From philbinj at gmail.com Tue Nov 11 14:41:31 2008 From: philbinj at gmail.com (James Philbin) Date: Tue, 11 Nov 2008 19:41:31 +0000 Subject: [Numpy-discussion] Numpy.test() hangs In-Reply-To: References: <2b1c8c4f0811110933y1b4f3056i4ed14e6b583777d6@mail.gmail.com> Message-ID: <2b1c8c4f0811111141o6a0a7055yba613b90fb9c4192@mail.gmail.com> > Can you try checking the functions log1p and exp separately for all three > floating types? Something like Well, log1p seems to be the culprit: >>> import numpy as np >>> np.log1p(np.ones(1,dtype='f')*3) ... hangs here ... 
exp is fine: >>> import numpy as np >>> np.exp(np.ones(1,dtype='f')*3) array([ 20.08553696], dtype=float32) If it helps: $ uname -a Linux lewis 2.6.27-7-generic #1 SMP Tue Nov 4 19:33:06 UTC 2008 x86_64 GNU/Linux James From charlesr.harris at gmail.com Tue Nov 11 15:00:04 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 11 Nov 2008 13:00:04 -0700 Subject: [Numpy-discussion] Numpy.test() hangs In-Reply-To: <2b1c8c4f0811111141o6a0a7055yba613b90fb9c4192@mail.gmail.com> References: <2b1c8c4f0811110933y1b4f3056i4ed14e6b583777d6@mail.gmail.com> <2b1c8c4f0811111141o6a0a7055yba613b90fb9c4192@mail.gmail.com> Message-ID: On Tue, Nov 11, 2008 at 12:41 PM, James Philbin wrote: > > Can you try checking the functions log1p and exp separately for all three > > floating types? Something like > > Well, log1p seems to be the culprit: > > >>> import numpy as np > >>> np.log1p(np.ones(1,dtype='f')*3) > ... hangs here ... > > exp is fine: > > >>> import numpy as np > >>> np.exp(np.ones(1,dtype='f')*3) > array([ 20.08553696], dtype=float32) > > If it helps: > $ uname -a > Linux lewis 2.6.27-7-generic #1 SMP Tue Nov 4 19:33:06 UTC 2008 x86_64 > GNU/Linux > My guess is that this is a libm/gcc problem on x86_64, perhaps depending on the flags libm was compiled with. What distro are you using? I'm not sure how to handle this without some ugly voodoo in the numpy build process. We currently check if the library functions are present, but we have no way to mark them as present but buggy. Can you try plain old log/log10 also? I'll try to put together some c code you can use to check things also so that you can file a bug report with the distro if the problem persists. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From philbinj at gmail.com Tue Nov 11 15:16:52 2008 From: philbinj at gmail.com (James Philbin) Date: Tue, 11 Nov 2008 20:16:52 +0000 Subject: [Numpy-discussion] Numpy.test() hangs In-Reply-To: <2b1c8c4f0811111208h65b39dcexea9fb37a78fa695e@mail.gmail.com> References: <2b1c8c4f0811110933y1b4f3056i4ed14e6b583777d6@mail.gmail.com> <2b1c8c4f0811111141o6a0a7055yba613b90fb9c4192@mail.gmail.com> <2b1c8c4f0811111208h65b39dcexea9fb37a78fa695e@mail.gmail.com> Message-ID: <2b1c8c4f0811111216re69a2bdg10a3ca81f06601f6@mail.gmail.com> Hmmm... So I examined an objdump of umath.so: objdump -d /usr/lib/python2.5/site-packages/numpy/core/umath.so > umath.asm The relevant lines are here: --- 00000000000292c0 : 292c0: e9 fb ff ff ff jmpq 292c0 292c5: 66 66 2e 0f 1f 84 00 nopw %cs:0x0(%rax,%rax,1) 292cc: 00 00 00 00 --- Not sure if i'm reading this correctly, but the first line seems to be an unconditional jump to itself, hence an infinite loop? James From philbinj at gmail.com Tue Nov 11 15:08:45 2008 From: philbinj at gmail.com (James Philbin) Date: Tue, 11 Nov 2008 20:08:45 +0000 Subject: [Numpy-discussion] Numpy.test() hangs In-Reply-To: References: <2b1c8c4f0811110933y1b4f3056i4ed14e6b583777d6@mail.gmail.com> <2b1c8c4f0811111141o6a0a7055yba613b90fb9c4192@mail.gmail.com> Message-ID: <2b1c8c4f0811111208h65b39dcexea9fb37a78fa695e@mail.gmail.com> > My guess is that this is a libm/gcc problem on x86_64, perhaps depending on > the flags libm was compiled with. What distro are you using? Ubuntu 8.10 amd64 > Can you try plain old log/log10 also? I'll try to put together some c code > you can use to check things also so that you can file a bug report with the > distro if the problem persists. 
log/log10 are fine: In [4]: numpy.log(numpy.array([3],dtype='f')) Out[4]: array([ 1.09861231], dtype=float32) In [5]: numpy.log10(numpy.array([3],dtype='f')) Out[5]: array([ 0.47712123], dtype=float32) I tried the following C code which also runs fine: --- #include <stdio.h> #include <math.h> int main() { double r = log1p((double)1.0); printf("%f\n", r); return 0; } --- Compiled with gcc -g -O2 log1ptest.c -o log1ptest. Maybe it's only triggered on some cryptic combination of cflags? It's not a function I'm likely to ever use, but I'm curious to know what's wrong. James From simon.palmer at gmail.com Tue Nov 11 15:41:37 2008 From: simon.palmer at gmail.com (Simon Palmer) Date: Tue, 11 Nov 2008 20:41:37 +0000 Subject: [Numpy-discussion] numpy array serialization with JSON In-Reply-To: <49187658.9050609@noaa.gov> References: <3d375d730811100936r2508584oa9848e377f5e5125@mail.gmail.com> <49187658.9050609@noaa.gov> Message-ID: "Does JSON have a representation for n-d arrays? In my little work with it, it looked pretty lame for arrays of number, so I'd be surprised." yes it does, thet are just treated as nested lists and the square bracket notation is used. JSON is far from perfect but for objects of basic types it is about as efficient as you can get if you need a generalised human readable form; it certainly beats verbose XML. simplejson is actually a pretty good package, it's just a shame there's nothing for numpy ndarrays. If I end up writing one I'll contribute it back either here or probably better to simplejson. On Mon, Nov 10, 2008 at 5:58 PM, Christopher Barker wrote: > Simon Palmer wrote: > > What, if any, header information from numarray gets put in the bytes by > > tostring(), especially as I have n dimensions? > > none, you'd have to do that separately. > > > I am very likely to be deserializing into a Java Array object (or maybe > > a double[]) and it is not clear to me how I would do that from the bytes > > in the tostring() representation. > > If you're thinking JSON, then I think you'd want text, not binary. Maybe > you can make use of the repr()? > > It wouldn't be hard to just write your own text renderer from scratch, > unless you need C speed. Python being python, I'd tend to see what your > Java (or Javascript) code can easily consume, then write the python code > to generate that. > > > > -Chris > > -- > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Tue Nov 11 16:25:22 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 11 Nov 2008 14:25:22 -0700 Subject: [Numpy-discussion] Numpy.test() hangs In-Reply-To: <2b1c8c4f0811111216re69a2bdg10a3ca81f06601f6@mail.gmail.com> References: <2b1c8c4f0811110933y1b4f3056i4ed14e6b583777d6@mail.gmail.com> <2b1c8c4f0811111141o6a0a7055yba613b90fb9c4192@mail.gmail.com> <2b1c8c4f0811111208h65b39dcexea9fb37a78fa695e@mail.gmail.com> <2b1c8c4f0811111216re69a2bdg10a3ca81f06601f6@mail.gmail.com> Message-ID: On Tue, Nov 11, 2008 at 1:16 PM, James Philbin wrote: > Hmmm...
So I examined an objdump of umath.so: > objdump -d /usr/lib/python2.5/site-packages/numpy/core/umath.so > umath.asm > > The relevant lines are here: > --- > 00000000000292c0 : > 292c0: e9 fb ff ff ff jmpq 292c0 > 292c5: 66 66 2e 0f 1f 84 00 nopw %cs:0x0(%rax,%rax,1) > 292cc: 00 00 00 00 > --- > > Not sure if i'm reading this correctly, but the first line seems to be > an unconditional jump to itself, hence an infinite loop? > Hmm... I'm fishing now, but I think the current configuration doesn't find log1pf in the library even though it is there. Before updating from svn, could you try going into numpy/core/src/math_c99.inc.src line 219 and put "static" in the function definition? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Tue Nov 11 17:00:21 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 11 Nov 2008 15:00:21 -0700 Subject: [Numpy-discussion] Numpy.test() hangs In-Reply-To: References: <2b1c8c4f0811110933y1b4f3056i4ed14e6b583777d6@mail.gmail.com> <2b1c8c4f0811111141o6a0a7055yba613b90fb9c4192@mail.gmail.com> <2b1c8c4f0811111208h65b39dcexea9fb37a78fa695e@mail.gmail.com> <2b1c8c4f0811111216re69a2bdg10a3ca81f06601f6@mail.gmail.com> Message-ID: On Tue, Nov 11, 2008 at 2:25 PM, Charles R Harris wrote: > > > On Tue, Nov 11, 2008 at 1:16 PM, James Philbin wrote: > >> Hmmm... So I examined an objdump of umath.so: >> objdump -d /usr/lib/python2.5/site-packages/numpy/core/umath.so > >> umath.asm >> >> The relevant lines are here: >> --- >> 00000000000292c0 : >> 292c0: e9 fb ff ff ff jmpq 292c0 >> 292c5: 66 66 2e 0f 1f 84 00 nopw %cs:0x0(%rax,%rax,1) >> 292cc: 00 00 00 00 >> --- >> >> Not sure if i'm reading this correctly, but the first line seems to be >> an unconditional jump to itself, hence an infinite loop? >> > > Hmm... I'm fishing now, but I think the current configuration doesn't find > log1pf in the library even though it is there. Before updating from svn, > could you try going into numpy/core/src/math_c99.inc.src line 219 and put > "static" in the function definition? > I think this is now fixed in svn, I'm trying to see if static fixes the problem with the old buggy version. What optimization level is numpy being compiled with? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From philbinj at gmail.com Tue Nov 11 18:13:52 2008 From: philbinj at gmail.com (James Philbin) Date: Tue, 11 Nov 2008 23:13:52 +0000 Subject: [Numpy-discussion] Numpy.test() hangs In-Reply-To: References: <2b1c8c4f0811110933y1b4f3056i4ed14e6b583777d6@mail.gmail.com> <2b1c8c4f0811111141o6a0a7055yba613b90fb9c4192@mail.gmail.com> <2b1c8c4f0811111208h65b39dcexea9fb37a78fa695e@mail.gmail.com> <2b1c8c4f0811111216re69a2bdg10a3ca81f06601f6@mail.gmail.com> Message-ID: <2b1c8c4f0811111513r562ab4bbndd5360552cf55ca7@mail.gmail.com> > I think this is now fixed in svn, I'm trying to see if static fixes the > problem with the old buggy version. What optimization level is numpy being > compiled with? Still a problem here: In [1]: import numpy as np In [2]: np.__version__ Out[2]: '1.3.0.dev6011' In [3]: np.log1p(np.array([1],dtype='f')) ... hangs ... I'm not exactly sure how to get the exact numpy cflags (the gcc invocation is hidden from me), but setup.py outputs the following at various points: C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC I'm haven't added any cflags, just running 'python setup.py build'. 
James From mike.ressler at alum.mit.edu Tue Nov 11 18:19:42 2008 From: mike.ressler at alum.mit.edu (Mike Ressler) Date: Tue, 11 Nov 2008 15:19:42 -0800 Subject: [Numpy-discussion] Bugs in histogram and matplotlib-hist Message-ID: <268febdf0811111519l3a5a73an5ae0d0154ad0d0d@mail.gmail.com> I did an update to a Fedora 9 workstation yesterday that included updating numpy to 1.2.0 and matplotlib 0.98.3 (python version is 2.5.1). This seems to have played havoc with some of the histogram plotting we do. I was aware of the histogram changes in 1.2.0, but something doesn't seem to have worked out right. First issue is that the histogram function doesn't like giving "normed" any sort of a value. We get a "'NoneType' object is not iterable" error whenever we tried. I could work around this by editing function_base.py and given the normed keyword in the histogram function a default value of False, then changing the last few lines of histogram to read if not normed: return n, bins else : db = array(np.diff(bins), float) return n/(n*db).sum(), bins (i.e. using normed as a true boolean, rather than a testable variable). That seems to have fixed up that particular issue. I know this isn't quite the right place to report matplotlib errors, but the hist function expects the output of the histogram function, the left edges of the bins, and the frequency values, to be the same length. Histogram now returns all edges (e.g. 100), while the frequencies correspond to the number of bins (e.g. 99). Are these known issues? I didn't see anything in the numpy-1.2.1 release notes to indicate that this was addressed. I guess I should ask whether other people have even seen this? I can submit patches after I think a bit harder about proper fixes, but I'm not a programmer and anything I write should be considered highly suspect. Please advise. Thanks. Mike -- mike.ressler at alum.mit.edu From charlesr.harris at gmail.com Tue Nov 11 18:23:48 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 11 Nov 2008 16:23:48 -0700 Subject: [Numpy-discussion] Numpy.test() hangs In-Reply-To: <2b1c8c4f0811111513r562ab4bbndd5360552cf55ca7@mail.gmail.com> References: <2b1c8c4f0811110933y1b4f3056i4ed14e6b583777d6@mail.gmail.com> <2b1c8c4f0811111141o6a0a7055yba613b90fb9c4192@mail.gmail.com> <2b1c8c4f0811111208h65b39dcexea9fb37a78fa695e@mail.gmail.com> <2b1c8c4f0811111216re69a2bdg10a3ca81f06601f6@mail.gmail.com> <2b1c8c4f0811111513r562ab4bbndd5360552cf55ca7@mail.gmail.com> Message-ID: On Tue, Nov 11, 2008 at 4:13 PM, James Philbin wrote: > > I think this is now fixed in svn, I'm trying to see if static fixes the > > problem with the old buggy version. What optimization level is numpy > being > > compiled with? > Still a problem here: > > In [1]: import numpy as np > > In [2]: np.__version__ > Out[2]: '1.3.0.dev6011' > > In [3]: np.log1p(np.array([1],dtype='f')) > ... hangs ... > > > I'm not exactly sure how to get the exact numpy cflags (the gcc > invocation is hidden from me), but setup.py outputs the following at > various points: > C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 > -Wall -Wstrict-prototypes -fPIC > > I'm haven't added any cflags, just running 'python setup.py build'. > It's working on the buildbots. Did you remove the build directory first? Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From philbinj at gmail.com Tue Nov 11 18:44:07 2008 From: philbinj at gmail.com (James Philbin) Date: Tue, 11 Nov 2008 23:44:07 +0000 Subject: [Numpy-discussion] Numpy.test() hangs In-Reply-To: References: <2b1c8c4f0811110933y1b4f3056i4ed14e6b583777d6@mail.gmail.com> <2b1c8c4f0811111141o6a0a7055yba613b90fb9c4192@mail.gmail.com> <2b1c8c4f0811111208h65b39dcexea9fb37a78fa695e@mail.gmail.com> <2b1c8c4f0811111216re69a2bdg10a3ca81f06601f6@mail.gmail.com> <2b1c8c4f0811111513r562ab4bbndd5360552cf55ca7@mail.gmail.com> Message-ID: <2b1c8c4f0811111544s35d556f1m4f712accc20a148a@mail.gmail.com> > It's working on the buildbots. Did you remove the build directory first? Oops. Great, all working now! James From dave.seattledl at gmail.com Tue Nov 11 20:37:19 2008 From: dave.seattledl at gmail.com (Dave Lang) Date: Tue, 11 Nov 2008 17:37:19 -0800 Subject: [Numpy-discussion] f2py call-back and newbie questions Message-ID: <83cb843f0811111737ncd2e718wbd124dcb2a37ce58@mail.gmail.com> Hello, I apologize for this long listing, but I have a lot of pent-up f2py queries occupying my mind [?] I realize that f2py is a non-commercial service to the community, and am highly impressed that it exists at all. I can only offer in return my own assistance to others (say via this list), after I have crossed some of these bridges and can in turn assist others who venture in. I am new to this list, as well as Python and am trying to use the f2py call-back feature . I am embarking upon a rather large Fortran->Python (80,000 line code) conversion for a dynamics simulation system ( http://home.comcast.net/~GTOSS/), and will be willing to share all knowledge of how this goes. I am converting code by evolving from 100% F90 -> xxx % Python starting at the "results-data-base/post-processing/interpretive-manipulation" side of the existing code. So far, I can successfully create a wrap that allows me to call a Fortran program from Python, but can't successfully do the reverse call-back scheme. I think I have located just about every documentation available, including numerous versions of some, but I have questions that I can't seem to resolve through this documentation. At the end of this request is shown the actual code (it is very simple). My system...... Ubuntu 8.0.4; GNU C++/g95 Compiler (current release) Python 2.5.2; Numpy 1.0.4; f2py 2.4422 The Problem..... The symptom is: Depending upon how I invoke f2py to create the wrap, one of 2 failures occur: 1. At Python execution, Python insists that my arguments to the Fortran subroutine (CBSET) are incorrect (I use 2, it insists on 1), OR, 2. Python accepts my 2 arguments to the Fortran subroutine, and when I execute Python, I end up getting a segmentation fault when I attempt to make the call-back from Fortran. This would normally tell me that likely I have some kind of argument issue (although the call-back routine defn itself has NO arguments...although the "external" attribute routine name does appear as an argument in the subroutine at the INSISTENCE of the g95 compiler). General questions (maybe related to the problem, maybe not).... 1. I never get a "makefile" created by running f2py. Why? 2. I never get a module document (ie. "__xxx__") created by running f2py (although I do get the ".pyf" sig-file). Why? 3. Is the Python "call-back" program code supposed to reside in the Python program that originally makes the Python -> Fortran wrap-connection (I assumed it is)? 4. 
Does the "XXXX.so" file need to be linked into the Fortran code via a "makefile" operation to successfully "call-back"? 5. Where can I find some simple, but illustrative examples of using f2py to "wrap a Fortran 90 data Class-definition"? I need to understand how the f2py calls would look, and what is manifest with in the Python code to actually lay hands on the elements of the Fortran "derived data types". In general, it would be nice to find a detailed example of: a python program, a corresponding consistent Fortran program that combines to demonstrate the call-back scheme (and that one can type in, and will actually function correctly), and the preferred (ie. correct?) f2py statements that need to be executed to accomplish this (I would be glad to contribute same to the body of knowledge, if I can ever get it to work). I will be most grateful for comments on any or all of this. thanks Dave Lang The Fortran Code: ! PROGAM TO DEMONSTRATE GTOSS FORTRAN -> PYTHON CONNECTION SUBROUTINE CBSET( N, call_back ) !f2py intent(in) N INTEGER N !f2py intent(callback) call_back EXTERNAL call_back ! IDENTIFY THAT FORTRAN IS EXECUTING & IDENTIFY WHY CBSET IS CALLED PRINT *,'Enter CBSET.F, N=', N IF(N .EQ. 1) THEN PRINT *,' in CBSET: this Call originated to test call-back' PRINT *,' in CBSET: Make callback to PYTHON...EXPECT reply' CALL call_back() PRINT *,' in CBSET: Returned from call-back to PYTHON' END IF IF(N .EQ. 2) THEN PRINT *,' in CBSET: Call came from Python to test Py->For' END IF RETURN END *The Python Program:* import os import foo def main(): print 'Hello, pymain is about to call CBSET to test f2py operation' #satisfy arg list for cbset ????? dummy=call_back #test "Python --> Fortran" (nominal wrap) from Python n = 2 foo.cbset(n, dummy) #test "Fortran --> Python" (call-back) scheme as seen from Python n = 1 foo.cbset(n, dummy) # define call-back routine def call_back(): print '...Confirming...."call_back"is entered from CBSET call' # end of call-back defn if __name__ =='__main__': main() NOTE: depending upon how f2py is invoked to create the wrap, a disagreement can be experienced between Python and Fortran as to "how many args the F90 program 'CBSET' really exhibits"; in one case, Python aborts at execution, insisting that "CBSET has exactly 1 argument", The other type event is that Python is happy with the code as shown, but Fortran gets a runtime "Segmentation fault" when the "call_back" is executed. BTW, at compile, Fortran is NEVER happy UNLESS the EXTERNAL routine named "call_back" actually appears as an arg in the CBSET subroutine definition. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 361.gif Type: image/gif Size: 226 bytes Desc: not available URL: From Chris.Barker at noaa.gov Tue Nov 11 21:01:50 2008 From: Chris.Barker at noaa.gov (Chris.Barker) Date: Tue, 11 Nov 2008 18:01:50 -0800 Subject: [Numpy-discussion] numpy array serialization with JSON In-Reply-To: References: <3d375d730811100936r2508584oa9848e377f5e5125@mail.gmail.com> <49187658.9050609@noaa.gov> Message-ID: <491A390E.2090700@noaa.gov> Simon Palmer wrote: > "Does JSON have a representation for n-d arrays? In my little work with > it, it looked pretty lame for arrays of number, so I'd be surprised." > > yes it does, thet are just treated as nested lists and the square > bracket notation is used. then it looks like one of str(array) or repr(array) would work just fine with very little massaging. 
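In that vein, tolist() plus simplejson already gives a workable round trip through nested lists, with no string massaging at all; a minimal sketch (not from the original thread):

>>> import numpy as np
>>> import simplejson as json  # the stdlib json module in Python 2.6+
>>> a = np.arange(6.0).reshape(2, 3)
>>> s = json.dumps(a.tolist())
>>> s
'[[0.0, 1.0, 2.0], [3.0, 4.0, 5.0]]'
>>> b = np.array(json.loads(s))
>>> b.shape == a.shape and (b == a).all()
True

The nesting preserves the shape, but the exact dtype does not survive (everything comes back as the default float or int), so it would have to be stored alongside. The str/repr route remains the lighter option if you only need something human readable.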
A call to numpy.set_printoptions() would be a good idea first, though. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From david.froger at gmail.com Wed Nov 12 04:14:03 2008 From: david.froger at gmail.com (David Froger) Date: Wed, 12 Nov 2008 10:14:03 +0100 Subject: [Numpy-discussion] f2py call-back and newbie questions In-Reply-To: <83cb843f0811111737ncd2e718wbd124dcb2a37ce58@mail.gmail.com> References: <83cb843f0811111737ncd2e718wbd124dcb2a37ce58@mail.gmail.com> Message-ID: <9309543c0811120114l731fa93av6e3a806d605b1c2@mail.gmail.com> Hello, Here is a exemple of call-back use from Fortran to Python using f2py : http://cens.ioc.ee/projects/f2py2e/usersguide/f2py_usersguide.pdf But maybe you have already read it? http://cens.ioc.ee/projects/f2py2e/usersguide/f2py_usersguide.pdf 2008/11/12 Dave Lang > Hello, > > > I apologize for this long listing, but I have a lot of pent-up f2py queries > occupying my mind [?] > > > I realize that f2py is a non-commercial service to the community, and am > highly impressed that it exists at all. I can only offer in return my own > assistance to others (say via this list), after I have crossed some of these > bridges and can in turn assist others who venture in. > > > I am new to this list, as well as Python and am trying to use the > f2py call-back feature . I am embarking upon a rather large Fortran->Python > (80,000 line code) conversion for a dynamics simulation system ( > http://home.comcast.net/~GTOSS/ ), and > will be willing to share all knowledge of how this goes. I am converting > code by evolving from 100% F90 -> xxx % Python starting at the > "results-data-base/post-processing/interpretive-manipulation" side of the > existing code. > > > So far, I can successfully create a wrap that allows me to call a Fortran > program from Python, but can't successfully do the reverse call-back scheme. > I think I have located just about every documentation available, including > numerous versions of some, but I have questions that I can't seem to resolve > through this documentation. At the end of this request is shown the actual > code (it is very simple). > > > My system...... > > Ubuntu 8.0.4; GNU C++/g95 Compiler (current release) > > Python 2.5.2; Numpy 1.0.4; f2py 2.4422 > > > The Problem..... > > The symptom is: Depending upon how I invoke f2py to create the wrap, one > of 2 failures occur: > > > 1. At Python execution, Python insists that my arguments to the Fortran > subroutine (CBSET) are incorrect (I use 2, it insists on 1), OR, > > > 2. Python accepts my 2 arguments to the Fortran subroutine, and when I > execute Python, I end up getting a segmentation fault when I attempt to make > the call-back from Fortran. This would normally tell me that likely I have > some kind of argument issue (although the call-back routine defn itself has > NO arguments...although the "external" attribute routine name does appear as > an argument in the subroutine at the INSISTENCE of the g95 compiler). > > > General questions (maybe related to the problem, maybe not).... > > > 1. I never get a "makefile" created by running f2py. Why? > > > 2. I never get a module document (ie. "__xxx__") created by running f2py > (although I do get the ".pyf" sig-file). Why? > > > 3. 
Is the Python "call-back" program code supposed to reside in the > Python program that originally makes the Python -> Fortran wrap-connection > (I assumed it is)? > > > 4. Does the "XXXX.so" file need to be linked into the Fortran code via a > "makefile" operation to successfully "call-back"? > > 5. Where can I find some simple, but illustrative examples of using f2py to > "wrap a Fortran 90 data Class-definition"? I need to understand how the f2py > calls would look, and what is manifest with in the Python code to actually > lay hands on the elements of the Fortran "derived data types". > > > In general, it would be nice to find a detailed example of: a python > program, a corresponding consistent Fortran program that combines to > demonstrate the call-back scheme (and that one can type in, and will > actually function correctly), and the preferred (ie. correct?) f2py > statements that need to be executed to accomplish this (I would be glad to > contribute same to the body of knowledge, if I can ever get it to work). > > > I will be most grateful for comments on any or all of this. > > > thanks > > > Dave Lang > > > The Fortran Code: > > ! PROGAM TO DEMONSTRATE GTOSS FORTRAN -> PYTHON CONNECTION > > SUBROUTINE CBSET( N, call_back ) > > > !f2py intent(in) N > > INTEGER N > > > !f2py intent(callback) call_back > > EXTERNAL call_back > > > ! IDENTIFY THAT FORTRAN IS EXECUTING & IDENTIFY WHY CBSET IS CALLED > > PRINT *,'Enter CBSET.F, N=', N > > IF(N .EQ. 1) THEN > > PRINT *,' in CBSET: this Call originated to test call-back' > > PRINT *,' in CBSET: Make callback to PYTHON...EXPECT reply' > > CALL call_back() > > PRINT *,' in CBSET: Returned from call-back to PYTHON' > > END IF > > > > IF(N .EQ. 2) THEN > > PRINT *,' in CBSET: Call came from Python to test Py->For' > > END IF > > > RETURN > > END > > > > > *The Python Program:* > > import os > > import foo > > > def main(): > > print 'Hello, pymain is about to call CBSET to test f2py operation' > > > #satisfy arg list for cbset ????? > > dummy=call_back > > > #test "Python --> Fortran" (nominal wrap) from Python > > n = 2 > > foo.cbset(n, dummy) > > > #test "Fortran --> Python" (call-back) scheme as seen from Python > > n = 1 > > foo.cbset(n, dummy) > > > # define call-back routine > > def call_back(): > > print '...Confirming...."call_back"is entered from CBSET call' > > # end of call-back defn > > > if __name__ =='__main__': > > main() > > > NOTE: depending upon how f2py is invoked to create the wrap, a > disagreement can be experienced between Python and Fortran as to "how many > args the F90 program 'CBSET' really exhibits"; in one case, Python aborts at > execution, insisting that "CBSET has exactly 1 argument", The other type > event is that Python is happy with the code as shown, but Fortran gets a > runtime "Segmentation fault" when the "call_back" is executed. BTW, at > compile, Fortran is NEVER happy UNLESS the EXTERNAL routine named > "call_back" actually appears as an arg in the CBSET subroutine definition. > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: 361.gif Type: image/gif Size: 226 bytes Desc: not available URL: From sinclaird at ukzn.ac.za Wed Nov 12 07:30:47 2008 From: sinclaird at ukzn.ac.za (Scott Sinclair) Date: Wed, 12 Nov 2008 14:30:47 +0200 Subject: [Numpy-discussion] Bugs in histogram and matplotlib-hist Message-ID: <491AE8970200009F000387AA@dbnsmtp.ukzn.ac.za> > "Mike Ressler" 11/12/08 1:19 AM > I did an update to a Fedora 9 workstation yesterday that included > updating numpy to 1.2.0 and matplotlib 0.98.3 (python version is > 2.5.1). This seems to have played havoc with some of the histogram > plotting we do. I was aware of the histogram changes in 1.2.0, but > something doesn't seem to have worked out right. Hi Mike, Someone else just posted regarding this problem on the matplotlib list http://www.nabble.com/histogram-examples--to20443846.html They reported that the Fedora 9 matplotlib package is 0.91.4, which doesn't work with numpy-1.2.0. Perhaps the matplotlib on your system isn't what you expect? It looks as if there's a problem with the Fedora packaging, the current release of matplotlib is 0.98.3. Regards, Scott Please find our Email Disclaimer here: http://www.ukzn.ac.za/disclaimer/ From rmay31 at gmail.com Wed Nov 12 11:16:35 2008 From: rmay31 at gmail.com (Ryan May) Date: Wed, 12 Nov 2008 10:16:35 -0600 Subject: [Numpy-discussion] Matlib docstring typos Message-ID: <491B0163.5050809@gmail.com> Hi, Here's a quick diff to fix some typos in the docstrings for matlib.zeros and matlib.ones. They're causing 2 (of many) failures in the doctests for me on SVN HEAD. Filed in trac as #953 (http://www.scipy.org/scipy/numpy/ticket/953) (Unless someone wants to give me SVN rights for fixing/adding small things like this.) Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma From pav at iki.fi Wed Nov 12 11:33:22 2008 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 12 Nov 2008 16:33:22 +0000 (UTC) Subject: [Numpy-discussion] Matlib docstring typos References: <491B0163.5050809@gmail.com> Message-ID: Hi, Wed, 12 Nov 2008 10:16:35 -0600, Ryan May wrote: > Here's a quick diff to fix some typos in the docstrings for matlib.zeros > and matlib.ones. They're causing 2 (of many) failures in the doctests > for me on SVN HEAD. There are probably bound to be more of these. It's possible to fix them using this: http://docs.scipy.org/numpy/ http://docs.scipy.org/numpy/docs/numpy.matlib.zeros/ http://docs.scipy.org/numpy/docs/numpy.matlib.ones/ The changes will propagate from there eventually to SVN, alongside all other documentation improvements. -- Pauli Virtanen From rmay31 at gmail.com Wed Nov 12 11:57:59 2008 From: rmay31 at gmail.com (Ryan May) Date: Wed, 12 Nov 2008 10:57:59 -0600 Subject: [Numpy-discussion] Matlib docstring typos In-Reply-To: References: <491B0163.5050809@gmail.com> Message-ID: <491B0B17.8050003@gmail.com> Pauli Virtanen wrote: > Hi, > > Wed, 12 Nov 2008 10:16:35 -0600, Ryan May wrote: >> Here's a quick diff to fix some typos in the docstrings for matlib.zeros >> and matlib.ones. They're causing 2 (of many) failures in the doctests >> for me on SVN HEAD. > > There are probably bound to be more of these. It's possible to fix them > using this: > > http://docs.scipy.org/numpy/ > http://docs.scipy.org/numpy/docs/numpy.matlib.zeros/ > http://docs.scipy.org/numpy/docs/numpy.matlib.ones/ > > The changes will propagate from there eventually to SVN, alongside all > other documentation improvements. > Great, can someone get me edit access? 
User: rmay Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma From pav at iki.fi Wed Nov 12 12:16:54 2008 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 12 Nov 2008 17:16:54 +0000 (UTC) Subject: [Numpy-discussion] Matlib docstring typos References: <491B0163.5050809@gmail.com> <491B0B17.8050003@gmail.com> Message-ID: Wed, 12 Nov 2008 10:57:59 -0600, Ryan May wrote: [clip: numpy doc editor] > Great, can someone get me edit access? Done. -- Pauli Virtanen From doutriaux1 at llnl.gov Wed Nov 12 12:43:34 2008 From: doutriaux1 at llnl.gov (=?UTF-8?Q?Charles_=D8=B3=D9=85=D9=8A=D8=B1_Doutriaux?=) Date: Wed, 12 Nov 2008 09:43:34 -0800 Subject: [Numpy-discussion] setting element Message-ID: Hello, I'm wondering if there's aquick way to do the following: s[:,5]=value in a "general" function def setval(array,index,value,axis=0): ## code here The issue is to put enough ":" before the index value inside the square bracket of the assignement. Thanks, C. From lists_ravi at lavabit.com Wed Nov 12 12:15:35 2008 From: lists_ravi at lavabit.com (Ravi) Date: Wed, 12 Nov 2008 12:15:35 -0500 Subject: [Numpy-discussion] numpy, swig and TNT-Arrays In-Reply-To: <49194E1E.9030701@ilt.fraunhofer.de> References: <49194121.6090704@ilt.fraunhofer.de> <49194E1E.9030701@ilt.fraunhofer.de> Message-ID: <200811121215.35973.lists_ravi@lavabit.com> On Tuesday 11 November 2008 04:19:26 am Rolf Wester wrote: > I'm not sure whether TNT is still actively maintained, the TNT home page > was last modified in 2004, so you are probably right. But the TNT Arrays > are just what I need and I know of no alternative. Use boost.python + boost.ublas + numpy: http://mail.python.org/pipermail/cplusplus-sig/2008-October/013825.html Regards, Ravi From mike.ressler at alum.mit.edu Wed Nov 12 13:27:02 2008 From: mike.ressler at alum.mit.edu (Mike Ressler) Date: Wed, 12 Nov 2008 10:27:02 -0800 Subject: [Numpy-discussion] Bugs in histogram and matplotlib-hist In-Reply-To: <491AE8970200009F000387AA@dbnsmtp.ukzn.ac.za> References: <491AE8970200009F000387AA@dbnsmtp.ukzn.ac.za> Message-ID: <268febdf0811121027t27ed0806tb90c8eb8c349809c@mail.gmail.com> On Wed, Nov 12, 2008 at 4:30 AM, Scott Sinclair wrote: >> "Mike Ressler" 11/12/08 1:19 AM >> I did an update to a Fedora 9 workstation yesterday that included >> updating numpy to 1.2.0 and matplotlib 0.98.3 (python version is > > They reported that the Fedora 9 matplotlib package is 0.91.4, which doesn't work with numpy-1.2.0. Perhaps the matplotlib on your system isn't what you expect? Argh! The one thing I didn't doublecheck before posting - you are correct, Scott; the Fedora box has 0.91.4; the machines I personally use more regularly have 0.98.3 on Archlinux. I'll see what can be done with package updating before I start patching. Thanks for pointing this out. Mike -- mike.ressler at alum.mit.edu From ggellner at uoguelph.ca Wed Nov 12 13:33:52 2008 From: ggellner at uoguelph.ca (Gabriel Gellner) Date: Wed, 12 Nov 2008 13:33:52 -0500 Subject: [Numpy-discussion] setting element In-Reply-To: References: Message-ID: <20081112183352.GA31573@encolpuis> On Wed, Nov 12, 2008 at 09:43:34AM -0800, Charles ???? Doutriaux wrote: > Hello, > > I'm wondering if there's aquick way to do the following: > > s[:,5]=value > > in a "general" function > def setval(array,index,value,axis=0): > ## code here > > The issue is to put enough ":" before the index value inside the > square bracket of the assignement. > Make some slice objects! 
def setval(array, index, value, axis=0):
    key = [slice(None)]*len(array.shape)
    key[axis] = index
    array[key] = value

Gabriel

From rmay31 at gmail.com Wed Nov 12 13:34:51 2008 From: rmay31 at gmail.com (Ryan May) Date: Wed, 12 Nov 2008 12:34:51 -0600 Subject: [Numpy-discussion] setting element In-Reply-To: References: Message-ID: <491B21CB.5020208@gmail.com> Charles سمير Doutriaux wrote: > Hello, > > I'm wondering if there's aquick way to do the following: > > s[:,5]=value > > in a "general" function > def setval(array,index,value,axis=0): > ## code here

Assuming that axis specifies where the index goes, that would be:

def setval(array, index, value, axis=0):
    slices = [slice(None)] * len(array.shape)
    slices[axis] = index
    array[slices] = value

(Adapted from the code for numpy.diff) Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma

From ggellner at uoguelph.ca Wed Nov 12 13:36:16 2008 From: ggellner at uoguelph.ca (Gabriel Gellner) Date: Wed, 12 Nov 2008 13:36:16 -0500 Subject: [Numpy-discussion] setting element In-Reply-To: <491B21CB.5020208@gmail.com> References: <491B21CB.5020208@gmail.com> Message-ID: <20081112183616.GA31700@encolpuis> On Wed, Nov 12, 2008 at 12:34:51PM -0600, Ryan May wrote: > Charles سمير Doutriaux wrote: > > Hello, > > > > I'm wondering if there's aquick way to do the following: > > > > s[:,5]=value > > > > in a "general" function > > def setval(array,index,value,axis=0): > > ## code here > > Assuming that axis specifies where the index goes, that would be: > > def setval(array, index, value, axis=0): > slices = [slice(None)] * len(array.shape) > slices[axis] = index > array[slices] = value > > (Adapted from the code for numpy.diff) > > Ryan > Jinx!

From ggellner at uoguelph.ca Wed Nov 12 13:47:33 2008 From: ggellner at uoguelph.ca (Gabriel Gellner) Date: Wed, 12 Nov 2008 13:47:33 -0500 Subject: [Numpy-discussion] by axis iterator Message-ID: <20081112184733.GA31755@encolpuis> Something I use a lot is a little generator that iterates over an ndarray by a given axis. I was wondering if this is already built into numpy (and not using apply_along_axis, which I find ugly) and if not, would there be interest in adding it? The function is just:

def by_axis(ndobj, axis=0):
    index_set = [slice(None)]*len(ndobj.shape)
    for i in xrange(ndobj.shape[axis]):
        index_set[axis] = i
        yield ndobj[index_set]

and can be used like:

>>> [sum(x) for x in by_axis(a, 1)]
>>> for col in by_axis(a, 1):
...     print col

I use it when porting R code that uses a lot of apply-like logic. I know most numpy functions have the axis argument built in, but when writing my own functions I find this a real time saver. Anyway, if someone can show me a better way I would be overjoyed, or if people like this I can make a ticket on Trac. Gabriel

From doutriaux1 at llnl.gov Wed Nov 12 13:50:23 2008 From: doutriaux1 at llnl.gov (=?UTF-8?Q?Charles_=D8=B3=D9=85=D9=8A=D8=B1_Doutriaux?=) Date: Wed, 12 Nov 2008 10:50:23 -0800 Subject: [Numpy-discussion] setting element In-Reply-To: <20081112183616.GA31700@encolpuis> References: <491B21CB.5020208@gmail.com> <20081112183616.GA31700@encolpuis> Message-ID: <61E4D3DC-E517-4388-A015-8DB8AD89B69A@llnl.gov> Thx! On Nov 12, 2008, at 10:36 AM, Gabriel Gellner wrote: > On Wed, Nov 12, 2008 at 12:34:51PM -0600, Ryan May wrote: >> Charles سمير
Doutriaux wrote: >>> Hello, >>> >>> I'm wondering if there's aquick way to do the following: >>> >>> s[:,5]=value >>> >>> in a "general" function >>> def setval(array,index,value,axis=0): >>> ## code here >> >> Assuming that axis specifies where the index goes, that would be: >> >> def setval(array, index, value, axis=0): >> slices = [slice(None)] * len(array.shape) >> slices[axis] = index >> array[slices] = value >> >> (Adapted from the code for numpy.diff) >> >> Ryan >> > Jinx! > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http:// projects.scipy.org/mailman/listinfo/numpy-discussion From roberto at dealmeida.net Wed Nov 12 13:43:22 2008 From: roberto at dealmeida.net (Roberto De Almeida) Date: Wed, 12 Nov 2008 16:43:22 -0200 Subject: [Numpy-discussion] setting element In-Reply-To: <20081112183616.GA31700@encolpuis> References: <491B21CB.5020208@gmail.com> <20081112183616.GA31700@encolpuis> Message-ID: <10c662fe0811121043n3c50edfdiaca6801d0decb3a4@mail.gmail.com> On Wed, Nov 12, 2008 at 4:36 PM, Gabriel Gellner wrote: > On Wed, Nov 12, 2008 at 12:34:51PM -0600, Ryan May wrote: > > Charles ???? Doutriaux wrote: > > > Hello, > > > > > > I'm wondering if there's aquick way to do the following: > > > > > > s[:,5]=value > > > > > > in a "general" function > > > def setval(array,index,value,axis=0): > > > ## code here > > > > Assuming that axis specifies where the index goes, that would be: > > > > def setval(array, index, value, axis=0): > > slices = [slice(None)] * len(array.shape) > > slices[axis] = index > > array[slices] = value > > > > (Adapted from the code for numpy.diff) > > > > Ryan > > > Jinx! > Shouldn't s[...,index] = value work too? -------------- next part -------------- An HTML attachment was scrubbed... URL: From doutriaux1 at llnl.gov Wed Nov 12 14:07:37 2008 From: doutriaux1 at llnl.gov (=?UTF-8?Q?Charles_=D8=B3=D9=85=D9=8A=D8=B1_Doutriaux?=) Date: Wed, 12 Nov 2008 11:07:37 -0800 Subject: [Numpy-discussion] setting element In-Reply-To: <10c662fe0811121043n3c50edfdiaca6801d0decb3a4@mail.gmail.com> References: <491B21CB.5020208@gmail.com> <20081112183616.GA31700@encolpuis> <10c662fe0811121043n3c50edfdiaca6801d0decb3a4@mail.gmail.com> Message-ID: <29EC949B-FED3-4AC3-B034-431B8D941805@llnl.gov> Nope this one wouldn't have worked for me, it's basically axis=-1 but there might be additional dimensions after index C. On Nov 12, 2008, at 10:43 AM, Roberto De Almeida wrote: > On Wed, Nov 12, 2008 at 4:36 PM, Gabriel Gellner > wrote: > On Wed, Nov 12, 2008 at 12:34:51PM -0600, Ryan May wrote: > > Charles ???? Doutriaux wrote: > > > Hello, > > > > > > I'm wondering if there's aquick way to do the following: > > > > > > s[:,5]=value > > > > > > in a "general" function > > > def setval(array,index,value,axis=0): > > > ## code here > > > > Assuming that axis specifies where the index goes, that would be: > > > > def setval(array, index, value, axis=0): > > slices = [slice(None)] * len(array.shape) > > slices[axis] = index > > array[slices] = value > > > > (Adapted from the code for numpy.diff) > > > > Ryan > > > Jinx! > > Shouldn't > > s[...,index] = value > > work too? > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http:// projects.scipy.org/mailman/listinfo/numpy-discussion -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From david.huard at gmail.com Wed Nov 12 15:58:45 2008 From: david.huard at gmail.com (David Huard) Date: Wed, 12 Nov 2008 15:58:45 -0500 Subject: [Numpy-discussion] Changes to histogram semantics: follow-up Message-ID: <91cf711d0811121258p2d198091lf98bfd0d133b685f@mail.gmail.com> NumPy users, Revision 6020 proceeds with the planned changes to histogram semantics for the 1.3 release. This modification brings no change in functionality, only changes in the warnings being raised: No warning is printed for the default behaviour (new=None). new=False now raises a DeprecationWarning. Users relying on the old behaviour are encouraged to switch to the new semantics. new=True warns users that the `new` keyword will disappear in 1.4 Regards, David Huard -------------- next part -------------- An HTML attachment was scrubbed... URL: From dwf at cs.toronto.edu Wed Nov 12 16:32:44 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Wed, 12 Nov 2008 16:32:44 -0500 Subject: [Numpy-discussion] Segfault with dotblas on OS X 10.5.5/PPC (but not on Intel?) Message-ID: <05B4F711-E47A-497D-9A5B-D748B3B9CBA2@cs.toronto.edu> Hello folks, I'm doing some rather big matrix products on a G5, and ran into this. Strangely on the same OS version on my Intel laptop, this isn't an issue. Available memory isn't the problem either, I don't think, this machine is pretty beefy. I'm running the python.org 2.5.2 build of Python, and the latest SVN build of numpy (though the same thing happened with 1.1.0). Here's the debug info that OS X gives me: Process: Python [84664] Path: /Library/Frameworks/Python.framework/Versions/2.5/ Resources/Python.app/Contents/MacOS/Python Identifier: Python Version: ??? (???) Code Type: PPC (Native) Parent Process: bash [79733] Date/Time: 2008-11-12 16:21:39.983 -0500 OS Version: Mac OS X 10.5.5 (9F33) Report Version: 6 Exception Type: EXC_BAD_ACCESS (SIGSEGV) Exception Codes: KERN_INVALID_ADDRESS at 0x00000000129f0ca0 Crashed Thread: 2 Thread 0: 0 libSystem.B.dylib 0x9267ae4c __semwait_signal + 12 1 libBLAS.dylib 0x936fd58c ATL_join_tree + 28 2 libBLAS.dylib 0x936f21e8 ATL_dptgemm + 296 3 _dotblas.so 0x00fd67f0 dotblas_matrixproduct + 5040 (_dotblas.c:751) ... ... 
Thread 1: 0 libSystem.B.dylib 0x926d8ae4 select$DARWIN_EXTSN + 12 1 com.tcltk.tcllibrary 0x020e8de8 NotifierThreadProc + 448 Thread 2 Crashed: 0 libBLAS.dylib 0x93192d40 ATL_dgezero + 96 1 libBLAS.dylib 0x93301984 ATL_dNCmmIJK + 1684 2 libBLAS.dylib 0x932ffcb4 ATL_dgemm + 708 3 libBLAS.dylib 0x936f1dfc ATL_dptgemm0 + 108 4 libSystem.B.dylib 0x926b6658 _pthread_start + 316 Thread 3: 0 libBLAS.dylib 0x931b9fa8 ATL_dJIK0x0x0NN0x0x0_aX_bX + 2328 1 libBLAS.dylib 0x93302314 ATL_dNCmmJIK + 1652 2 libBLAS.dylib 0x932ffcb4 ATL_dgemm + 708 3 libBLAS.dylib 0x936f1dfc ATL_dptgemm0 + 108 4 libSystem.B.dylib 0x926b6658 _pthread_start + 316 Thread 4: 0 libBLAS.dylib 0x93192d70 ATL_dgezero + 144 1 libBLAS.dylib 0x93302314 ATL_dNCmmJIK + 1652 2 libBLAS.dylib 0x932ffcb4 ATL_dgemm + 708 3 libBLAS.dylib 0x936f1dfc ATL_dptgemm0 + 108 4 libSystem.B.dylib 0x926b6658 _pthread_start + 316 Thread 5: 0 libBLAS.dylib 0x93307440 ATL_drow2blkT_KB_a1 + 80 1 libBLAS.dylib 0x93307754 ATL_drow2blkT2_a1 + 324 2 libBLAS.dylib 0x93305928 ATL_dmmJIK + 2200 3 libBLAS.dylib 0x932ff2a8 ATLU_dusergemm + 104 4 libBLAS.dylib 0x932ffcb4 ATL_dgemm + 708 5 libBLAS.dylib 0x936f1dfc ATL_dptgemm0 + 108 6 libSystem.B.dylib 0x926b6658 _pthread_start + 316 From david.huard at gmail.com Wed Nov 12 17:09:02 2008 From: david.huard at gmail.com (David Huard) Date: Wed, 12 Nov 2008 17:09:02 -0500 Subject: [Numpy-discussion] Bugs in histogram and matplotlib-hist In-Reply-To: <268febdf0811121027t27ed0806tb90c8eb8c349809c@mail.gmail.com> References: <491AE8970200009F000387AA@dbnsmtp.ukzn.ac.za> <268febdf0811121027t27ed0806tb90c8eb8c349809c@mail.gmail.com> Message-ID: <91cf711d0811121409r77fa435dh2ad88fff4c9f2b66@mail.gmail.com> On Wed, Nov 12, 2008 at 1:27 PM, Mike Ressler wrote: > On Wed, Nov 12, 2008 at 4:30 AM, Scott Sinclair > wrote: > >> "Mike Ressler" 11/12/08 1:19 AM > >> I did an update to a Fedora 9 workstation yesterday that included > >> updating numpy to 1.2.0 and matplotlib 0.98.3 (python version is > > > > They reported that the Fedora 9 matplotlib package is 0.91.4, which > doesn't work with numpy-1.2.0. Perhaps the matplotlib on your system isn't > what you expect? > > Argh! The one thing I didn't doublecheck before posting - you are > correct, Scott; the Fedora box has 0.91.4; the machines I personally > use more regularly have 0.98.3 on Archlinux. I'll see what can be done > with package updating before I start patching. Thanks for pointing > this out. Mike, before patching, please take a look at the tickets related to histogram on the numpy trac. Previously, histogram returned only used the left bin edges and it caused a lot of problems with outliers and normalization. We are not going back there. Cheers, David > > > Mike > > -- > mike.ressler at alum.mit.edu > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike.ressler at alum.mit.edu Wed Nov 12 17:47:41 2008 From: mike.ressler at alum.mit.edu (Mike Ressler) Date: Wed, 12 Nov 2008 14:47:41 -0800 Subject: [Numpy-discussion] Solved Re: Bugs in histogram and matplotlib-hist Message-ID: <268febdf0811121447k620e4344g6b97d042076c3c20@mail.gmail.com> On Wed, Nov 12, 2008 at 2:09 PM, David Huard wrote: > > Mike, before patching, please take a look at the tickets related to > histogram on the numpy trac. 
Previously, histogram returned only used the > left bin edges and it caused a lot of problems with outliers and > normalization. We are not going back there. Our immediate problem has been solved by compiling matplotlib-0.98.3 by hand on Fedora 9 - in spite of its bleeding edge reputation, Fedora can get behind sometimes. The problem with histogram I reported also seems to have disappeared after completely uninstalling and reinstalling numpy and all its brethren. Something seems to have been weird about the recent Fedora updates. I did not mean to suggest any changes to the behavior of histogram, only to get it working on our system. Anyway, problem solved, we're happy. Sorry for the noise. (And thanks again Scott for the Fedora problem link.) Mike -- mike.ressler at alum.mit.edu From michael.abshoff at googlemail.com Wed Nov 12 18:05:50 2008 From: michael.abshoff at googlemail.com (Michael Abshoff) Date: Wed, 12 Nov 2008 15:05:50 -0800 Subject: [Numpy-discussion] Segfault with dotblas on OS X 10.5.5/PPC (but not on Intel?) In-Reply-To: <05B4F711-E47A-497D-9A5B-D748B3B9CBA2@cs.toronto.edu> References: <05B4F711-E47A-497D-9A5B-D748B3B9CBA2@cs.toronto.edu> Message-ID: <491B614E.8000706@gmail.com> David Warde-Farley wrote: > Hello folks, Hi David, > I'm doing some rather big matrix products on a G5, and ran into this. > Strangely on the same OS version on my Intel laptop, this isn't an > issue. Available memory isn't the problem either, I don't think, this > machine is pretty beefy. Can you define that? People's definition vary :) > I'm running the python.org 2.5.2 build of Python, and the latest SVN > build of numpy (though the same thing happened with 1.1.0). IIRC that is a universal build for 32 bit PPC and Intel, so depending on the problem size 32 bits might be insufficient. Cheers, Michael From millman at berkeley.edu Wed Nov 12 18:16:54 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Wed, 12 Nov 2008 15:16:54 -0800 Subject: [Numpy-discussion] Changes to histogram semantics: follow-up In-Reply-To: <91cf711d0811121258p2d198091lf98bfd0d133b685f@mail.gmail.com> References: <91cf711d0811121258p2d198091lf98bfd0d133b685f@mail.gmail.com> Message-ID: On Wed, Nov 12, 2008 at 12:58 PM, David Huard wrote: > Revision 6020 proceeds with the planned changes to histogram semantics for > the 1.3 release. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From Catherine.M.Moroney at jpl.nasa.gov Wed Nov 12 18:40:03 2008 From: Catherine.M.Moroney at jpl.nasa.gov (Catherine Moroney) Date: Wed, 12 Nov 2008 15:40:03 -0800 Subject: [Numpy-discussion] f2py questions Message-ID: Hello, I'm not sure if this is the right list for this question, but attempts to subscribe myself to f2py-users fail, as I never get an email back asking me to confirm my subscription request. Is that list no longer active? I'm trying to compile a medium-short Fortran90 code using f2py, and get the error below. The code compiles fine under the gfortran compiler, which I'm specifying on the f2py command-line. 
I'm using numpy 1.2.1, and python 2.5 on a 64-bit Linux AMD box, with the command: >>f2py --verbose --f90exec='/usr/bin/gfortran' --f90flags='--fixed- line-length-none' \ -c -m M23mod M23match.f File "/usr/lib64/python2.5/site-packages/numpy/f2py/crackfortran.py", line 419, in readfortrancode dowithline(finalline) File "/usr/lib64/python2.5/site-packages/numpy/f2py/ crackfortran.py", line 601, in crackline % (groupcounter) crackline: groupcounter(=0) is nonpositive. Check the blocks. Can somebody point me to the right place to ask this question, and/or give me some hints about tracking down this error. Is there someway to get f2py to give me the line number of the problematic statement in the Fortran code? Catherine From dave.seattledl at gmail.com Wed Nov 12 19:06:44 2008 From: dave.seattledl at gmail.com (Dave Lang) Date: Wed, 12 Nov 2008 16:06:44 -0800 Subject: [Numpy-discussion] f2py: Can anyone tell me why this simple call-back example fails? Message-ID: <83cb843f0811121606g44a05bfbm192d4774fb014efd@mail.gmail.com> My system...... Ubuntu v8.0.4; Gnu gcc v4.0.3; g95 v0.92 Python v2.5.2; Numpy v1.2.1; f2py v2.5972 Hope someone can see what's wrong here? thanks Dave Lang ..............Here is the F90 code........... ! PROGRAM TO DEMONSTRATE PYTHON-->F90 AND F90-->PYTHON INTERFACE SUBROUTINE CBSET(N, call_back) !f2py intent(in) N !f2py intent(callback) call_back EXTERNAL call_back INTEGER N, m ! IDENTIFY THAT FORTRAN IS EXECUTING & WHO CALLED CBSET PRINT *,'Enter CBSET.F, N=',N IF(N .EQ. 1) THEN PRINT *,' in CBSET: This Call originated to test call-back' PRINT *,' in CBSET: Make callback to PYTHON...EXPECT reply' m=1 CALL call_back(m) PRINT *,' in CBSET: Returned from call-back to PYTHON' END IF IF(N .EQ. 2) THEN PRINT *,' in CBSET: Call came from Python to test Py->For' END IF RETURN END ..............Here is the f2py created Signature code........... ! -*- f90 -*- ! Note: the context of this file is case sensitive. python module cbset__user__routines interface cbset_user_interface subroutine call_back(m) ! in :foo:CBSET.F90:cbset:unknown_interface intent(callback) call_back integer :: m end subroutine call_back end interface cbset_user_interface end python module cbset__user__routines python module foo ! in interface ! in :foo subroutine cbset(n) ! in :foo:CBSET.F90 use cbset__user__routines integer intent(in) :: n intent(callback) call_back external call_back end subroutine cbset end interface end python module foo ! This file was auto-generated with f2py (version:2_5972). ! See http://cens.ioc.ee/projects/f2py2e/ ..............Here is the Python Main Program code........... # Main Python program to test nominal "wrap" and "call back" import foo def main(): print 'HELLO, pymain is about to call CBSET to test f2py operation' print 'BUT, 1st test call_back subr from within python itself' call_back(979) #test "Python --> FORTRAN" (nominal wrap) from Python print 'NOW, call CBSET to test Python --> FORTRAN wrap' n = 2 foo.cbset(n, call_back) #test "FORTRAN --> Python" (call-back) scheme as seen from Python print 'NOW, call CBSET to test FORTRAN --> Python call-back' n = 1 foo.cbset(n, call_back) # define call-back routine def call_back(m): m +=20 print '..Confirming call_back, Python is entered from Fortran' print '..in call_back: m = ', m return # end of call-back defn if __name__ =='__main__': main() ..............Here is the execution result........... 
david at david-desktop:~/Pyth/ACB$ python pymain.py HELLO, pymain is about to call CBSET to test f2py operation BUT, 1st test call_back subr from within python itself ..Confirming call_back, Python is entered from Fortran ..in call_back: m = 999 NOW, call CBSET to test Python --> FORTRAN wrap Enter CBSET.F, N= 2 in CBSET: Call came from Python to test Py->For NOW, call CBSET to test FORTRAN --> Python call-back Enter CBSET.F, N= 1 in CBSET: this Call originated to test call-back in CBSET: Make callback to PYTHON...EXPECT reply Segmentation fault -------------- next part -------------- An HTML attachment was scrubbed... URL: From dwf at cs.toronto.edu Wed Nov 12 19:24:21 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Wed, 12 Nov 2008 19:24:21 -0500 Subject: [Numpy-discussion] Segfault with dotblas on OS X 10.5.5/PPC (but not on Intel?) In-Reply-To: <491B614E.8000706@gmail.com> References: <05B4F711-E47A-497D-9A5B-D748B3B9CBA2@cs.toronto.edu> <491B614E.8000706@gmail.com> Message-ID: <7F708347-8E5C-45DD-87CF-F58661A0A395@cs.toronto.edu> On 12-Nov-08, at 6:05 PM, Michael Abshoff wrote: >> I'm running the python.org 2.5.2 build of Python, and the latest SVN >> build of numpy (though the same thing happened with 1.1.0). > > IIRC that is a universal build for 32 bit PPC and Intel, so > depending on > the problem size 32 bits might be insufficient. Indeed, for the size of problem I *thought* I was running, 32 bit would be sufficient. In fact I had my data transposed and so was working with a much larger matrix which would put me past the 32-bit bound. My apologies, David From cournapeau at cslab.kecl.ntt.co.jp Wed Nov 12 20:18:13 2008 From: cournapeau at cslab.kecl.ntt.co.jp (David Cournapeau) Date: Thu, 13 Nov 2008 10:18:13 +0900 Subject: [Numpy-discussion] Segfault with dotblas on OS X 10.5.5/PPC (but not on Intel?) In-Reply-To: <7F708347-8E5C-45DD-87CF-F58661A0A395@cs.toronto.edu> References: <05B4F711-E47A-497D-9A5B-D748B3B9CBA2@cs.toronto.edu> <491B614E.8000706@gmail.com> <7F708347-8E5C-45DD-87CF-F58661A0A395@cs.toronto.edu> Message-ID: <1226539093.12236.1.camel@bbc8> On Wed, 2008-11-12 at 19:24 -0500, David Warde-Farley wrote: > > Indeed, for the size of problem I *thought* I was running, 32 bit > would be sufficient. In fact I had my data transposed and so was > working with a much larger matrix which would put me past the 32-bit > bound. Still, ideally, it should not segfault. Can you easily reproduce the segfault ? David From matthieu.brucher at gmail.com Thu Nov 13 05:39:42 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 13 Nov 2008 11:39:42 +0100 Subject: [Numpy-discussion] Numpy and MKL, update Message-ID: Hi, I got an answer from the Intel Support about the issues between MKL and Numpy (and Matlab and ...). To use MKL with Numpy, we must know explicitely use the static MKL libraries (libmkl_intel_lp64.a, libmkl_intel_thread.a and libmkl_core.a). The same applies for Scipy and every other Python module you build with the MKL. Matthieu -- Information System Engineer, Ph.D. Website: http://matthieu-brucher.developpez.com/ Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn: http://www.linkedin.com/in/matthieubrucher From dwf at cs.toronto.edu Thu Nov 13 07:46:17 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Thu, 13 Nov 2008 07:46:17 -0500 Subject: [Numpy-discussion] Segfault with dotblas on OS X 10.5.5/PPC (but not on Intel?) 
In-Reply-To: <1226539093.12236.1.camel@bbc8> References: <05B4F711-E47A-497D-9A5B-D748B3B9CBA2@cs.toronto.edu> <491B614E.8000706@gmail.com> <7F708347-8E5C-45DD-87CF-F58661A0A395@cs.toronto.edu> <1226539093.12236.1.camel@bbc8> Message-ID: On 12-Nov-08, at 8:18 PM, David Cournapeau wrote: > On Wed, 2008-11-12 at 19:24 -0500, David Warde-Farley wrote: > >> >> Indeed, for the size of problem I *thought* I was running, 32 bit >> would be sufficient. In fact I had my data transposed and so was >> working with a much larger matrix which would put me past the 32-bit >> bound. > > Still, ideally, it should not segfault. Can you easily reproduce the > segfault ? Odd, I went off to dinner and came to the exact same conclusion in my head. "Wait a second, why doesn't it raise a MemoryError?" I can try and isolate the problem later today into a simple code snippet, but it was definitely reproducible, like clockwork. David From nwagner at iam.uni-stuttgart.de Thu Nov 13 14:31:05 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 13 Nov 2008 20:31:05 +0100 Subject: [Numpy-discussion] numpy-docs and sphinx In-Reply-To: References: Message-ID: On Mon, 10 Nov 2008 08:42:16 +0100 "Nils Wagner" wrote: > Hi all, > > I tried to build the NumPy Reference Guide. > > svn/numpy-docs > make html > mkdir -p build > ./ext/autosummary_generate.py source/reference/*.rst \ > -p dump.xml -o source/reference/generated > Traceback (most recent call last): > File "./ext/autosummary_generate.py", line 18, in ? > from autosummary import import_by_name > File > "/data/home/nwagner/svn/numpy-docs/ext/autosummary.py", > line 59, in ? > import sphinx.addnodes, sphinx.roles, sphinx.builder > File > "/data/home/nwagner/local/lib/python2.5/site-packages/Sphinx-0.5dev_20081110-py2.5.egg/sphinx/__init__.py", > line 70 > '-c' not in (opt[0] for opt in opts): > ^ > SyntaxError: invalid syntax > make: *** [build/generate-stamp] Fehler 1 > > How can I fix that problem ? > > Nils > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion make html in numpy-docs works for me now, but make html in scipy-docs failed WARNING: /home/nwagner/svn/scipy-docs/source/stats.rst:206: (WARNING/2) toctree references unknown document u'generated/meanwhitneyu' WARNING: /home/nwagner/svn/scipy-docs/source/stats.rst:: (WARNING/2) failed to import anova WARNING: /home/nwagner/svn/scipy-docs/source/stats.rst:236: (WARNING/2) toctree references unknown document u'generated/anova' pickling environment... done checking consistency... WARNING: /home/nwagner/svn/scipy-docs/source/spatial.distance.rst:: document isn't included in any toctree done preparing documents... done writing output... cluster cluster.hierarchy Math extension error: latex exited with error: [stderr] [stdout] This is pdfeTeX, Version 3.141592-1.21a-2.2 (Web2C 7.5.4) entering extended mode (./math.tex LaTeX2e <2003/12/01> Babel and hyphenation patterns for american, french, german, ngerman, b ahasa, basque, bulgarian, catalan, croatian, czech, danish, dutch, esperanto, e stonian, finnish, greek, icelandic, irish, italian, latin, magyar, norsk, polis h, portuges, romanian, russian, serbian, slovak, slovene, spanish, swedish, tur kish, ukrainian, nohyphenation, loaded. 
(/usr/share/texmf/tex/latex/base/article.cls Document Class: article 2004/02/16 v1.4f Standard LaTeX document class (/usr/share/texmf/tex/latex/base/size12.clo)) (/usr/share/texmf/tex/latex/base/inputenc.sty (/usr/share/texmf/tex/latex/base/utf8.def (/usr/share/texmf/tex/latex/base/t1enc.dfu) (/usr/share/texmf/tex/latex/base/ot1enc.dfu) (/usr/share/texmf/tex/latex/base/omsenc.dfu))) (/usr/share/texmf/tex/latex/amsmath/amsmath.sty For additional information on amsmath, use the `?' option. (/usr/share/texmf/tex/latex/amsmath/amstext.sty (/usr/share/texmf/tex/latex/amsmath/amsgen.sty)) (/usr/share/texmf/tex/latex/amsmath/amsbsy.sty) (/usr/share/texmf/tex/latex/amsmath/amsopn.sty)) (/usr/share/texmf/tex/latex/amscls/amsthm.sty) (/usr/share/texmf/tex/latex/amsfonts/amssymb.sty (/usr/share/texmf/tex/latex/amsfonts/amsfonts.sty)) (/usr/share/texmf/tex/latex/tools/bm.sty) (/usr/share/texmf/tex/latex/preview/preview.sty No auxiliary output files. ) No file math.aux. Preview: Fontsize 12pt (/usr/share/texmf/tex/latex/amsfonts/umsa.fd) (/usr/share/texmf/tex/latex/amsfonts/umsb.fd) ! Display math should end with $$. ` l.14 $$ij$` th entry is the cophenetic distance between ! Missing $ inserted. $ l.16 \end{preview} [1] ) (see the transcript file for additional information) Output written on math.dvi (1 page, 332 bytes). Transcript written on math.log. make: *** [html] Fehler 1

Nils

From f.yw at hotmail.com Thu Nov 13 15:20:55 2008 From: f.yw at hotmail.com (frank wang) Date: Thu, 13 Nov 2008 13:20:55 -0700 Subject: [Numpy-discussion] Numpy Example List With Doc cannot be printed In-Reply-To: References: Message-ID: Hi, From the www.scipy.org web site, I tried to print the "Numpy Example List with Doc". I can read the document on the computer, but when I print it, all pages except the first few come out empty. Does anyone know the reason? Thanks Frank

From f.yw at hotmail.com Thu Nov 13 15:23:44 2008 From: f.yw at hotmail.com (frank wang) Date: Thu, 13 Nov 2008 13:23:44 -0700 Subject: [Numpy-discussion] Numpy and MKL, update In-Reply-To: References: Message-ID: Hi, Can you provide a working example of building Numpy with MKL on Windows and Linux? The reason I am thinking of building it myself is that I need to make the speed match Matlab. Thanks, frank

> Date: Thu, 13 Nov 2008 11:39:42 +0100
> From: matthieu.brucher at gmail.com
> To: numpy-discussion at scipy.org
> Subject: [Numpy-discussion] Numpy and MKL, update
>
> Hi,
>
> I got an answer from the Intel Support about the issues between MKL
> and Numpy (and Matlab and ...).
> To use MKL with Numpy, we must now explicitly use the static MKL
> libraries (libmkl_intel_lp64.a, libmkl_intel_thread.a and
> libmkl_core.a). The same applies for Scipy and every other Python
> module you build with the MKL.
>
> Matthieu
> --
> Information System Engineer, Ph.D.
> Website: http://matthieu-brucher.developpez.com/
> Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
> LinkedIn: http://www.linkedin.com/in/matthieubrucher
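As for where such MKL settings would go: numpy's build reads an optional site.cfg file next to setup.py, and an [mkl] section is one of the things numpy.distutils.system_info knows how to look for. The fragment below is a hypothetical sketch only; the install path, version number, and exact library list are assumptions to adapt to your own MKL install (and the static-vs-shared question from the quoted message was still unresolved at the time):

[mkl]
# Hypothetical paths for an MKL 10.x install on x86-64; adjust to your system.
library_dirs = /opt/intel/mkl/10.0.3.020/lib/em64t
include_dirs = /opt/intel/mkl/10.0.3.020/include
# Library names follow the list quoted above; the threading runtime
# (guide vs. iomp5) depends on the MKL version, so treat this as a guess.
mkl_libs = mkl_intel_lp64, mkl_intel_thread, mkl_core, guide
lapack_libs = mkl_lapack

After editing site.cfg, running "python setup.py config" should report which sections were actually detected before you commit to a full build.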
From matthieu.brucher at gmail.com Thu Nov 13 15:35:00 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 13 Nov 2008 21:35:00 +0100 Subject: [Numpy-discussion] Numpy and MKL, update In-Reply-To: References: Message-ID: 2008/11/13 frank wang : > Hi, > > Can you provide a working example to build Numpy with MKL in window and > linux? > The reason I am thinking to build the system is that I need to make the > speed match with matlab. > > Thanks > > frank Hi, I don't know how to do that ;) The Intel Support is investigating this as well. Matthieu -- Information System Engineer, Ph.D. Website: http://matthieu-brucher.developpez.com/ Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn: http://www.linkedin.com/in/matthieubrucher

From pav at iki.fi Thu Nov 13 17:25:05 2008 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 13 Nov 2008 22:25:05 +0000 (UTC) Subject: [Numpy-discussion] numpy-docs and sphinx References: Message-ID: Thu, 13 Nov 2008 20:31:05 +0100, Nils Wagner wrote: [clip] > make html in numpy-docs works for me now, but make html in scipy-docs > failed Yep, there was bad Latex in the docstrings of cluster.hierarchy, which made sphinx.ext.pngmath crash. I've fixed the docstrings now. Pauli

From rob.clewley at gmail.com Thu Nov 13 17:53:42 2008 From: rob.clewley at gmail.com (Rob Clewley) Date: Thu, 13 Nov 2008 17:53:42 -0500 Subject: [Numpy-discussion] ANN: PyDSTool 0.87 released Message-ID: Dear Scipy and Numpy user lists, The latest update to the open-source python dynamical systems modeling toolbox, PyDSTool 0.87, has been released on Sourceforge. http://www.sourceforge.net/projects/pydstool/ Major highlights are:

* Implemented a more natural hybrid model specification format
* Supports quadratic interpolation of data points in Trajectory objects (courtesy of Anne Archibald's poly_int class)
* Supports more sophisticated data-driven model inference
* Improved efficiency of ODE solvers
* Various bug fixes and other API improvements
* New demonstration scripts and more commenting for existing scripts in PyDSTool/tests/
* New wiki tutorial (courtesy of Daniel Martí)

This is a modest update in preparation for a substantial upgrade at version 0.90, which will move symbolic expression support over to SymPy, and greatly improve the implementation of C-based ODE integrators. We are also trying to incorporate basic boundary-value problem solving, and we aim to further improve the parameter estimation / model inference tools to work effectively with OpenOpt. For installation and setting up, see the GettingStarted page at our wiki, http://pydstool.sourceforge.net The download contains full API documentation, BSD license information, and further details of recent code changes. Further documentation is on the wiki. As ever, all feedback is welcome as we try to find time to improve our code base. If you would like to contribute effort in improving the tutorial and wiki documentation, or to the code itself, please contact me. -Rob Clewley

From ezindy at gmail.com Thu Nov 13 18:32:17 2008 From: ezindy at gmail.com (Egor Zindy) Date: Thu, 13 Nov 2008 23:32:17 +0000 Subject: [Numpy-discussion] ANN: I wrote some Numpy + SWIG + MinGW simple examples Message-ID: <491CB901.3060604@gmail.com> Hello list!
To get my head round the numpy.i interface for SWIG, I wrote some simple examples and documented them as much as possible. The result is here: http://code.google.com/p/ezwidgets/wiki/NumpySWIGMinGW I finally got round testing ARGOUTVIEW_ARRAY1 today, so it's time to ask for some feedback. Questions I still have: * Any way of doing array_out = function(array_in) without using ARGOUTVIEW_ARRAY1? * Any clean way of generating an exception on failed memory allocations? Hope this helps someone else... and thank you Bill Spotz for your original article and comments! Regards, Egor From dagss at student.matnat.uio.no Thu Nov 13 19:25:32 2008 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Fri, 14 Nov 2008 01:25:32 +0100 Subject: [Numpy-discussion] numpy, Py_ssize_t, cython and 64 bits python 2.4 In-Reply-To: <5b8d13220811100238v1fb48d6eq68a0c7954f3ced8f@mail.gmail.com> References: <4916897C.7090901@ar.media.kyoto-u.ac.jp> <5b8d13220811091044w13a07bfdhc6a0b1e207425e4e@mail.gmail.com> <49177270.5020405@student.matnat.uio.no> <5b8d13220811091938m59c07b4l49e9db36b3c19bb6@mail.gmail.com> <5b8d13220811100238v1fb48d6eq68a0c7954f3ced8f@mail.gmail.com> Message-ID: <491CC57C.2020705@student.matnat.uio.no> For mailing list archival purposes, I'm posting the conclusion of this story: An update to Cython which also works with NumPy/Python2.4/64-bit can now be retrieved from http://hg.cython.org/cython That Mercurial repo contains previous release + bugfixes deemed to be very safe, so I recommend using this instead for SciPy etc. until the next Cython release. Dag Sverre David Cournapeau wrote: > On Mon, Nov 10, 2008 at 12:38 PM, David Cournapeau wrote: > >> This sounds like the right solution, but OTOH, since I am not familiar >> with the cython codebase at all, I will prepare a patch following the >> first solution, and I guess you can always improve it later. > > Ok, I made a patch which makes strides/shape to be Py_ssize_t arrays > instead of npy_intp, and create temporary buffers when > sizeof(Py_ssize_t) != sizeof(npy_intp_t). I have not tested it > thouroughly, but it made the tests which failed work on python 2.4 on > RHEL 64 bits. > > Can I have an account to put the mercurial bundle somewhere ? > > cheers, > > David > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion -- Dag Sverre From cournape at gmail.com Thu Nov 13 20:47:03 2008 From: cournape at gmail.com (David Cournapeau) Date: Fri, 14 Nov 2008 10:47:03 +0900 Subject: [Numpy-discussion] Numpy and MKL, update In-Reply-To: References: Message-ID: <5b8d13220811131747i7c51173ch2587cd4aefe581b8@mail.gmail.com> On Fri, Nov 14, 2008 at 5:23 AM, frank wang wrote: > Hi, > > Can you provide a working example to build Numpy with MKL in window and > linux? > The reason I am thinking to build the system is that I need to make the > speed match with matlab. The MKL will only help you for linear algebra, and more specifically for big matrices. If you build your own atlas, you can easily match matlab speed in that area, I think. 
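A quick way to see what a given build actually delivers is to time a big matrix product and check which BLAS got picked up. A small sketch, in the Python 2 of the day; the matrix size and repeat counts are arbitrary, and 2*n**3 is the usual flop count for an n-by-n matrix product:

import numpy as np
import timeit

np.__config__.show()    # reports which BLAS/LAPACK this numpy was built against

n = 1500                # arbitrary test size, big enough that BLAS dominates
a = np.random.randn(n, n)
b = np.random.randn(n, n)

t = timeit.Timer('np.dot(a, b)', 'from __main__ import np, a, b')
best = min(t.repeat(repeat=3, number=5)) / 5
print 'dot of two %dx%d matrices: %.3f s (~%.2f GFLOP/s)' % (n, n, best, 2.0 * n**3 / best / 1e9)

Comparing that GFLOP/s figure against the same product timed in matlab on the same machine tells you quickly whether the BLAS, rather than anything numpy-specific, is where any gap lives.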
David From dwf at cs.toronto.edu Thu Nov 13 20:58:32 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Thu, 13 Nov 2008 20:58:32 -0500 Subject: [Numpy-discussion] Numpy and MKL, update In-Reply-To: <5b8d13220811131747i7c51173ch2587cd4aefe581b8@mail.gmail.com> References: <5b8d13220811131747i7c51173ch2587cd4aefe581b8@mail.gmail.com> Message-ID: On 13-Nov-08, at 8:47 PM, David Cournapeau wrote: > On Fri, Nov 14, 2008 at 5:23 AM, frank wang wrote: >> Hi, >> >> Can you provide a working example to build Numpy with MKL in window >> and >> linux? >> The reason I am thinking to build the system is that I need to make >> the >> speed match with matlab. > > The MKL will only help you for linear algebra, and more specifically > for big matrices. If you build your own atlas, you can easily match > matlab speed in that area, I think. Also make sure that you have a compiler suitable for building ATLAS. Basically, if your gcc is older than 4.2, *build a newer gcc and gfortran first*, then build ATLAS with that. From http://math-atlas.sourceforge.net/errata.html "The ATLAS architectural defaults were all build with gcc 4.2, except on IRIX/MIPS, where gcc was outperformed by SGI's cc. If you use gcc 4.0 or 4.1, then your performance will be cut roughly in half on all x86 platforms, so such users should install and use gcc 4.2." Kind of a big deal. ;) - David From michael.abshoff at googlemail.com Thu Nov 13 21:07:07 2008 From: michael.abshoff at googlemail.com (Michael Abshoff) Date: Thu, 13 Nov 2008 18:07:07 -0800 Subject: [Numpy-discussion] Numpy and MKL, update In-Reply-To: <5b8d13220811131747i7c51173ch2587cd4aefe581b8@mail.gmail.com> References: <5b8d13220811131747i7c51173ch2587cd4aefe581b8@mail.gmail.com> Message-ID: <491CDD4B.6060601@gmail.com> David Cournapeau wrote: > On Fri, Nov 14, 2008 at 5:23 AM, frank wang wrote: >> Hi, Hi, >> Can you provide a working example to build Numpy with MKL in window and >> linux? >> The reason I am thinking to build the system is that I need to make the >> speed match with matlab. > > The MKL will only help you for linear algebra, and more specifically > for big matrices. If you build your own atlas, you can easily match > matlab speed in that area, I think. That is pretty much true in my experience for anything but Core2 Intel CPUs where GotoBLAS and the latest MKL have about a 25% advantage for large problems. That is to a large extend fixed in the development version of ATLAS, i.e. 3.9.4, where on Core2 the advantage melts to about 5% to 8%. Clint Whaley gave a talk at the BOF linear algebra session of Sage Days 11 this week, but his slides are not up in the wiki yet. The advantage of the MKL is that one library works more or less optimal on all platforms, i.e. with and without SSE2 for example since the "right" routines are selected at run time. That makes the MKL much larger, too, so depending on what your goal is either one could be "better". 
> David Cheers, Michael > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > From rob.clewley at gmail.com Thu Nov 13 21:27:31 2008 From: rob.clewley at gmail.com (Rob Clewley) Date: Thu, 13 Nov 2008 21:27:31 -0500 Subject: [Numpy-discussion] ANN: I wrote some Numpy + SWIG + MinGW simple examples In-Reply-To: <491CB901.3060604@gmail.com> References: <491CB901.3060604@gmail.com> Message-ID: On Thu, Nov 13, 2008 at 6:32 PM, Egor Zindy wrote: > To get my head round the numpy.i interface for SWIG, I wrote some simple > examples and documented them as much as possible. The result is here: Awesome. That will be very helpful to me, and I'm sure to others too. I know some don't seem to consider SWIG so useful these days, but I rely on it a lot, and it does its job pretty well. Apart from some of the details concerning windows paths and mingw, it looks like your information is equally useful for non-windows SWIG usage too. I'll put up some links to it in due course, FWIW. Thanks, Rob From charlesr.harris at gmail.com Thu Nov 13 21:37:50 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 13 Nov 2008 19:37:50 -0700 Subject: [Numpy-discussion] Numpy and MKL, update In-Reply-To: <5b8d13220811131747i7c51173ch2587cd4aefe581b8@mail.gmail.com> References: <5b8d13220811131747i7c51173ch2587cd4aefe581b8@mail.gmail.com> Message-ID: On Thu, Nov 13, 2008 at 6:47 PM, David Cournapeau wrote: > On Fri, Nov 14, 2008 at 5:23 AM, frank wang wrote: > > Hi, > > > > Can you provide a working example to build Numpy with MKL in window and > > linux? > > The reason I am thinking to build the system is that I need to make the > > speed match with matlab. > > The MKL will only help you for linear algebra, and more specifically > for big matrices. If you build your own atlas, you can easily match > matlab speed in that area, I think. > There used to be an ATLAS folder in Matlab. I expect they still use it for non-Intel architectures. Chuc -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Thu Nov 13 23:05:11 2008 From: cournape at gmail.com (David Cournapeau) Date: Fri, 14 Nov 2008 13:05:11 +0900 Subject: [Numpy-discussion] Numpy and MKL, update In-Reply-To: References: <5b8d13220811131747i7c51173ch2587cd4aefe581b8@mail.gmail.com> Message-ID: <5b8d13220811132005g33896fb4yfb6ec0a6c14a6ec9@mail.gmail.com> On Fri, Nov 14, 2008 at 10:58 AM, David Warde-Farley wrote: > > On 13-Nov-08, at 8:47 PM, David Cournapeau wrote: > >> On Fri, Nov 14, 2008 at 5:23 AM, frank wang wrote: >>> Hi, >>> >>> Can you provide a working example to build Numpy with MKL in window >>> and >>> linux? >>> The reason I am thinking to build the system is that I need to make >>> the >>> speed match with matlab. >> >> The MKL will only help you for linear algebra, and more specifically >> for big matrices. If you build your own atlas, you can easily match >> matlab speed in that area, I think. > > > Also make sure that you have a compiler suitable for building ATLAS. > Basically, if your gcc is older than 4.2, *build a newer gcc and > gfortran first*, then build ATLAS with that. > > From http://math-atlas.sourceforge.net/errata.html > "The ATLAS architectural defaults were all build with gcc 4.2, except > on IRIX/MIPS, where gcc was outperformed by SGI's cc. 
If you use gcc > 4.0 or 4.1, then your performance will be cut roughly in half on all > x86 platforms, so such users should install and use gcc 4.2." > I am not sure it is as relevant as before on recent CPU. For example, on a core 2 duo on RHEL (which does not have gcc 4.2), atlas can reach more decent performances. The problem was big for Pentium 4 (in part because the x87 FPU was bad on that architecture, and other oddities like a deficient cache L1). David From cournape at gmail.com Thu Nov 13 23:11:07 2008 From: cournape at gmail.com (David Cournapeau) Date: Fri, 14 Nov 2008 13:11:07 +0900 Subject: [Numpy-discussion] Numpy and MKL, update In-Reply-To: <491CDD4B.6060601@gmail.com> References: <5b8d13220811131747i7c51173ch2587cd4aefe581b8@mail.gmail.com> <491CDD4B.6060601@gmail.com> Message-ID: <5b8d13220811132011l51521177va7fd10960196cc13@mail.gmail.com> On Fri, Nov 14, 2008 at 11:07 AM, Michael Abshoff wrote: > David Cournapeau wrote: >> On Fri, Nov 14, 2008 at 5:23 AM, frank wang wrote: >>> Hi, > > Hi, > >>> Can you provide a working example to build Numpy with MKL in window and >>> linux? >>> The reason I am thinking to build the system is that I need to make the >>> speed match with matlab. >> >> The MKL will only help you for linear algebra, and more specifically >> for big matrices. If you build your own atlas, you can easily match >> matlab speed in that area, I think. > > That is pretty much true in my experience for anything but Core2 Intel > CPUs where GotoBLAS and the latest MKL have about a 25% advantage for > large problems. Note that I never said that ATLAS was faster than MKL/GotoBLAS :) I said you could match matlab performances (which itself, up to 6.* at least, used ATLAS; you could increase matlab performances by using your own ATLAS BTW). I don't think 25 % matter that much, because if it does, then you should not use python anyway in many cases (depends on the kind of problems of course, but I don't think most scientific problems reduce to just matrix product/inversion). > The advantage of the MKL is that one library works more or less optimal > on all platforms, i.e. with and without SSE2 for example since the > "right" routines are selected at run time. Agreed. As a numpy/scipy developer, I would actually be much more interested in work into that direction for ATLAS than trying to get a few % of peak speed. Deployment of ATLAS is really difficult ATM, and it means that practically, we lose a lot of performances because for distribution, you can't tune for every CPU out there, so we just use safe defaults. Same for linux distributions. It is a shame that Apple did not open source their Accelerate framework (based on ATLAS, at least for the BLAS/LAPACK part), because that's exactly what they did. David From michael.abshoff at googlemail.com Thu Nov 13 23:37:05 2008 From: michael.abshoff at googlemail.com (Michael Abshoff) Date: Thu, 13 Nov 2008 20:37:05 -0800 Subject: [Numpy-discussion] Numpy and MKL, update In-Reply-To: <5b8d13220811132011l51521177va7fd10960196cc13@mail.gmail.com> References: <5b8d13220811131747i7c51173ch2587cd4aefe581b8@mail.gmail.com> <491CDD4B.6060601@gmail.com> <5b8d13220811132011l51521177va7fd10960196cc13@mail.gmail.com> Message-ID: <491D0071.8010503@gmail.com> David Cournapeau wrote: > On Fri, Nov 14, 2008 at 11:07 AM, Michael Abshoff > wrote: >> David Cournapeau wrote: >>> On Fri, Nov 14, 2008 at 5:23 AM, frank wang wrote: >>>> Hi, >> Hi, >> >>>> Can you provide a working example to build Numpy with MKL in window and >>>> linux? 
>>>> linux? >>>> The reason I am thinking to build the system is that I need to make the >>>> speed match with matlab. >>> The MKL will only help you for linear algebra, and more specifically >>> for big matrices. If you build your own atlas, you can easily match >>> matlab speed in that area, I think. >> That is pretty much true in my experience for anything but Core2 Intel >> CPUs where GotoBLAS and the latest MKL have about a 25% advantage for >> large problems. > > Note that I never said that ATLAS was faster than MKL/GotoBLAS :) :) > I said you could match matlab performances (which itself, up to 6.* at > least, used ATLAS; you could increase matlab performances by using > your own ATLAS BTW). Yes, back in the day I got a threefold speedup for a certain workload in Matlab by replacing the BLAS and UMFPACK libraries. > I don't think 25 % matter that much, because if > it does, then you should not use python anyway in many cases (depends > on the kind of problems of course, but I don't think most scientific > problems reduce to just matrix product/inversion). Sure, I agree here. A 25% difference in dgemm performance is significant for some workloads, but if you spend the vast majority of time in Python code it won't matter. And sometimes it is way more than that - see my remarks below. >> The advantage of the MKL is that one library works more or less optimal >> on all platforms, i.e. with and without SSE2 for example since the >> "right" routines are selected at run time. > > Agreed. As a numpy/scipy developer, I would actually be much more > interested in work into that direction for ATLAS than trying to get a > few % of peak speed. Note that selecting a non-SSE2 version of ATLAS can cause a significant slowdown. One day not too long ago, Ondrej Certik and I were sitting in IRC in #sage-devel benchmarking some things. His Debian install was a factor of 12 slower than the same software that he had built with Sage, and in the end it boiled down to non-SSE2 ATLAS vs. SSE2 ATLAS. That is a freak case, but I am sure more than enough people will get bitten by that issue since they installed "ATLAS" in Debian but did not know about SSE2 ATLAS. And a while back someone compared various numerical closed and open source projects in an article for some renowned Linux magazine, among them Sage. They ran a bunch of numerical benchmarks, namely FFT and SVD, and Sage via numpy blew Matlab away by a factor of three for the SVD (the FFT looked not so good because Sage is still using GSL for FFT, but we will change that). Obviously that was not because numpy was clever about the SVD used (I know there are several versions in LAPACK, but the performance difference is usually small), but because Matlab used some generic build of BLAS (it was unclear from the article whether it was MKL or ATLAS) and Sage used a custom-built SSE2 version. The reviewer expressed admiration for numpy and its clever SVD implementation - Sigh.

> Deployment of ATLAS is really difficult ATM, and > it means that practically, we lose a lot of performances because for > distribution, you can't tune for every CPU out there, so we just use > safe defaults. Same for linux distributions. It is a shame that Apple > did not open source their Accelerate framework (based on ATLAS, at > least for the BLAS/LAPACK part), because that's exactly what they did. Yes, Clint has been in contact with Apple, but never got anything out of them. Too bad.
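If you want to check whether you are in that boat yourself, two quick Linux-specific checks help: does the CPU support SSE2 at all, and which BLAS does your numpy actually load at run time? The paths below are assumptions that vary by distro and Python version:

grep -m1 flags /proc/cpuinfo | grep -o sse2   # prints "sse2" if the CPU supports it
ldd /usr/lib/python2.5/site-packages/numpy/core/_dotblas.so | grep -i blas
# On Debian-style systems /usr/lib/libblas.so.3 is typically a symlink that
# may or may not point at the SSE2 ATLAS build:
ls -l /usr/lib/libblas.so.3

If the ldd output resolves to a plain reference BLAS or a non-SSE2 ATLAS on an SSE2-capable CPU, that alone can explain an order-of-magnitude gap like the one above.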
The new ATLAS release should fix some build issues regarding the dreaded timing tolerance issue, and it will also work much better with threads since Clint rewrote the threading module so that memory allocation is no longer the bottleneck. He also added native threading support for Windows, but that is not being tested yet, so hopefully it will work in a future version. The main issue here is that for assembly support Clint relies on gcc, which is hardcoded into the Makefiles; we discussed various options for how that can be avoided, but so far no progress can be reported. > David Cheers, Michael > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion >

From charlesr.harris at gmail.com Fri Nov 14 00:09:13 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 13 Nov 2008 22:09:13 -0700 Subject: [Numpy-discussion] Questions on error handling/refcounting in some ufunc object loops. Message-ID: Hi All (but mostly Travis), I've been looking at the object loops now that they are cleaned up, and it looks to me like the error handling is inconsistent and there are some reference counting errors. Warning, code ahead!

/**begin repeat
 * #kind = equal, not_equal, greater, greater_equal, less, less_equal#
 * #OP = EQ, NE, GT, GE, LT, LE#
 */
static void
OBJECT_@kind@(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func))
{
    BINARY_LOOP {
        PyObject *in1 = *(PyObject **)ip1;
        PyObject *in2 = *(PyObject **)ip2;
        *((Bool *)op1) = (Bool) PyObject_RichCompareBool(in1, in2, Py_@OP@);
                                ^^^^^^^^^^^^^^^^^^^^^^^^
    }
}
/**end repeat**/

PyObject_RichCompareBool returns -1 on error. This might be OK, but we probably don't want -1 in a boolean. Also, this behavior is inconsistent with some other loops.

static void
OBJECT_sign(char **args, intp *dimensions, intp *steps, void *NPY_UNUSED(func))
{
    PyObject *zero = PyInt_FromLong(0);
    UNARY_LOOP {
        PyObject *in1 = *(PyObject **)ip1;
        *((PyObject **)op1) = PyInt_FromLong(PyObject_Compare(in1, zero));
                              ^^^^^^^^^^^^^^
    }
    Py_DECREF(zero);
}

PyInt_FromLong returns NULL on error. Also, it looks to me like the object in the array that is replaced needs its reference count decreased.

/*UFUNC_API*/
static void
PyUFunc_O_O_method(char **args, intp *dimensions, intp *steps, void *func)
{
    char *meth = (char *)func;
    UNARY_LOOP {
        PyObject *in1 = *(PyObject **)ip1;
        PyObject **out = (PyObject **)op1;
        PyObject *ret = PyObject_CallMethod(in1, meth, NULL);
        if (ret == NULL) {
            return;
        }
        Py_XDECREF(*out);
        *out = ret;
    }
}

Here the return value is checked and there is an early return on error.

/*UFUNC_API*/
static void
PyUFunc_O_O(char **args, intp *dimensions, intp *steps, void *func)
{
    unaryfunc f = (unaryfunc)func;
    UNARY_LOOP {
        PyObject *in1 = *(PyObject **)ip1;
        PyObject **out = (PyObject **)op1;
        PyObject *ret;
        if (in1 == NULL) {
            return;
        }
        ret = f(in1);
        if ((ret == NULL) || PyErr_Occurred()) {
            return;
        }
        Py_XDECREF(*out);
        *out = ret;
    }
}

Here the input value is also checked. The return value is checked, and PyErr_Occurred is also checked. I'm pretty sure about the reference leak. But what should be the standard for checking arguments and error returns in these object loops? Chuck -------------- next part -------------- An HTML attachment was scrubbed...
URL:

From david at ar.media.kyoto-u.ac.jp Fri Nov 14 00:31:39 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Fri, 14 Nov 2008 14:31:39 +0900
Subject: [Numpy-discussion] Questions on error handling/refcounting in some ufunc object loops.
In-Reply-To:
References:
Message-ID: <491D0D3B.6020609@ar.media.kyoto-u.ac.jp>

Charles R Harris wrote:
> I'm pretty sure about the reference leak. But what should be the standard for checking arguments and error returns in these object loops?

I was wondering the same when I worked on that code a few weeks ago; since the ufuncs "return" void, I wonder how feasible it would be to return an int instead for error messaging. Since the ufuncs are not used directly outside numpy, it should not break the API/ABI in any way?

I think more generally it would be nice to have a common error system for the pure C code, because it is a bit of a mess right now. But that would again be a lot of work :)

David

From Catherine.M.Moroney at jpl.nasa.gov Fri Nov 14 00:47:03 2008
From: Catherine.M.Moroney at jpl.nasa.gov (Catherine Moroney)
Date: Thu, 13 Nov 2008 21:47:03 -0800
Subject: [Numpy-discussion] array indexing question
Message-ID:

Hello,

I know that there must be a fast way of solving this problem, but I don't know what it is. I have three arrays, with dimensions:

A[np]
L[np]
S[np]

where L and S indicate the (line, sample) co-ordinates for each of the "np" rows. I want to reconstruct the contents of [A] as a 2-dimensional matrix.

The brain-dead version of what I want is:

results = numpy.zeros((some_size, some_other_size))
for idata in xrange(0, np):
    results[L[idata], S[idata]] = A[idata]

but I'm dealing with large arrays and this is very slow. How do I speed it up?

The array A is the result of performing kmeans clustering on a large 2-d image, presented as a 1-d vector, and I want the result as a 2-d image. In the real application, I won't have data for each point in the 2-d image, so np will be less than the full image size.

Catherine

From david at ar.media.kyoto-u.ac.jp Fri Nov 14 00:38:27 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Fri, 14 Nov 2008 14:38:27 +0900
Subject: [Numpy-discussion] numpy, Py_ssize_t, cython and 64 bits python 2.4
In-Reply-To: <491CC57C.2020705@student.matnat.uio.no>
References: <4916897C.7090901@ar.media.kyoto-u.ac.jp> <5b8d13220811091044w13a07bfdhc6a0b1e207425e4e@mail.gmail.com> <49177270.5020405@student.matnat.uio.no> <5b8d13220811091938m59c07b4l49e9db36b3c19bb6@mail.gmail.com> <5b8d13220811100238v1fb48d6eq68a0c7954f3ced8f@mail.gmail.com> <491CC57C.2020705@student.matnat.uio.no>
Message-ID: <491D0ED3.2010307@ar.media.kyoto-u.ac.jp>

Dag Sverre Seljebotn wrote:
> For mailing list archival purposes, I'm posting the conclusion of this story: An update to Cython which also works with NumPy/Python2.4/64-bit can now be retrieved from http://hg.cython.org/cython
>
> That Mercurial repo contains previous release + bugfixes deemed to be very safe, so I recommend using this instead for SciPy etc. until the next Cython release.

Great. Thank you for your effort!
cheers,

David

From scott.sinclair.za at gmail.com Fri Nov 14 00:58:54 2008
From: scott.sinclair.za at gmail.com (Scott Sinclair)
Date: Fri, 14 Nov 2008 07:58:54 +0200
Subject: [Numpy-discussion] array indexing question
In-Reply-To:
References:
Message-ID: <6a17e9ee0811132158w317253d7ibc2afade88aa41fd@mail.gmail.com>

2008/11/14 Catherine Moroney:
> I have three arrays, with dimensions:
>
> A[np]
> L[np]
> S[np]
>
> where L and S indicate the (line, sample) co-ordinates for each of the "np" rows. I want to reconstruct the contents of [A] as a 2-dimensional matrix.
>
> The brain-dead version of what I want is:
>
> results = numpy.zeros((some_size, some_other_size))
> for idata in xrange(0, np):
>     results[L[idata], S[idata]] = A[idata]

You can easily avoid the loop. Does this do what you want?

>>> L = np.array([0, 2, 4])
>>> S = np.array([0, 1, 3])
>>> A = np.array([1.3, 2.3, 3.3])
>>> result = np.zeros((5, 5))
>>> result
array([[ 0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.]])
>>> result[L, S] = A
>>> result
array([[ 1.3,  0. ,  0. ,  0. ,  0. ],
       [ 0. ,  0. ,  0. ,  0. ,  0. ],
       [ 0. ,  2.3,  0. ,  0. ,  0. ],
       [ 0. ,  0. ,  0. ,  0. ,  0. ],
       [ 0. ,  0. ,  0. ,  3.3,  0. ]])
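One thing to be aware of: if the same (L, S) pair occurs more than once, that location ends up holding the value from the last occurrence in the index arrays. A quick sketch, continuing the session above:

>>> result[np.array([0, 0]), np.array([1, 1])] = np.array([7.0, 9.0])
>>> result[0, 1]
9.0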
Cheers,
Scott

From robert.kern at gmail.com Fri Nov 14 00:58:59 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 13 Nov 2008 23:58:59 -0600
Subject: [Numpy-discussion] Questions on error handling/refcounting in some ufunc object loops.
In-Reply-To: <491D0D3B.6020609@ar.media.kyoto-u.ac.jp>
References: <491D0D3B.6020609@ar.media.kyoto-u.ac.jp>
Message-ID: <3d375d730811132158m341f883yc19e1d413e065470@mail.gmail.com>

On Thu, Nov 13, 2008 at 23:31, David Cournapeau wrote:
> Charles R Harris wrote:
>> I'm pretty sure about the reference leak. But what should be the standard for checking arguments and error returns in these object loops?
>
> I was wondering the same when I worked on that code a few weeks ago; since the ufuncs "return" void, I wonder how feasible it would be to return an int instead for error messaging. Since the ufuncs are not used directly outside numpy, it should not break the API/ABI in any way?

There are ufunc loop implementations outside of numpy. It would break the API, specifically the typedef PyUFuncGenericFunction.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From charlesr.harris at gmail.com Fri Nov 14 01:00:31 2008
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Thu, 13 Nov 2008 23:00:31 -0700
Subject: [Numpy-discussion] Questions on error handling/refcounting in some ufunc object loops.
In-Reply-To: <491D0D3B.6020609@ar.media.kyoto-u.ac.jp>
References: <491D0D3B.6020609@ar.media.kyoto-u.ac.jp>
Message-ID:

On Thu, Nov 13, 2008 at 10:31 PM, David Cournapeau <david at ar.media.kyoto-u.ac.jp> wrote:
> Charles R Harris wrote:
>> I'm pretty sure about the reference leak. But what should be the standard for checking arguments and error returns in these object loops?
>
> I was wondering the same when I worked on that code a few weeks ago; since the ufuncs "return" void, I wonder how feasible it would be to return an int instead for error messaging. Since the ufuncs are not used directly outside numpy, it should not break the API/ABI in any way?

I had thoughts along the same line. I think we could easily make all the returns ints, because the return value would simply be discarded. The question is what to do with it if we have it.

> I think more generally it would be nice to have a common error system for the pure C code, because it is a bit of a mess right now. But that would again be a lot of work :)

Yep. And tracking/checking all those errors back up the call chain is a pain. The best argument for C++ I can think of is the exception handling...

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From david at ar.media.kyoto-u.ac.jp Fri Nov 14 00:50:24 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Fri, 14 Nov 2008 14:50:24 +0900
Subject: [Numpy-discussion] Questions on error handling/refcounting in some ufunc object loops.
In-Reply-To: <3d375d730811132158m341f883yc19e1d413e065470@mail.gmail.com>
References: <491D0D3B.6020609@ar.media.kyoto-u.ac.jp> <3d375d730811132158m341f883yc19e1d413e065470@mail.gmail.com>
Message-ID: <491D11A0.1020907@ar.media.kyoto-u.ac.jp>

Robert Kern wrote:
> There are ufunc loop implementations outside of numpy. It would break the API, specifically the typedef PyUFuncGenericFunction.

Would something like a ufunc-specific errno be acceptable in that case?

David

From charlesr.harris at gmail.com Fri Nov 14 01:06:16 2008
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Thu, 13 Nov 2008 23:06:16 -0700
Subject: [Numpy-discussion] Questions on error handling/refcounting in some ufunc object loops.
In-Reply-To: <3d375d730811132158m341f883yc19e1d413e065470@mail.gmail.com>
References: <491D0D3B.6020609@ar.media.kyoto-u.ac.jp> <3d375d730811132158m341f883yc19e1d413e065470@mail.gmail.com>
Message-ID:

On Thu, Nov 13, 2008 at 10:58 PM, Robert Kern wrote:
> On Thu, Nov 13, 2008 at 23:31, David Cournapeau wrote:
>> I was wondering the same when I worked on that code a few weeks ago; since the ufuncs "return" void, I wonder how feasible it would be to return an int instead for error messaging. Since the ufuncs are not used directly outside numpy, it should not break the API/ABI in any way?
>
> There are ufunc loop implementations outside of numpy. It would break the API, specifically the typedef PyUFuncGenericFunction.

Good point!

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From charlesr.harris at gmail.com Fri Nov 14 01:08:49 2008
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Thu, 13 Nov 2008 23:08:49 -0700
Subject: [Numpy-discussion] Questions on error handling/refcounting in some ufunc object loops.
In-Reply-To: <491D11A0.1020907@ar.media.kyoto-u.ac.jp>
References: <491D0D3B.6020609@ar.media.kyoto-u.ac.jp> <3d375d730811132158m341f883yc19e1d413e065470@mail.gmail.com> <491D11A0.1020907@ar.media.kyoto-u.ac.jp>
Message-ID:

On Thu, Nov 13, 2008 at 10:50 PM, David Cournapeau <david at ar.media.kyoto-u.ac.jp> wrote:
> Robert Kern wrote:
>> There are ufunc loop implementations outside of numpy. It would break the API, specifically the typedef PyUFuncGenericFunction.
>
> Would something like a ufunc-specific errno be acceptable in that case?

That might get tricky with threading in the mix.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From robert.kern at gmail.com Fri Nov 14 01:11:49 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 14 Nov 2008 00:11:49 -0600
Subject: [Numpy-discussion] Questions on error handling/refcounting in some ufunc object loops.
In-Reply-To: <491D11A0.1020907@ar.media.kyoto-u.ac.jp>
References: <491D0D3B.6020609@ar.media.kyoto-u.ac.jp> <3d375d730811132158m341f883yc19e1d413e065470@mail.gmail.com> <491D11A0.1020907@ar.media.kyoto-u.ac.jp>
Message-ID: <3d375d730811132211t3e556441y15b75c71028d1f0d@mail.gmail.com>

On Thu, Nov 13, 2008 at 23:50, David Cournapeau wrote:
> Robert Kern wrote:
>> There are ufunc loop implementations outside of numpy. It would break the API, specifically the typedef PyUFuncGenericFunction.
>
> Would something like a ufunc-specific errno be acceptable in that case?

No, please.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From david at ar.media.kyoto-u.ac.jp Fri Nov 14 01:01:29 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Fri, 14 Nov 2008 15:01:29 +0900
Subject: [Numpy-discussion] Questions on error handling/refcounting in some ufunc object loops.
In-Reply-To:
References: <491D0D3B.6020609@ar.media.kyoto-u.ac.jp> <3d375d730811132158m341f883yc19e1d413e065470@mail.gmail.com> <491D11A0.1020907@ar.media.kyoto-u.ac.jp>
Message-ID: <491D1439.2010906@ar.media.kyoto-u.ac.jp>

Charles R Harris wrote:
> On Thu, Nov 13, 2008 at 10:50 PM, David Cournapeau <david at ar.media.kyoto-u.ac.jp> wrote:
>> Robert Kern wrote:
>>> There are ufunc loop implementations outside of numpy. It would break the API, specifically the typedef PyUFuncGenericFunction.
>>
>> Would something like a ufunc-specific errno be acceptable in that case?
>
> That might get tricky with threading in the mix.

The trick is at configuration, to know how to "tag" a variable for TLS. Windows can do it, posix has it (errno is thread-specific in recent posix). I don't know of any other solution, since returning an error code is not possible, but maybe there is?

David

From charlesr.harris at gmail.com Fri Nov 14 01:35:15 2008
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Thu, 13 Nov 2008 23:35:15 -0700
Subject: [Numpy-discussion] Questions on error handling/refcounting in some ufunc object loops.
In-Reply-To: <491D1439.2010906@ar.media.kyoto-u.ac.jp>
References: <491D0D3B.6020609@ar.media.kyoto-u.ac.jp> <3d375d730811132158m341f883yc19e1d413e065470@mail.gmail.com> <491D11A0.1020907@ar.media.kyoto-u.ac.jp> <491D1439.2010906@ar.media.kyoto-u.ac.jp>
Message-ID:

On Thu, Nov 13, 2008 at 11:01 PM, David Cournapeau <david at ar.media.kyoto-u.ac.jp> wrote:
> Charles R Harris wrote:
>> That might get tricky with threading in the mix.
>
> The trick is at configuration, to know how to "tag" a variable for TLS. Windows can do it, posix has it (errno is thread-specific in recent posix). I don't know of any other solution, since returning an error code is not possible, but maybe there is?
What happens if threading is turned on and a loop calls a python function that sets an error? I don't recall if the loops are executed with the global lock held.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From charlesr.harris at gmail.com Fri Nov 14 01:39:47 2008
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Thu, 13 Nov 2008 23:39:47 -0700
Subject: [Numpy-discussion] Questions on error handling/refcounting in some ufunc object loops.
In-Reply-To: <491D1439.2010906@ar.media.kyoto-u.ac.jp>
References: <491D0D3B.6020609@ar.media.kyoto-u.ac.jp> <3d375d730811132158m341f883yc19e1d413e065470@mail.gmail.com> <491D11A0.1020907@ar.media.kyoto-u.ac.jp> <491D1439.2010906@ar.media.kyoto-u.ac.jp>
Message-ID:

On Thu, Nov 13, 2008 at 11:01 PM, David Cournapeau <david at ar.media.kyoto-u.ac.jp> wrote:
> The trick is at configuration, to know how to "tag" a variable for TLS. Windows can do it, posix has it (errno is thread-specific in recent posix). I don't know of any other solution, since returning an error code is not possible, but maybe there is?

BTW, David, would you mind if I moved the rest of the function definitions into math_c99? I would like to separate the loops from the functions. It might also be nice to have a template somewhere to combine all the code pieces. Having includes scattered here and there through the code is a bit obscure.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From robert.kern at gmail.com Fri Nov 14 01:50:47 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 14 Nov 2008 00:50:47 -0600
Subject: [Numpy-discussion] Questions on error handling/refcounting in some ufunc object loops.
In-Reply-To:
References: <491D0D3B.6020609@ar.media.kyoto-u.ac.jp> <3d375d730811132158m341f883yc19e1d413e065470@mail.gmail.com> <491D11A0.1020907@ar.media.kyoto-u.ac.jp> <491D1439.2010906@ar.media.kyoto-u.ac.jp>
Message-ID: <3d375d730811132250x381c9f07t21f925afd8dbaab2@mail.gmail.com>

On Fri, Nov 14, 2008 at 00:39, Charles R Harris wrote:
> BTW, David, would you mind if I moved the rest of the function definitions into math_c99? I would like to separate the loops from the functions. It might also be nice to have a template somewhere to combine all the code pieces. Having includes scattered here and there through the code is a bit obscure.

Ah, the T-word. :-)

Have you taken a look at Tempita?

http://pythonpaste.org/tempita/

It's a small, single-file implementation, and its syntax is Djangoish but it doesn't have some of Django's limitations that make Django templates awkward for code generation. In particular, you can use pretty much arbitrary Python in the template and don't have to register a new tag just to call an arbitrary function. If you want to use Tempita to redo our code generation, I withdraw my old objections.
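For a flavor, a repeated loop stanza might look something like this -- just a sketch of Tempita's {{for}}/{{endfor}} syntax, not a proposal for the actual templates:

from tempita import Template

# expand one stub per type; the type list here is only illustrative
tmpl = Template("""
{{for T in types}}
static void
{{T}}_square(char **args, intp *dimensions, intp *steps, void *func)
{
    /* ... {{T}} loop body ... */
}
{{endfor}}
""")
print tmpl.substitute(types=['FLOAT', 'DOUBLE', 'CFLOAT'])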
--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From charlesr.harris at gmail.com Fri Nov 14 02:12:13 2008
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Fri, 14 Nov 2008 00:12:13 -0700
Subject: [Numpy-discussion] Questions on error handling/refcounting in some ufunc object loops.
In-Reply-To: <3d375d730811132250x381c9f07t21f925afd8dbaab2@mail.gmail.com>
References: <491D0D3B.6020609@ar.media.kyoto-u.ac.jp> <3d375d730811132158m341f883yc19e1d413e065470@mail.gmail.com> <491D11A0.1020907@ar.media.kyoto-u.ac.jp> <491D1439.2010906@ar.media.kyoto-u.ac.jp> <3d375d730811132250x381c9f07t21f925afd8dbaab2@mail.gmail.com>
Message-ID:

On Thu, Nov 13, 2008 at 11:50 PM, Robert Kern wrote:
> On Fri, Nov 14, 2008 at 00:39, Charles R Harris wrote:
>> BTW, David, would you mind if I moved the rest of the function definitions into math_c99? I would like to separate the loops from the functions. It might also be nice to have a template somewhere to combine all the code pieces. Having includes scattered here and there through the code is a bit obscure.
>
> Ah, the T-word. :-)
>
> Have you taken a look at Tempita?
>
> http://pythonpaste.org/tempita/
>
> It's a small, single-file implementation, and its syntax is Djangoish but it doesn't have some of Django's limitations that make Django templates awkward for code generation. In particular, you can use pretty much arbitrary Python in the template and don't have to register a new tag just to call an arbitrary function. If you want to use Tempita to redo our code generation, I withdraw my old objections.

I'm actually pretty happy with the current system now that it has nested loops and continuation lines. It's small and does no more than it needs to. Capitalization is the only additional feature that catches my attention nowadays, but it doesn't move me enough to do anything about it.

Anyway, I was thinking more along the lines of the text api lists we have. Something like:

[_umathmodule.c]
math_c99.c.src : generate
umathmodule.c.src : generate
ufuncobject.c
_umath_generated.c
_ufunc_api.c

Something like that, anyway. It would be easier than grepping through the source or putting a list in some obscure setup.py file. Yes, scons and setup can be made to do these things, I just think it would be nice to separate the data from the code. Although I suppose such a file is actually an example of domain specific code.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From david at ar.media.kyoto-u.ac.jp Fri Nov 14 02:14:57 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Fri, 14 Nov 2008 16:14:57 +0900
Subject: [Numpy-discussion] toward python 2.6: mtrand, gettimeofday and mingw
Message-ID: <491D2571.3060101@ar.media.kyoto-u.ac.jp>

Hi,

Since I think it would be nice for numpy 1.3 to support python 2.6, I took a further look at it on windows. AFAICS, there is only one problem remaining, when building with mingw. The error is exactly as in the following bug report:

http://bugs.python.org/issue3308

I am not sure I understand exactly the problem in numpy's case: the failure appears in mtrand, because of the function gettimeofday. I can't find gettimeofday on MSDN (google only returns links to a win32 replacement for gettimeofday), so I don't understand why it works fine when compiling with VS 2008.
Since the problem is specific to python 2.6, which depends on the MS C runtime 9.0 (the one with Visual Studio 2008), I *guess* that mingw has an implementation of gettimeofday internally which depends on those time functions, and since its MS runtime 9.0 wrapper (libmsvcr90.a) wrongly defines some time-related functions which are not exported by the MS runtime, it fails.

Anyway, one solution would be to implement our own gettimeofday from MS specific functions (such code is available on the internet, and is only a few lines), or to use MS specific functions directly. My understanding is that since this is used for the random seed, the exact time returned by the function does not matter much, but it would mean that the seed would be different from what it used to be, and some people complained recently about similar issues. Of course, the right fix is for mingw to fix its def files for msvc runtimes, but given that the last mingw release was a long time ago, I don't think we would be able to build numpy 1.3 with it without a workaround.

David

From charlesr.harris at gmail.com Fri Nov 14 02:39:30 2008
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Fri, 14 Nov 2008 00:39:30 -0700
Subject: [Numpy-discussion] toward python 2.6: mtrand, gettimeofday and mingw
In-Reply-To: <491D2571.3060101@ar.media.kyoto-u.ac.jp>
References: <491D2571.3060101@ar.media.kyoto-u.ac.jp>
Message-ID:

On Fri, Nov 14, 2008 at 12:14 AM, David Cournapeau <david at ar.media.kyoto-u.ac.jp> wrote:
> Hi,
>
> Since I think it would be nice for numpy 1.3 to support python 2.6, I took a further look at it on windows. AFAICS, there is only one problem remaining, when building with mingw. The error is exactly as in the following bug report:
>
> http://bugs.python.org/issue3308
>
> I am not sure I understand exactly the problem in numpy's case: the failure appears in mtrand, because of the function gettimeofday. I can't find gettimeofday on MSDN (google only returns links to a win32 replacement for gettimeofday), so I don't understand why it works fine when compiling with VS 2008.
>
> Since the problem is specific to python 2.6, which depends on the MS C runtime 9.0 (the one with Visual Studio 2008), I *guess* that mingw has an implementation of gettimeofday internally which depends on those time functions, and since its MS runtime 9.0 wrapper (libmsvcr90.a) wrongly defines some time-related functions which are not exported by the MS runtime, it fails.
>
> Anyway, one solution would be to implement our own gettimeofday from MS specific functions (such code is available on the internet, and is only a few lines), or to use MS specific functions directly. My understanding is that since this is used for the random seed, the exact time returned by the function does not matter much, but it would mean that the seed would be different from what it used to be, and some people complained recently about similar issues. Of course, the right fix is for mingw to fix its def files for msvc runtimes, but given that the last mingw release was a long time ago, I don't think we would be able to build numpy 1.3 with it without a workaround.

How is 1.3dev compiling with MSVC these days? It's working on the buildbot now.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From robert.kern at gmail.com Fri Nov 14 02:40:20 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 14 Nov 2008 01:40:20 -0600
Subject: [Numpy-discussion] toward python 2.6: mtrand, gettimeofday and mingw
In-Reply-To: <491D2571.3060101@ar.media.kyoto-u.ac.jp>
References: <491D2571.3060101@ar.media.kyoto-u.ac.jp>
Message-ID: <3d375d730811132340v1e208ab2q9b545fea6af33831@mail.gmail.com>

On Fri, Nov 14, 2008 at 01:14, David Cournapeau wrote:
> Hi,
>
> Since I think it would be nice for numpy 1.3 to support python 2.6, I took a further look at it on windows. AFAICS, there is only one problem remaining, when building with mingw. The error is exactly as in the following bug report:
>
> http://bugs.python.org/issue3308
>
> I am not sure I understand exactly the problem in numpy's case: the failure appears in mtrand, because of the function gettimeofday. I can't find gettimeofday on MSDN (google only returns links to a win32 replacement for gettimeofday), so I don't understand why it works fine when compiling with VS 2008.
>
> Since the problem is specific to python 2.6, which depends on the MS C runtime 9.0 (the one with Visual Studio 2008), I *guess* that mingw has an implementation of gettimeofday internally which depends on those time functions, and since its MS runtime 9.0 wrapper (libmsvcr90.a) wrongly defines some time-related functions which are not exported by the MS runtime, it fails.
>
> Anyway, one solution would be to implement our own gettimeofday from MS specific functions (such code is available on the internet, and is only a few lines), or to use MS specific functions directly. My understanding is that since this is used for the random seed, the exact time returned by the function does not matter much, but it would mean that the seed would be different from what it used to be, and some people complained recently about similar issues. Of course, the right fix is for mingw to fix its def files for msvc runtimes, but given that the last mingw release was a long time ago, I don't think we would be able to build numpy 1.3 with it without a workaround.

Feel free to change it to whatever works. The point of that seeding mode is that it *is* different each time you call it.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From david at ar.media.kyoto-u.ac.jp Fri Nov 14 02:26:59 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Fri, 14 Nov 2008 16:26:59 +0900
Subject: [Numpy-discussion] toward python 2.6: mtrand, gettimeofday and mingw
In-Reply-To:
References: <491D2571.3060101@ar.media.kyoto-u.ac.jp>
Message-ID: <491D2843.9050007@ar.media.kyoto-u.ac.jp>

Charles R Harris wrote:
> How is 1.3dev compiling with MSVC these days? It's working on the buildbot now.

It works, the MSVC problems are on 64 bits, which are of a different nature, I think. Someone else can of course look at it, but I won't have the time to do it myself.
David

From nwagner at iam.uni-stuttgart.de Fri Nov 14 02:42:08 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Fri, 14 Nov 2008 08:42:08 +0100
Subject: [Numpy-discussion] numpy-docs and sphinx
In-Reply-To:
References:
Message-ID:

On Thu, 13 Nov 2008 22:25:05 +0000 (UTC) Pauli Virtanen wrote:
> Thu, 13 Nov 2008 20:31:05 +0100, Nils Wagner wrote:
> [clip]
>> make html in numpy-docs works for me now, but make html in scipy-docs failed
>
> Yep, there was bad Latex in the docstrings of cluster.hierarchy, which made sphinx.ext.pngmath crash. I've fixed the docstrings now.
>
> Pauli

Hi Pauli,

Is this source outdated

svn co http://svn.python.org/projects/doctools/trunk sphinx-trunk ?

I mean can I still use it or should I switch to

hg clone http://bitbucket.org/birkenfeld/sphinx

Is it possible to obtain sphinx without hg (mercurial)?

Here is a new problem concerning make html in numpy-docs.

http://docs.python.org/dev/library/functions.html
Intersphinx hit: exceptions.StopIteration
http://docs.python.org/dev/library/exceptions.html
Math extension error: latex exited with error:
[stderr]

[stdout]
This is TeX, Version 3.14159 (Web2C 7.4.5) (./math.tex
LaTeX2e <2001/06/01>
Babel and hyphenation patterns for american, french, german, ngerman, n
ohyphenation, loaded.
(/usr/share/texmf/tex/latex/base/article.cls
Document Class: article 2001/04/21 v1.4e Standard LaTeX document class
(/usr/share/texmf/tex/latex/base/size12.clo))
(/usr/share/texmf/tex/latex/base/inputenc.sty

! LaTeX Error: File `utf8.def' not found.

Type X to quit or <RETURN> to proceed, or enter new name. (Default extension: def)

Enter file name:
! Emergency stop.

l.118 \endinput
               ^^M
No pages of output.
Transcript written on math.log.
make: *** [html] Error 1

Any pointer would be appreciated.

Nils

From charlesr.harris at gmail.com Fri Nov 14 02:55:51 2008
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Fri, 14 Nov 2008 00:55:51 -0700
Subject: [Numpy-discussion] toward python 2.6: mtrand, gettimeofday and mingw
In-Reply-To: <491D2843.9050007@ar.media.kyoto-u.ac.jp>
References: <491D2571.3060101@ar.media.kyoto-u.ac.jp> <491D2843.9050007@ar.media.kyoto-u.ac.jp>
Message-ID:

On Fri, Nov 14, 2008 at 12:26 AM, David Cournapeau <david at ar.media.kyoto-u.ac.jp> wrote:
> Charles R Harris wrote:
>> How is 1.3dev compiling with MSVC these days? It's working on the buildbot now.
>
> It works, the MSVC problems are on 64 bits, which are of a different nature, I think. Someone else can of course look at it, but I won't have the time to do it myself.

The buildbot is 64 bits : Windows_XP_x86_64_MSVC.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
>> > > The buildbot is 64 bits : Windows_XP_x86_64_MSVC. > It builds on Debian SPARC lenny also. The freeBSD buildbot has a configuration problem because Daniel tried to help you out by building ATLAS, but I think 1.3 is now building on all the standard platforms. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Fri Nov 14 04:21:17 2008 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 14 Nov 2008 09:21:17 +0000 (UTC) Subject: [Numpy-discussion] numpy-docs and sphinx References: Message-ID: Fri, 14 Nov 2008 08:42:08 +0100, Nils Wagner wrote: [clip] > Is this source outdated > svn co http://svn.python.org/projects/doctools/trunk sphinx-trunk ? I believe it's outdated. > I mean can I still use it or should I switch to > > hg clone http://bitbucket.org/birkenfeld/sphinx > > Is it possible to obtain sphinx without hg (mercurial) ? There's a download link on the above page on the right that gives you a zip or a tarball of the sources. But Sphinx 0.5 should come out "soon", AFAIK. > Here is a new problem concering make html in numpy-docs. > > http://docs.python.org/dev/library/functions.html Intersphinx hit: > exceptions.StopIteration > http://docs.python.org/dev/library/exceptions.html Math extension error: > latex exited with error: > [stderr] > > [stdout] > This is TeX, Version 3.14159 (Web2C 7.4.5) (./math.tex > LaTeX2e <2001/06/01> > Babel and hyphenation patterns for american, french, german, > ngerman, n > ohyphenation, loaded. > (/usr/share/texmf/tex/latex/base/article.cls Document Class: article > 2001/04/21 v1.4e Standard LaTeX document class > (/usr/share/texmf/tex/latex/base/size12.clo)) > (/usr/share/texmf/tex/latex/base/inputenc.sty > > ! LaTeX Error: File `utf8.def' not found. > > Type X to quit or to proceed, or enter new name. (Default > extension: def) > > Enter file name: > ! Emergency stop. > > > l.118 \endinput > ^^M > No pages of output. > Transcript written on math.log. Strange, 'utf8.def' is typically a part of a standard Latex installation, at least (I think) if it is not a very old one. Are you able to upgrade your Latex, or get the file eg. from CTAN? If not, you can just edit sphinx/ext/pngmath.py, and adjust the \usepackage[utf8]{inputenc} line in it. Pauli From david at ar.media.kyoto-u.ac.jp Fri Nov 14 04:11:53 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 14 Nov 2008 18:11:53 +0900 Subject: [Numpy-discussion] toward python 2.6: mtrand, gettimeofday and mingw In-Reply-To: References: <491D2571.3060101@ar.media.kyoto-u.ac.jp> <491D2843.9050007@ar.media.kyoto-u.ac.jp> Message-ID: <491D40D9.1070109@ar.media.kyoto-u.ac.jp> Charles R Harris wrote: > > > It builds on Debian SPARC lenny also. The freeBSD buildbot has a > configuration problem because Daniel tried to help you out by > building ATLAS, but I think 1.3 is now building on all the standard > platforms. FreeBSD ATLAS port looked buggy to me when I tried, because it depended on some math functions like sqrtl which are not in the FreeBSD library (they are commented out in the math.h, and the symbols are not in the library). If you build without ATLAS, it works OK, so for me, this has nothing to do with us anymore. I can point out where to look to someone who uses FreeBSD, though. 
cheers,

David

From soren.skou.nielsen at gmail.com Fri Nov 14 04:30:00 2008
From: soren.skou.nielsen at gmail.com (Søren Nielsen)
Date: Fri, 14 Nov 2008 10:30:00 +0100
Subject: [Numpy-discussion] ANN: I wrote some Numpy + SWIG + MinGW simple examples
In-Reply-To: <491CB901.3060604@gmail.com>
References: <491CB901.3060604@gmail.com>
Message-ID:

Hi Egor,

Thanks for a very nice tutorial! Have you tried doing manipulations with 2D arrays?? or do you know how to tackle it?

Regards,
Soren

On Fri, Nov 14, 2008 at 12:32 AM, Egor Zindy wrote:
> Hello list!
>
> To get my head round the numpy.i interface for SWIG, I wrote some simple examples and documented them as much as possible. The result is here:
>
> http://code.google.com/p/ezwidgets/wiki/NumpySWIGMinGW
>
> I finally got round testing ARGOUTVIEW_ARRAY1 today, so it's time to ask for some feedback.
>
> Questions I still have:
> * Any way of doing array_out = function(array_in) without using ARGOUTVIEW_ARRAY1?
> * Any clean way of generating an exception on failed memory allocations?
>
> Hope this helps someone else... and thank you Bill Spotz for your original article and comments!
>
> Regards,
> Egor
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From charlesr.harris at gmail.com Fri Nov 14 04:39:47 2008
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Fri, 14 Nov 2008 02:39:47 -0700
Subject: [Numpy-discussion] toward python 2.6: mtrand, gettimeofday and mingw
In-Reply-To: <491D40D9.1070109@ar.media.kyoto-u.ac.jp>
References: <491D2571.3060101@ar.media.kyoto-u.ac.jp> <491D2843.9050007@ar.media.kyoto-u.ac.jp> <491D40D9.1070109@ar.media.kyoto-u.ac.jp>
Message-ID:

On Fri, Nov 14, 2008 at 2:11 AM, David Cournapeau <david at ar.media.kyoto-u.ac.jp> wrote:
> Charles R Harris wrote:
>> It builds on Debian SPARC lenny also. The FreeBSD buildbot has a configuration problem because Daniel tried to help you out by building ATLAS, but I think 1.3 is now building on all the standard platforms.
>
> The FreeBSD ATLAS port looked buggy to me when I tried, because it depended on some math functions like sqrtl which are not in the FreeBSD library (they are commented out in the math.h, and the symbols are not in the library). If you build without ATLAS, it works OK, so for me, this has nothing to do with us anymore.

That's what I figured. The problem with the windows buildbot was a conflict with the builtin intrinsic function tanhf, which wasn't in the library. The solution was to do

static float npy_tanhf(
...
#define tanhf npy_tanhf

and so avoid the name clash.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ezindy at gmail.com Fri Nov 14 04:44:15 2008
From: ezindy at gmail.com (Egor Zindy)
Date: Fri, 14 Nov 2008 09:44:15 +0000
Subject: [Numpy-discussion] ANN: I wrote some Numpy + SWIG + MinGW simple examples
In-Reply-To:
References: <491CB901.3060604@gmail.com>
Message-ID: <491D486F.6090000@gmail.com>

Hi Soren,

I noticed the same thing about my document, nothing about 2-D arrays yet! I will try and write some examples using the same document structure (argout, inplace and argoutview) and then condense the document (for instance, no need for a separate setup.py for each example).
Regards,
Egor

Søren Nielsen wrote:
> Hi Egor,
>
> Thanks for a very nice tutorial! Have you tried doing manipulations with 2D arrays?? or do you know how to tackle it?
>
> Regards,
> Soren
>
> On Fri, Nov 14, 2008 at 12:32 AM, Egor Zindy wrote:
>> Hello list!
>>
>> To get my head round the numpy.i interface for SWIG, I wrote some simple examples and documented them as much as possible. The result is here:
>>
>> http://code.google.com/p/ezwidgets/wiki/NumpySWIGMinGW
>>
>> I finally got round testing ARGOUTVIEW_ARRAY1 today, so it's time to ask for some feedback.
>>
>> Questions I still have:
>> * Any way of doing array_out = function(array_in) without using ARGOUTVIEW_ARRAY1?
>> * Any clean way of generating an exception on failed memory allocations?
>>
>> Hope this helps someone else... and thank you Bill Spotz for your original article and comments!
>>
>> Regards,
>> Egor
>>
>> _______________________________________________
>> Numpy-discussion mailing list
>> Numpy-discussion at scipy.org
>> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>
> ------------------------------------------------------------------------
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion

From nwagner at iam.uni-stuttgart.de Fri Nov 14 08:13:31 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Fri, 14 Nov 2008 14:13:31 +0100
Subject: [Numpy-discussion] numpy-docs and sphinx
In-Reply-To:
References:
Message-ID:

On Fri, 14 Nov 2008 09:21:17 +0000 (UTC) Pauli Virtanen wrote:
> Fri, 14 Nov 2008 08:42:08 +0100, Nils Wagner wrote:
> [clip]
>> Is this source outdated
>> svn co http://svn.python.org/projects/doctools/trunk sphinx-trunk ?
>
> I believe it's outdated.
>
>> I mean can I still use it or should I switch to
>>
>> hg clone http://bitbucket.org/birkenfeld/sphinx
>>
>> Is it possible to obtain sphinx without hg (mercurial)?
>
> There's a download link on the above page on the right that gives you a zip or a tarball of the sources.
>
> But Sphinx 0.5 should come out "soon", AFAIK.
>
>> Here is a new problem concerning make html in numpy-docs.
>>
>> http://docs.python.org/dev/library/functions.html
>> Intersphinx hit: exceptions.StopIteration
>> http://docs.python.org/dev/library/exceptions.html
>> Math extension error: latex exited with error:
>> [stderr]
>>
>> [stdout]
>> This is TeX, Version 3.14159 (Web2C 7.4.5) (./math.tex
>> LaTeX2e <2001/06/01>
>> Babel and hyphenation patterns for american, french, german, ngerman, n
>> ohyphenation, loaded.
>> (/usr/share/texmf/tex/latex/base/article.cls
>> Document Class: article 2001/04/21 v1.4e Standard LaTeX document class
>> (/usr/share/texmf/tex/latex/base/size12.clo))
>> (/usr/share/texmf/tex/latex/base/inputenc.sty
>>
>> ! LaTeX Error: File `utf8.def' not found.
>>
>> Type X to quit or <RETURN> to proceed, or enter new name. (Default extension: def)
>>
>> Enter file name:
>> ! Emergency stop.
>>
>> l.118 \endinput
>>                ^^M
>> No pages of output.
>> Transcript written on math.log.
>
> Strange, 'utf8.def' is typically a part of a standard Latex installation, at least (I think) if it is not a very old one. Are you able to upgrade your Latex, or get the file e.g. from CTAN?
>
> If not, you can just edit sphinx/ext/pngmath.py, and adjust the \usepackage[utf8]{inputenc} line in it.
>
> Pauli
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion

I got a copy of utf8.def

ftp://ftp.mpi-sb.mpg.de/pub/tex/mirror/ftp.dante.de/pub/tex/macros/latex/contrib/t2/etc/utf-8/utf-8.def

Where should I store that file?

Nils

From eads at soe.ucsc.edu Fri Nov 14 08:14:11 2008
From: eads at soe.ucsc.edu (Damian Eads)
Date: Fri, 14 Nov 2008 05:14:11 -0800
Subject: [Numpy-discussion] numpy-docs and sphinx
In-Reply-To:
References:
Message-ID: <91b4b1ab0811140514h187d72c2m9604d3460b98f9d8@mail.gmail.com>

On Fedora, I recall having to install a separate package to get utf support. If you are using Fedora, try

yum list | grep utf | grep tex

On 11/14/08, Pauli Virtanen wrote:
> Fri, 14 Nov 2008 08:42:08 +0100, Nils Wagner wrote:
> [clip]
>> Is this source outdated
>> svn co http://svn.python.org/projects/doctools/trunk sphinx-trunk ?
>
> I believe it's outdated.
>
>> I mean can I still use it or should I switch to
>>
>> hg clone http://bitbucket.org/birkenfeld/sphinx
>>
>> Is it possible to obtain sphinx without hg (mercurial)?
>
> There's a download link on the above page on the right that gives you a zip or a tarball of the sources.
>
> But Sphinx 0.5 should come out "soon", AFAIK.
>
>> Here is a new problem concerning make html in numpy-docs.
>>
>> http://docs.python.org/dev/library/functions.html
>> Intersphinx hit: exceptions.StopIteration
>> http://docs.python.org/dev/library/exceptions.html
>> Math extension error: latex exited with error:
>> [stderr]
>>
>> [stdout]
>> This is TeX, Version 3.14159 (Web2C 7.4.5) (./math.tex
>> LaTeX2e <2001/06/01>
>> Babel and hyphenation patterns for american, french, german, ngerman, n
>> ohyphenation, loaded.
>> (/usr/share/texmf/tex/latex/base/article.cls
>> Document Class: article 2001/04/21 v1.4e Standard LaTeX document class
>> (/usr/share/texmf/tex/latex/base/size12.clo))
>> (/usr/share/texmf/tex/latex/base/inputenc.sty
>>
>> ! LaTeX Error: File `utf8.def' not found.
>>
>> Type X to quit or <RETURN> to proceed, or enter new name. (Default extension: def)
>>
>> Enter file name:
>> ! Emergency stop.
>>
>> l.118 \endinput
>>                ^^M
>> No pages of output.
>> Transcript written on math.log.
>
> Strange, 'utf8.def' is typically a part of a standard Latex installation, at least (I think) if it is not a very old one. Are you able to upgrade your Latex, or get the file e.g. from CTAN?
>
> If not, you can just edit sphinx/ext/pngmath.py, and adjust the \usepackage[utf8]{inputenc} line in it.
>
> Pauli
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion

--
Sent from my mobile device

-----------------------------------------------------
Damian Eads                             Ph.D. Student
Jack Baskin School of Engineering, UCSC        E2-489
1156 High Street                 Machine Learning Lab
Santa Cruz, CA 95064    http://www.soe.ucsc.edu/~eads

From pav at iki.fi Fri Nov 14 08:59:20 2008
From: pav at iki.fi (Pauli Virtanen)
Date: Fri, 14 Nov 2008 13:59:20 +0000 (UTC)
Subject: [Numpy-discussion] numpy-docs and sphinx
References:
Message-ID:

Fri, 14 Nov 2008 14:13:31 +0100, Nils Wagner wrote:
[clip]
> I got a copy of utf8.def
> ftp://ftp.mpi-sb.mpg.de/pub/tex/mirror/ftp.dante.de/pub/tex/macros/latex/contrib/t2/etc/utf-8/utf-8.def
>
> Where should I store that file?

See Damian's mail first -- it may be that you just have to install some package.
But Latex looks for files in directories pointed to by the environment variable TEXINPUTS. So you could do:

mkdir $HOME/texmf
export TEXINPUTS=".:$HOME/texmf//:"
[save utf8.def into ~/texmf]
mktexlsr $HOME/texmf

If inputenc doesn't find the file after that, I don't know what you should do.

Pauli

From cournape at gmail.com Fri Nov 14 12:13:05 2008
From: cournape at gmail.com (David Cournapeau)
Date: Sat, 15 Nov 2008 02:13:05 +0900
Subject: [Numpy-discussion] toward python 2.6: mtrand, gettimeofday and mingw
In-Reply-To:
References: <491D2571.3060101@ar.media.kyoto-u.ac.jp> <491D2843.9050007@ar.media.kyoto-u.ac.jp> <491D40D9.1070109@ar.media.kyoto-u.ac.jp>
Message-ID: <5b8d13220811140913y222ec5f4g9f1ce84aab536461@mail.gmail.com>

On Fri, Nov 14, 2008 at 6:39 PM, Charles R Harris wrote:
> static float npy_tanhf(
> ...
> #define tanhf npy_tanhf

I don't like this solution so much. The right solution IMHO is to correctly detect the intrinsic so we don't define a function already available.

In the mean time, I of course realized that the windows situation is more complicated than expected: we need to deal with the manifest crap for the configuration stage of numpy, because the msvcrt 9.0 is not available on the average windows computer...

David

From charlesr.harris at gmail.com Fri Nov 14 12:42:30 2008
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Fri, 14 Nov 2008 10:42:30 -0700
Subject: [Numpy-discussion] toward python 2.6: mtrand, gettimeofday and mingw
In-Reply-To: <5b8d13220811140913y222ec5f4g9f1ce84aab536461@mail.gmail.com>
References: <491D2571.3060101@ar.media.kyoto-u.ac.jp> <491D2843.9050007@ar.media.kyoto-u.ac.jp> <491D40D9.1070109@ar.media.kyoto-u.ac.jp> <5b8d13220811140913y222ec5f4g9f1ce84aab536461@mail.gmail.com>
Message-ID:

On Fri, Nov 14, 2008 at 10:13 AM, David Cournapeau wrote:
> On Fri, Nov 14, 2008 at 6:39 PM, Charles R Harris wrote:
>> static float npy_tanhf(
>> ...
>> #define tanhf npy_tanhf
>
> I don't like this solution so much. The right solution IMHO is to correctly detect the intrinsic so we don't define a function already available.

We need a function with a pointer that can be called by the ufunc. I was guessing that with /Ox optimization windows might be using a hardware instruction where it could and MS forgot to put a corresponding version in the library. In any case, the current code is safe, if ugly, and if you can get the detection set up right it won't get in the way.

> In the mean time, I of course realized that the windows situation is more complicated than expected: we need to deal with the manifest crap for the configuration stage of numpy, because the msvcrt 9.0 is not available on the average windows computer...

Windows Nada, the new entry level version....

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From charlesr.harris at gmail.com Fri Nov 14 12:58:54 2008
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Fri, 14 Nov 2008 10:58:54 -0700
Subject: [Numpy-discussion] toward python 2.6: mtrand, gettimeofday and mingw
In-Reply-To:
References: <491D2571.3060101@ar.media.kyoto-u.ac.jp> <491D2843.9050007@ar.media.kyoto-u.ac.jp> <491D40D9.1070109@ar.media.kyoto-u.ac.jp> <5b8d13220811140913y222ec5f4g9f1ce84aab536461@mail.gmail.com>
Message-ID:

On Fri, Nov 14, 2008 at 10:42 AM, Charles R Harris <charlesr.harris at gmail.com> wrote:
> On Fri, Nov 14, 2008 at 10:13 AM, David Cournapeau wrote:
>> I don't like this solution so much. The right solution IMHO is to correctly detect the intrinsic so we don't define a function already available.
>
> We need a function with a pointer that can be called by the ufunc. I was guessing that with /Ox optimization windows might be using a hardware instruction where it could and MS forgot to put a corresponding version in the library. In any case, the current code is safe, if ugly, and if you can get the detection set up right it won't get in the way.

So maybe the thing to check is the presence of the pointer, something like

float (*foo)() = barf;

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From charlesr.harris at gmail.com Fri Nov 14 17:53:33 2008
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Fri, 14 Nov 2008 15:53:33 -0700
Subject: [Numpy-discussion] How to force build of included LAPACK?
Message-ID:

All,

I recall seeing this before, but I can't find it in a search. The question is how to force numpy to ignore any present blas/lapack and use its own.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From robert.kern at gmail.com Fri Nov 14 19:31:02 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 14 Nov 2008 18:31:02 -0600
Subject: [Numpy-discussion] How to force build of included LAPACK?
In-Reply-To:
References:
Message-ID: <3d375d730811141631t720e1d04qd3d5a5011392b969@mail.gmail.com>

On Fri, Nov 14, 2008 at 16:53, Charles R Harris wrote:
> All,
>
> I recall seeing this before, but I can't find it in a search. The question
> is how to force numpy to ignore any present blas/lapack and use its own.

Hmm, tricky. I don't think there is a good way. On OS X, lapack_opt pretty eagerly tries to accept Accelerate.framework. You can fake it out by using ATLAS=1 (but you must not have ATLAS or anything else). On other platforms, you can probably use ATLAS=None (though Lord help you if you have MKL, too).

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From charlesr.harris at gmail.com Fri Nov 14 20:13:40 2008
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Fri, 14 Nov 2008 18:13:40 -0700
Subject: [Numpy-discussion] How to force build of included LAPACK?
In-Reply-To: <3d375d730811141631t720e1d04qd3d5a5011392b969@mail.gmail.com>
References: <3d375d730811141631t720e1d04qd3d5a5011392b969@mail.gmail.com>
Message-ID:

On Fri, Nov 14, 2008 at 5:31 PM, Robert Kern wrote:
> On Fri, Nov 14, 2008 at 16:53, Charles R Harris wrote:
>> All,
>>
>> I recall seeing this before, but I can't find it in a search. The question is how to force numpy to ignore any present blas/lapack and use its own.
>
> Hmm, tricky. I don't think there is a good way. On OS X, lapack_opt pretty eagerly tries to accept Accelerate.framework. You can fake it out by using ATLAS=1 (but you must not have ATLAS or anything else). On other platforms, you can probably use ATLAS=None (though Lord help you if you have MKL, too).

Might be worth putting a flag in site.cfg just in case someone gets stuck with a broken library that they can't remove.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From slaunger at gmail.com Fri Nov 14 20:22:52 2008
From: slaunger at gmail.com (Kim Hansen)
Date: Sat, 15 Nov 2008 02:22:52 +0100
Subject: [Numpy-discussion] How to efficiently reduce a 1-dim 100-10000 element array with user defined binary function
Message-ID:

Dear numpy-discussion,

I am quite new to Python and numpy.

I am working on a Python application using Scipy, where I need to unpack and pack quite large amounts of data (GBs) into data structures and convert them into other data structures. Until now the program has been running amazingly efficiently (processing 10-125 MB/s depending on the task) because I have managed to use only built-in array functions for all computationally intensive operations (and because I have fairly good hardware to run it on).

However, I have run into problems now, as I need to compute new checksums for modified data structures using a ones complement add on one-dimensional, uint16 arrays with 50-10000 elements.

The best procedure I have come up with yet is the following implementation

def complement_add_uint16(a, b):
    """
    Returns the complement ones addition of two unsigned 16 bit integers

    The values are added and if the carry is set, the value is incremented.

    It is assumed that a and b are both in the range [0; 0xFFFF].
    """
    c = a + b
    c += c >> 16
    return c & 0xFFFF

def complement_add_checksum(ints):
    """
    Return a complement checksum based on a
    one-dimensional array of dtype=uint16

    """
    return reduce(complement_add_uint16, ints, 0)

This works, but it is very slow as compared to the other operations I have implemented in numpy. I have profiled a typical use case of my application with profile.run(...) The profiler output shows that 88% of the time in the application is spent on the reduce(complement_add_uint16, ints, 0) statement and 95% of that time is spent solely on calls to the binary complement_add_uint16 function.

I have elaborate unit tests for both functions, so I am not afraid of experimenting, only my experience and numpy knowledge is lacking...

Is there a smart way to optimize this procedure, while staying in numpy? I am working on a Windows box, and I do not have a C, C++ or Fortran compiler installed, nor am I particularly knowledgeable in programming in these languages (I come from a Java world). It is not that I am lazy, it is just that the thing has to run on a Linux box as well, and I would like to avoid too much tinkering with platform-specific compilers.

I have actually already posted a thread about this on comp.lang.python at a time when I still had not felt the full impact of the poor performance:

http://groups.google.dk/group/comp.lang.python/browse_thread/thread/2f0e7ee3ad76d5a3/e2eae3719c6e3fe8#e2eae3719c6e3fe8

where I got some advanced advice (one of which was to visit this forum). It is a little bit over my head though (for the moment), and I was wondering if there is a simpler way?
Best wishes,
Slaunger

From charlesr.harris at gmail.com Fri Nov 14 22:00:49 2008
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Fri, 14 Nov 2008 20:00:49 -0700
Subject: [Numpy-discussion] How to efficiently reduce a 1-dim 100-10000 element array with user defined binary function
In-Reply-To:
References:
Message-ID:

On Fri, Nov 14, 2008 at 6:22 PM, Kim Hansen wrote:
> Dear numpy-discussion,
>
> I am quite new to Python and numpy.
>
> I am working on a Python application using Scipy, where I need to unpack and pack quite large amounts of data (GBs) into data structures and convert them into other data structures. Until now the program has been running amazingly efficiently (processing 10-125 MB/s depending on the task) because I have managed to use only built-in array functions for all computationally intensive operations (and because I have fairly good hardware to run it on).
>
> However, I have run into problems now, as I need to compute new checksums for modified data structures using a ones complement add on one-dimensional, uint16 arrays with 50-10000 elements.
>
> The best procedure I have come up with yet is the following implementation
>
> def complement_add_uint16(a, b):
>     """
>     Returns the complement ones addition of two unsigned 16 bit integers
>
>     The values are added and if the carry is set, the value is incremented.
>
>     It is assumed that a and b are both in the range [0; 0xFFFF].
>     """
>     c = a + b
>     c += c >> 16
>     return c & 0xFFFF
>
> def complement_add_checksum(ints):
>     """
>     Return a complement checksum based on a
>     one-dimensional array of dtype=uint16
>
>     """
>     return reduce(complement_add_uint16, ints, 0)

Can't you add them up in a higher precision, then do a = (a & 0xffff) + (a >> 16) until the high order bits are gone? The high order bits will count the number of ones you need to add. Something like

def complement_add_uint16(x):
    y = sum(x, dtype=uint64)
    while y >= 2**16:
        y = (y & uint64(0xffff)) + (y >> uint64(16))
    return y

You might need to block the data to avoid overflow of uint64, but you get an extra 48 bits in any case. Mind, I'm not sure this is exactly the same thing as what you are trying to do, so it bears checking.
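A quick way to convince yourself on random data -- a sketch, where complement_add_checksum is your reduce-based version from above, and the folded version is renamed so it doesn't clobber the binary complement_add_uint16 that your checksum relies on:

import numpy as np

def folded_checksum(x):
    # same folding idea as above, under a different name
    y = np.sum(x, dtype=np.uint64)
    while y >= 2**16:
        y = (y & np.uint64(0xffff)) + (y >> np.uint64(16))
    return int(y)

ints = np.random.randint(0, 0x10000, 1000).astype(np.uint16)
assert folded_checksum(ints) == complement_add_checksum(ints)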
From dwf at cs.toronto.edu Fri Nov 14 23:40:33 2008
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Fri, 14 Nov 2008 23:40:33 -0500
Subject: [Numpy-discussion] slices vs. range() over a certain axis
Message-ID: <91E8A661-14A0-4513-AA9E-A8C6E49C17C6@cs.toronto.edu>

I'm trying to clarify my understanding of how slicing works and how it differs from specifying a sequence of indices. My question is best illustrated by an example:

In [278]: x = zeros((5,50))

In [279]: y = random_integers(5,size=50)-1

The behaviour that I want is produced by:

In [280]: x[y,range(50)] = 1

Why doesn't

In [281]: x[y,0:50] = 1

produce the same effect? Is there a way to do what I am attempting in [280] with slicing?

Thanks,

David

From robert.kern at gmail.com Sat Nov 15 04:32:14 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 15 Nov 2008 03:32:14 -0600
Subject: [Numpy-discussion] slices vs. range() over a certain axis
In-Reply-To: <91E8A661-14A0-4513-AA9E-A8C6E49C17C6@cs.toronto.edu>
References: <91E8A661-14A0-4513-AA9E-A8C6E49C17C6@cs.toronto.edu>
Message-ID: <3d375d730811150132n31c48127u96cd5e4a0a3f18ba@mail.gmail.com>

On Fri, Nov 14, 2008 at 22:40, David Warde-Farley wrote:
> I'm trying to clarify my understanding of how slicing works and how it
> differs from specifying a sequence of indices. My question is best
> illustrated by an example:
>
> In [278]: x = zeros((5,50))
>
> In [279]: y = random_integers(5,size=50)-1
>
> The behaviour that I want is produced by:
>
> In [280]: x[y,range(50)] = 1
>
> Why doesn't
>
> In [281]: x[y,0:50] = 1
>
> produce the same effect? Is there a way to do what I am attempting in
> [280] with slicing?

The reasoning is a bit clearer with __getitem__ rather than __setitem__. When the subscript is only a set of slices, the resulting array is a view. Slices specify a subarray with homogeneous strides like any other array. Fancy indices don't. The result must, in general, be a copy, because we will be pulling items scattered across memory in no necessary order.

Fancy indexing needs to have separate semantics. Specifically, broadcast the index arrays, then create a new array of the broadcasted shape with elements found by iterating over the broadcasted index arrays.

Now the question is, what do we do when we combine the two into one subscript? Instead of reinterpreting the slices as lists and shoving them into the fancy indexing semantics, we leave them as slices. The procedure for fancy indexing changes slightly. We do the same broadcasting *just* for the actual fancy indices. However, the "item" that gets placed into each position in the output array is no longer a scalar, but rather the result of the remaining slices. This gives us more capabilities than interpreting a slice as the equivalent range(), since you can always just use the range().

Clear as mud?

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
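A small session makes the distinction concrete (an illustrative sketch; randint is used here instead of random_integers):

import numpy as np

x = np.zeros((5, 50))
y = np.random.randint(0, 5, size=50)

# Two fancy indices are broadcast together: one element per column.
x[y, np.arange(50)] = 1
print x[y, np.arange(50)].shape    # (50,)

# A fancy index combined with a slice: the slice is applied for each
# entry of y, so whole rows are selected (and possibly repeated).
print x[y, 0:50].shape             # (50, 50)
x[y, 0:50] = 1                     # fills every row listed in y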
From slaunger at gmail.com Sat Nov 15 17:29:21 2008
From: slaunger at gmail.com (Slaunger)
Date: Sat, 15 Nov 2008 22:29:21 +0000 (UTC)
Subject: [Numpy-discussion] How to efficiently reduce a 1-dim 100-10000 element array with user defined binary function
References:
Message-ID:

Charles R Harris gmail.com> writes:

> Can't you add them up in a higher precision, then do a = (a & 0xffff) + (a >> 16)
> until the high order bits are gone? The high order bits will count the
> number of ones you need to add. Something like
>
> def complement_add_uint16(x) :
>     y = sum(x, dtype=uint64)
>     while y >= 2**16 :
>         y = (y & uint64(0xffff)) + (y >> uint64(16))
>     return y
>
> You might need to block the data to avoid overflow of uint64, but you get an
> extra 48 bits in any case. Mind, I'm not sure this is exactly the same thing
> as what you are trying to do, so it bears checking.

Charles, you are great! That does seem to work alright! It passes my quite elaborate unit tests of the checksum function, where I use several real world examples, and the performance boost is enormous!

Whereas my previous method never got me any higher processing speeds than 413 kB/s, I am now at, hold on, 47400 kB/s checksum processing speed, which is a 100x performance boost. Once again I see that numpy rules! No need to do any C-inlining! Fantastic! You saved me a lot of trouble there. I had sort of thought of the same idea, but somehow convinced myself that it would not work.

-- Slaunger

From ezindy at gmail.com Sat Nov 15 22:11:06 2008
From: ezindy at gmail.com (Egor Zindy)
Date: Sun, 16 Nov 2008 03:11:06 +0000
Subject: [Numpy-discussion] ANN: numpy.i - added managed deallocation to ARGOUTVIEW_ARRAY1 (ARGOUTVIEWM_ARRAY1)
Message-ID: <491F8F4A.30009@gmail.com>

Dear List,

after I tried to write a simple ARGOUTVIEW_ARRAY1 example (see http://code.google.com/p/ezwidgets/wiki/NumpySWIGMinGW#A_simple_ARGOUTVIEW_ARRAY1_example ), I started wondering about memory deallocation. Turns out a few clever people already did all the thinking (see http://blog.enthought.com/?p=62 ) and a few more clever people use this in a swig file (see http://niftilib.sourceforge.net/pynifti, file nifticlib.i).

All this concentrated knowledge helped me implement a single ARGOUTVIEWM_ARRAY1 fragment in numpy.i to do some testing (see attached numpy.i).

How to use it? In yourfile.i, the %init function has a few lines added to it:

%init %{
import_array();

/* initialize the new Python type for memory deallocation */
_MyDeallocType.tp_new = PyType_GenericNew;
if (PyType_Ready(&_MyDeallocType) < 0)
    return;
%}

... and that's it! Then just use ARGOUTVIEWM_ARRAY1 instead of ARGOUTVIEW_ARRAY1 and python does the deallocation for you when the python array is destroyed (see the examples attached).

Everything compiles, but it would be nice to get rid of the warnings...

ezalloc_wrap.c:2763: warning: initialization from incompatible pointer type
ezalloc_wrap.c: In function `_wrap_alloc_managed':
ezalloc_wrap.c:2844: warning: assignment from incompatible pointer type
writing build\temp.win32-2.5\Release\_ezalloc.def

Compilation:
python setup_alloc.py build

Testing:
The attached test_alloc.py does 2048 allocations of 1MB each for managed and unmanaged arrays. Output on my XP laptop with 1GB RAM as follows:

ARGOUTVIEWM_ARRAY1 (managed arrays) - 2048 allocations (1048576 bytes each)
Done!
ARGOUTVIEW_ARRAY1 (unmanaged, leaking) - 2048 allocations (1048576 bytes each)
Step 482 failed

TODO:
Find a better name for the methods (not sure I like ARGOUTVIEWM_ARRAY), then do the missing fragments (2D arrays), clear the warnings and verify the method for a possible inclusion in the official numpy.i file.

Thank you for reading!

Regards,
Egor

-------------- next part --------------
A non-text attachment was scrubbed...
Name: ezalloc.zip
Type: application/x-zip-compressed
Size: 9133 bytes
Desc: not available

From millman at berkeley.edu Sun Nov 16 04:17:41 2008
From: millman at berkeley.edu (Jarrod Millman)
Date: Sun, 16 Nov 2008 01:17:41 -0800
Subject: [Numpy-discussion] added nep for generalized ufuncs
Message-ID:

http://projects.scipy.org/scipy/numpy/browser/trunk/doc/neps/generalized-ufuncs.rst

Please feel free to add to this or improve it as you see fit.

Thanks,

-- Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/

From Jason.Woolard at noaa.gov Sun Nov 16 17:15:51 2008
From: Jason.Woolard at noaa.gov (Jason Woolard)
Date: Sun, 16 Nov 2008 17:15:51 -0500
Subject: [Numpy-discussion] Indexing in Numpy vs. IDL?
Message-ID: <49209B97.5090609@noaa.gov>

hi all,

I'm fairly new to Numpy and I've been trying to port over some IDL code to become more familiar. I've been moderately successful with numpy.where and numpy.compress to do some of the things that were pretty easy to do in IDL. I'm a bit confused about how the indexing of arrays works though.

This is pretty straightforward:

in IDL
=============================
data = [50.00, 100.00, 150.00, 200.00, 250.00, 300.00, 350.00]
index = WHERE((data GT 100.00) AND (data LT 300.00))
new_data = data[index]
print, new_data

150.000 200.000 250.000

in Python
==============================
>>> import numpy
>>> from numpy import *
>>> data = [50.00, 100.00, 150.00, 200.00, 250.00, 300.00, 350.00]
>>> data = array(data, dtype=float32)  # Convert list to array
>>> index_mask = numpy.where((data > 100.00) & (data < 300.00), 1, 0)  # Test for the condition.
>>> index_one = numpy.compress(index_mask, data)
>>> print index_one
[ 150.  200.  250.]

But I'm having a bit of trouble with the Python equivalent of this:

in IDL:
=============================
index_two = WHERE((data[index_one] GT bottom) AND (data[index_one] LE top))

and also this:

result = MAX(data[index_one[index_two]])

From what I've read it looks like numpy.take() might work to do the indexing. I've tried to test this but I'm not getting the answers I'd expect. Am I overlooking something obvious here?

Thanks in advance for any responses.

From robert.kern at gmail.com Sun Nov 16 18:30:53 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Sun, 16 Nov 2008 17:30:53 -0600
Subject: [Numpy-discussion] Indexing in Numpy vs. IDL?
In-Reply-To: <49209B97.5090609@noaa.gov>
References: <49209B97.5090609@noaa.gov>
Message-ID: <3d375d730811161530p1ea9e6d2w791e22ff0c3829be@mail.gmail.com>

On Sun, Nov 16, 2008 at 16:15, Jason Woolard wrote:
> hi all,
>
> I'm fairly new to Numpy and I've been trying to port over some IDL code
> to become more familiar. I've been moderately successful with
> numpy.where and numpy.compress to do some of the things that were pretty
> easy to do in IDL. I'm a bit confused about how the indexing of arrays
> works though.
You may want to look at this section of the reference manual:

http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html

> This is pretty straightforward:
>
> in IDL
> =============================
> data = [50.00, 100.00, 150.00, 200.00, 250.00, 300.00, 350.00]
> index = WHERE((data GT 100.00) AND (data LT 300.00))
> new_data = data[index]
> print, new_data
>
> 150.000 200.000 250.000
>
> in Python
> ==============================
> >>> import numpy
> >>> from numpy import *
> >>> data = [50.00, 100.00, 150.00, 200.00, 250.00, 300.00, 350.00]
> >>> data = array(data, dtype=float32)  # Convert list to array
> >>> index_mask = numpy.where((data > 100.00) & (data < 300.00), 1, 0)

index_mask = (data > 100.) & (data < 300.0)

> # Test for the condition.
> >>> index_one = numpy.compress(index_mask, data)

index_one = data[index_mask]

> >>> print index_one
> [ 150.  200.  250.]

Note that this is not an index; this is equivalent to new_data in your IDL example. Are you sure you wanted to call this index_one?

> But I'm having a bit of trouble with the Python equivalent of this:
>
> in IDL:
> =============================
> index_two = WHERE((data[index_one] GT bottom) AND (data[index_one] LE top))

Well, since the values in index_one are not indices, data[index_one] doesn't mean anything. Can you show us the equivalent IDL that generates index_one?

> and also this:
>
> result = MAX(data[index_one[index_two]])

Ditto.

> From what I've read it looks like numpy.take() might work to do the
> indexing. I've tried to test this but I'm not getting the answers I'd
> expect. Am I overlooking something obvious here?

So-called "advanced indexing", where the indices are boolean or integer arrays, will probably solve your problem, but we need more information on what you mean by index_one.

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From robert.kern at gmail.com Sun Nov 16 18:33:02 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Sun, 16 Nov 2008 17:33:02 -0600
Subject: [Numpy-discussion] Indexing in Numpy vs. IDL?
In-Reply-To: <3d375d730811161530p1ea9e6d2w791e22ff0c3829be@mail.gmail.com>
References: <49209B97.5090609@noaa.gov> <3d375d730811161530p1ea9e6d2w791e22ff0c3829be@mail.gmail.com>
Message-ID: <3d375d730811161533k787533e2ib24857477520ed6f@mail.gmail.com>

On Sun, Nov 16, 2008 at 17:30, Robert Kern wrote:
> So-called "advanced indexing" where the indices are boolean or integer
> arrays will probably solve your problem, but we need more information
> on what you mean by index_one.

Sorry, I left out a sentence or two here. I also meant to say that take() and compress() are legacy functions mostly subsumed by advanced indexing. Also, instead of doing things like where(some_boolean_array, 1, 0), just use some_boolean_array.

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From ezindy at gmail.com Sun Nov 16 19:11:40 2008
From: ezindy at gmail.com (Egor Zindy)
Date: Mon, 17 Nov 2008 00:11:40 +0000
Subject: [Numpy-discussion] ANN: numpy.i - added managed deallocation to ARGOUTVIEW_ARRAY1 (ARGOUTVIEWM_ARRAY1)
In-Reply-To: <491F8F4A.30009@gmail.com>
References: <491F8F4A.30009@gmail.com>
Message-ID: <4920B6BC.9000206@gmail.com>

Sorry to reply to myself.
I've managed to get rid of the two warnings I was talking about, added proper attribution to the method (thank you Travis Oliphant!) and am now raising an exception in the swig wrapper of my example code when running out of memory (although using arg1, arg2, arg3 in %exception feels a bit hack-ish to me).

Oh, and another detail: since I'm allocating memory for int arrays, the memory allocated in bytes is of course 4x what I initially wrote it was (in test_alloc.py, on my 32 bit machine)!

Anyway, I made a new zip with my example. As before, there is only one fragment available, ARGOUTVIEWM_ARRAY1; the other fragments still need to be written (I guess when the ARGOUTVIEWM_ARRAY1 fragment is validated).

That's all I can think of. As usual, comments welcome!

Regards,
Egor

-------------- next part --------------
A non-text attachment was scrubbed...
Name: ezalloc.zip
Type: application/x-zip-compressed
Size: 9411 bytes
Desc: not available

From charlesr.harris at gmail.com Sun Nov 16 23:01:23 2008
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 16 Nov 2008 21:01:23 -0700
Subject: [Numpy-discussion] Breaking up the umathmodule file.
Message-ID:

Hi All,

I thought I should run this by folks before doing the deed. I propose to break up the umathmodule file.
Currently the umath module is put together from the following files:

umath_funcs_c99.inc.src
ufuncobject.c
umathmodule.c.src

I propose:

umath_funcs_c99.inc.src
umath_funcs.inc.src
umath_loops.inc.src
umath_object.inc
umathmodule.c

This will break up the source of the umath objects into functional parts and allow umathmodule.c to function as a template with all the includes that put the sources together in one spot. This should make it easier for people working on this code in the future to understand how it all goes together and where things are.

Alternatives to this proposal might be merging umath_funcs_c99 with umath_funcs, and/or, instead of using names like xxx.inc.src, use xxx.c.src or xxx.c.inc.src.

Thoughts?

Chuck

From catherine.m.moroney at jpl.nasa.gov Mon Nov 17 00:37:27 2008
From: catherine.m.moroney at jpl.nasa.gov (Catherine Moroney)
Date: Sun, 16 Nov 2008 21:37:27 -0800
Subject: [Numpy-discussion] difficulties with numpy.tofile()
Message-ID: <851092E9-6D2F-4504-8F53-471F0F0BC818@jpl.nasa.gov>

Hello,

I'm having problems writing a 2-dimensional numpy array out to a binary file using the "tofile" method. The call to "tofile" appears to succeed, but when I check the length of the binary file I find that it's longer than what is expected, given the calculation of nrows*ncolumns*nbytes.

The array that I'm writing out to the file is the result of running kmeans2 (from scipy), so I'm expecting that the datatype is numpy.int32. Is there an easy way to check what the datatype of a numpy array is?

Yet, the file that results from the call data.tofile('./data1.bin','int32') is longer than it should be.

Using "float32" as the datatype results in an even larger file, so it's still longer than expected. Does the "tofile" call insert some extra bytes at the beginning or end of the binary file?

What is the best way of dumping a large numpy array to a binary file that can then be read in from Fortran?

I'm running python 2.5, numpy 1.2.1 on a Linux box with Fedora Core 7.

Catherine

From robert.kern at gmail.com Mon Nov 17 00:47:05 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Sun, 16 Nov 2008 23:47:05 -0600
Subject: [Numpy-discussion] difficulties with numpy.tofile()
In-Reply-To: <851092E9-6D2F-4504-8F53-471F0F0BC818@jpl.nasa.gov>
References: <851092E9-6D2F-4504-8F53-471F0F0BC818@jpl.nasa.gov>
Message-ID: <3d375d730811162147p2e922748x7cdc40035602777d@mail.gmail.com>

On Sun, Nov 16, 2008 at 23:37, Catherine Moroney wrote:
> Hello,
>
> I'm having problems writing a 2-dimensional numpy array out to a binary
> file using the "tofile" method. The call to "tofile" appears to succeed,
> but when I check the length of the binary file I find that it's longer
> than what is expected, given the calculation of nrows*ncolumns*nbytes.
>
> The array that I'm writing out to the file is the result of running
> kmeans2 (from scipy), so I'm expecting that the datatype is numpy.int32.
> Is there an easy way to check what the datatype of a numpy array is?

some_array.dtype

> Yet, the file that results from the call:
> data.tofile('./data1.bin','int32') is longer than it should be.
>
> Using "float32" as the datatype results in an even larger file, so it's
> still longer than expected.

That argument is not the datatype. If you provide that argument, it specifies that you are outputting ASCII, not binary, and are using that value as the separator. Just use the filename and nothing else.
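For example (a minimal sketch; the shape and dtype are made up here to match the expected file size):

import numpy as np

data = np.zeros((100, 50), dtype=np.int32)
data.tofile('./data1.bin')   # raw binary: no separator, no header

# The file is then exactly nrows * ncolumns * itemsize bytes:
# 100 * 50 * 4 == 20000 bytes, the same as data.nbytes.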
If you need the data as a different type, coerce the array using the .astype() method.

http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.tofile.html

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From charlesr.harris at gmail.com Mon Nov 17 00:47:14 2008
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 16 Nov 2008 22:47:14 -0700
Subject: [Numpy-discussion] difficulties with numpy.tofile()
In-Reply-To: <851092E9-6D2F-4504-8F53-471F0F0BC818@jpl.nasa.gov>
References: <851092E9-6D2F-4504-8F53-471F0F0BC818@jpl.nasa.gov>
Message-ID:

On Sun, Nov 16, 2008 at 10:37 PM, Catherine Moroney <catherine.m.moroney at jpl.nasa.gov> wrote:
> Hello,
>
> I'm having problems writing a 2-dimensional numpy array out to a binary
> file using the "tofile" method. The call to "tofile" appears to succeed,
> but when I check the length of the binary file I find that it's longer
> than what is expected, given the calculation of nrows*ncolumns*nbytes.

You might find the following attributes useful in debugging this.

In [1]: x = ones((2,2))

In [2]: x.size
Out[2]: 4

In [3]: x.itemsize
Out[3]: 8

In [4]: x.nbytes
Out[4]: 32

In [5]: x.dtype
Out[5]: dtype('float64')

In [6]: x.tofile('foo.dat')

In [7]: ls -l foo.dat
-rw-rw-r-- 1 charris charris 32 2008-11-16 22:43 foo.dat

The file should have the same size as x.nbytes. What operating system are you using?

Chuck

From scott.sinclair.za at gmail.com Mon Nov 17 07:22:56 2008
From: scott.sinclair.za at gmail.com (Scott Sinclair)
Date: Mon, 17 Nov 2008 14:22:56 +0200
Subject: [Numpy-discussion] r6056 - branches/visualstudio_manifest/numpy/distutils/command - Minor Typo
Message-ID: <6a17e9ee0811170422r2920c013sc8557493c5f7be99@mail.gmail.com>

2008/11/17:
> Author: cdavid
> Date: 2008-11-17 06:03:57 -0600 (Mon, 17 Nov 2008)
> New Revision: 6056
>
> Modified:
>    branches/visualstudio_manifest/numpy/distutils/command/config.py
> Log:
> Detect whether config link needs embedding the manifest for the MSVC runtime.
>
> Modified: branches/visualstudio_manifest/numpy/distutils/command/config.py
> ===================================================================
> --- branches/visualstudio_manifest/numpy/distutils/command/config.py 2008-11-17 07:00:42 UTC (rev 6055)
> +++ branches/visualstudio_manifest/numpy/distutils/command/config.py 2008-11-17 12:03:57 UTC (rev 6056)
> @@ -110,6 +113,21 @@
>                 if fileexists: continue
>                 log.warn('could not find library %r in directories %s' \
>                          % (libname, library_dirs))
> +        elif self.compiler.compiler_type == 'mingw32':
> +            msver = get_build_msvc_version()
> +            if msver is not None:
> +                if msver >= 8:
> +                    # check msvcr major version are the same for linking and
> +                    # embedding
> +                    msvcv = msvc_runtime_library()
> +                    if msvcv:
> +                        maj = msvcv[5:6]
> +                        if not maj == int(msver):
> +                            raise ValueError,
> +                                  "Dyscrepancy between linked msvcr " \
> +                                  "(%f) and the one about to be embedded " \
> +                                  "(%f)" % (int(msver), maj)
> +
>         return self._wrap_method(old_config._link, lang,
>                                  (body, headers, include_dirs,
>                                   libraries, library_dirs, lang))

Hi,

I don't want to be picky, but my pedantic nature has gotten the better of me :)

The section of the message in the ValueError exception reading "Dyscrepancy between linked msvcr " should read "Discrepancy between linked msvcr ".
I think it's worth fixing if it's going to be seen by users.

Cheers,
Scott

From david at ar.media.kyoto-u.ac.jp Mon Nov 17 07:17:54 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Mon, 17 Nov 2008 21:17:54 +0900
Subject: [Numpy-discussion] r6056 - branches/visualstudio_manifest/numpy/distutils/command - Minor Typo
In-Reply-To: <6a17e9ee0811170422r2920c013sc8557493c5f7be99@mail.gmail.com>
References: <6a17e9ee0811170422r2920c013sc8557493c5f7be99@mail.gmail.com>
Message-ID: <492160F2.5010109@ar.media.kyoto-u.ac.jp>

Scott Sinclair wrote:
> The section of the message in the ValueError exception reading
> "Dyscrepancy between linked msvcr " should read "Discrepancy between
> linked msvcr ". I think it's worth fixing if it's going to be seen by
> users.

Yep, you're right. I can't even use the non-native speaker excuse, since this is a word coming from Latin.

Thanks,

David

From david at ar.media.kyoto-u.ac.jp Mon Nov 17 08:56:58 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Mon, 17 Nov 2008 22:56:58 +0900
Subject: [Numpy-discussion] numpy and python 2.6 on windows: please test
Message-ID: <4921782A.8010704@ar.media.kyoto-u.ac.jp>

Hi,

I think I finally solved the most serious issues for numpy on python 2.6 on windows. I would appreciate it if people would test building numpy (trunk); in particular, since some issues are moderately complex and system dependent, I am interested in the following configurations:
- python installed for a single user (that is, installed as non-admin), particularly on VISTA, built with mingw
- mingw builds on XP and VISTA with python installed globally (e.g. in C:\Python26).

Builds with VS 2008 are less interesting for me (i.e. they should just work). The actual test suite results are not that interesting either at this point (I see 3 failures and one error ATM on XP). Basically, if you can build and import numpy at all, it is good :)

thanks,

David

From paulprobert at sbcglobal.net Sun Nov 16 22:54:42 2008
From: paulprobert at sbcglobal.net (Paul Probert)
Date: Sun, 16 Nov 2008 21:54:42 -0600
Subject: [Numpy-discussion] Indexing in Numpy vs. IDL?
In-Reply-To: <49209B97.5090609@noaa.gov>
References: <49209B97.5090609@noaa.gov>
Message-ID:

Jason Woolard wrote:
> hi all,
>
> I'm fairly new to Numpy and I've been trying to port over some IDL code
> to become more familiar. I've been moderately successful with
> numpy.where and numpy.compress to do some of the things that were pretty
> easy to do in IDL. I'm a bit confused about how the indexing of arrays
> works though.
>
> This is pretty straightforward:
>
> in IDL
> =============================
> data = [50.00, 100.00, 150.00, 200.00, 250.00, 300.00, 350.00]
> index = WHERE((data GT 100.00) AND (data LT 300.00))
> new_data = data[index]
> print, new_data
>
> 150.000 200.000 250.000
>
> in Python
> ==============================
> >>> import numpy
> >>> from numpy import *
> >>> data = [50.00, 100.00, 150.00, 200.00, 250.00, 300.00, 350.00]
> >>> data = array(data, dtype=float32)  # Convert list to array
> >>> index_mask = numpy.where((data > 100.00) & (data < 300.00), 1, 0)  # Test for the condition.
> >>> index_one = numpy.compress(index_mask, data)
> >>> print index_one
> [ 150.  200.  250.]
> But I'm having a bit of trouble with the Python equivalent of this:
>
> in IDL:
> =============================
> index_two = WHERE((data[index_one] GT bottom) AND (data[index_one] LE top))
>
> and also this:
>
> result = MAX(data[index_one[index_two]])
>
> From what I've read it looks like numpy.take() might work to do the
> indexing. I've tried to test this but I'm not getting the answers I'd
> expect. Am I overlooking something obvious here?
>
> Thanks in advance for any responses.

Hi Jason,

I too am a former IDLer. There is a slight paradigm shift here. In IDL you can index an array with another array of integer indices, and you can do that too in numpy. But numpy also lets you index an array with an array of booleans. So

data > 100.

creates an array of booleans the same size and shape as data, so you can write your new array as

data[ (data > 100.) & (data < 300.) ]

Note we don't use a "where" function. In numpy, "where" is a completely different thing than in IDL. If you really wanted to generate a list of indices, you can use the "nonzero" method, but the numpy book says this isn't as fast as boolean indexing.

HTH,
Paul Probert
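Chaining the conditions from Jason's example then collapses into a single boolean mask (a sketch; the bottom/top values here are made up):

import numpy as np

data = np.array([50., 100., 150., 200., 250., 300., 350.], dtype=np.float32)
bottom, top = 120.0, 260.0    # illustrative thresholds

# One mask replaces the index_one/index_two chain from the IDL code:
mask = (data > 100.) & (data < 300.) & (data > bottom) & (data <= top)
result = data[mask].max()     # plays the role of MAX(data[index_one[index_two]])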
From soren.skou.nielsen at gmail.com Mon Nov 17 09:42:39 2008
From: soren.skou.nielsen at gmail.com (Søren Nielsen)
Date: Mon, 17 Nov 2008 15:42:39 +0100
Subject: [Numpy-discussion] Weave Ext_tools converters not working??
Message-ID:

Can anyone explain why this fails? This piece of code runs perfectly using weave.inline and type_converters = blitz.

Obviously it can't handle 2D arrays anymore. It's just a stupid example to illustrate that.

Thanks,
Soren

CODE:
------------------------------------------------------------------------------------------------

mod = ext_tools.ext_module('ravg_ext')

test = zeros((5,5))
xlen = 5
ylen = 5

code = """
       int x, y;

       for( x = 0; x < xlen; x++)
       {
         for( y = 0; y < ylen; y++)
         {
            test(x,y) = 2;
         }
       }
       """

ravg = ext_tools.ext_function('ravg', code, ['xlen', 'ylen', 'test'])
mod.add_function(ravg)
mod.compile(compiler = 'gcc')

RESULT:
------------------------------------------------------------------------------------------------
C:\Temp\ravg_ext.cpp: In function `PyObject* ravg(PyObject*, PyObject*, PyObject*)':
C:\Temp\ravg_ext.cpp:654: error: `test' cannot be used as a function
C:\Temp\ravg_ext.cpp:641: warning: unused variable 'Ntest'
C:\Temp\ravg_ext.cpp:642: warning: unused variable 'Stest'
C:\Temp\ravg_ext.cpp:643: warning: unused variable 'Dtest'

Traceback (most recent call last):
  File "C:\Temp\ravg_extension.py", line 132, in ?
    build_ravg_extension()
  File "C:\Temp\ravg_extension.py", line 125, in build_ravg_extension
    mod.compile(compiler = 'gcc')
  File "C:\Python24\Lib\site-packages\scipy\weave\ext_tools.py", line 365, in compile
    verbose = verbose, **kw)
  File "C:\Python24\Lib\site-packages\scipy\weave\build_tools.py", line 269, in build_extension
    setup(name = module_name, ext_modules = [ext], verbose=verb)
  File "C:\Python24\Lib\site-packages\numpy\distutils\core.py", line 184, in setup
    return old_setup(**new_attr)
  File "C:\Python24\Lib\distutils\core.py", line 166, in setup
    raise SystemExit, "error: " + str(msg)
CompileError: error: Command "g++ -mno-cygwin -O2 -Wall -IC:\Python24\lib\site-packages\scipy\weave -IC:\Python24\lib\site-packages\scipy\weave\scxx -IC:\Python24\lib\site-packages\numpy\core\include -IC:\Python24\include -IC:\Python24\PC -c C:\Temp\ravg_ext.cpp -o c:\docume~1\ssn\locals~1\temp\ssn\python24_intermediate\compiler_894ad5ed761bb51736c6d2b7872dc212\Releas

From nadavh at visionsense.com Mon Nov 17 13:09:47 2008
From: nadavh at visionsense.com (Nadav Horesh)
Date: Mon, 17 Nov 2008 20:09:47 +0200
Subject: [Numpy-discussion] Weave Ext_tools converters not working??
In-Reply-To:
References:
Message-ID: <1226945387.29114.1.camel@nadav.envision.co.il>

On Mon, 2008-11-17 at 15:42 +0100, Søren Nielsen wrote:

Why

test(x,y) = 2;

and not

test[x][y] = 2;

?

Nadav
From soren.skou.nielsen at gmail.com Mon Nov 17 11:31:55 2008
From: soren.skou.nielsen at gmail.com (Søren Nielsen)
Date: Mon, 17 Nov 2008 17:31:55 +0100
Subject: [Numpy-discussion] Weave Ext_tools converters not working??
In-Reply-To: <1226945387.29114.1.camel@nadav.envision.co.il>
References: <1226945387.29114.1.camel@nadav.envision.co.il>
Message-ID:

Because it's written as C code using the blitz type converter.

I found the answer to my own problem. I added type_converters = converters.blitz in the ext_function call.

On Mon, Nov 17, 2008 at 7:09 PM, Nadav Horesh wrote:
> Why
> test(x,y) = 2;
> and not
> test[x][y] = 2;
> ?
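Putting that fix into the original example gives something like the following sketch (not verified here, but the keyword is the one Søren describes):

from scipy.weave import ext_tools, converters
from numpy import zeros

mod = ext_tools.ext_module('ravg_ext')

test = zeros((5, 5))
xlen = 5
ylen = 5

code = """
       int x, y;

       for( x = 0; x < xlen; x++)
       {
         for( y = 0; y < ylen; y++)
         {
            test(x,y) = 2;
         }
       }
       """

# With the blitz converters, test(x,y) is valid inside the C++ code:
ravg = ext_tools.ext_function('ravg', code, ['xlen', 'ylen', 'test'],
                              type_converters=converters.blitz)
mod.add_function(ravg)
mod.compile(compiler='gcc')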
From oliphant at enthought.com Mon Nov 17 11:42:26 2008
From: oliphant at enthought.com (Travis E. Oliphant)
Date: Mon, 17 Nov 2008 10:42:26 -0600
Subject: [Numpy-discussion] Breaking up the umathmodule file.
In-Reply-To:
References:
Message-ID: <49219EF2.9060007@enthought.com>

Charles R Harris wrote:
> Hi All,
>
> I propose:
>
> umath_funcs_c99.inc.src
> umath_funcs.inc.src
> umath_loops.inc.src
> umath_object.inc
> umathmodule.c

This sounds fine to me.

-Travis

From ondrej at certik.cz Mon Nov 17 12:05:29 2008
From: ondrej at certik.cz (Ondrej Certik)
Date: Mon, 17 Nov 2008 18:05:29 +0100
Subject: [Numpy-discussion] python-numpy: memory leak in exponentiation
Message-ID: <85b5c3130811170905i1d3eeda5t8b1a3683ab18e360@mail.gmail.com>

Hi,

the details including a test script are at:

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=505999

Thanks,
Ondrej

From charlesr.harris at gmail.com Mon Nov 17 12:09:15 2008
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Mon, 17 Nov 2008 10:09:15 -0700
Subject: [Numpy-discussion] r6056 - branches/visualstudio_manifest/numpy/distutils/command - Minor Typo
In-Reply-To: <492160F2.5010109@ar.media.kyoto-u.ac.jp>
References: <6a17e9ee0811170422r2920c013sc8557493c5f7be99@mail.gmail.com> <492160F2.5010109@ar.media.kyoto-u.ac.jp>
Message-ID:

On Mon, Nov 17, 2008 at 5:17 AM, David Cournapeau <david at ar.media.kyoto-u.ac.jp> wrote:
> Scott Sinclair wrote:
> > The section of the message in the ValueError exception reading
> > "Dyscrepancy between linked msvcr " should read "Discrepancy between
> > linked msvcr ". I think it's worth fixing if it's going to be seen by
> > users.
>
> Yep, you're right. I can't even use the non-native speaker excuse, since
> this is a word coming from Latin.

But dyscrepancy has an interesting flavor to it, with overtones of dysfunctional.

Chuck
From jorgen.stenarson at bostream.nu Mon Nov 17 13:04:15 2008
From: jorgen.stenarson at bostream.nu (Jörgen Stenarson)
Date: Mon, 17 Nov 2008 19:04:15 +0100
Subject: [Numpy-discussion] numpy and python 2.6 on windows: please test
In-Reply-To: <4921782A.8010704@ar.media.kyoto-u.ac.jp>
References: <4921782A.8010704@ar.media.kyoto-u.ac.jp>
Message-ID: <4921B21F.6090703@bostream.nu>

Hi,

this is really good news, thanks for working on this. I'm looking forward to switching from 2.4 to 2.6 at work, but we really need numpy and matplotlib.

I have successfully built numpy on windows using mingw. Some basic tests at the interactive prompt suggest it works.

/Jörgen

David Cournapeau wrote:
> I think I finally solved the most serious issues for numpy on python 2.6
> on windows. I would appreciate it if people would test building numpy
> (trunk).

From timmichelsen at gmx-topmail.de Mon Nov 17 15:42:08 2008
From: timmichelsen at gmx-topmail.de (Timmie)
Date: Mon, 17 Nov 2008 20:42:08 +0000 (UTC)
Subject: [Numpy-discussion] difference between ma.masked and ma.masked_array?
Message-ID:

Hello,

what is the difference between ma.masked and ma.masked_array?

I am using this expression along with the scikit.timeseries:

series[(series.years>2000)&(series.years<2010)] = np.ma.masked

=> but now, my series does not get masked. Rather, the selected values (left part of the '=') get set to 0.0.

But if I use

arr_masked = \
ma.masked_array(series, series[(series.years>2000)&(series.years<2010)])

the mask is applied successfully.

This is very strange: for some examples it works to use the first method, for others not.

Any hint is appreciated.

Thanks in advance,
Timmie

From scott.sinclair.za at gmail.com Tue Nov 18 00:39:43 2008
From: scott.sinclair.za at gmail.com (Scott Sinclair)
Date: Tue, 18 Nov 2008 07:39:43 +0200
Subject: [Numpy-discussion] difference between ma.masked and ma.masked_array?
In-Reply-To:
References:
Message-ID: <6a17e9ee0811172139w4bf45167q6e925263b9ee2a3a@mail.gmail.com>

2008/11/17 Timmie:
> what is the difference between ma.masked and ma.masked_array?

I don't know the answer to that question, but both appear to be some kind of alias for the MaskedArray class and not part of the API.

> I am using this expression along with the scikit.timeseries:
>
> series[(series.years>2000)&(series.years<2010)] = np.ma.masked
>
> => but now, my series does not get masked.
>
> Rather, the selected values (left part of the '=') get set to 0.0.
>
> But if I use
>
> arr_masked = \
> ma.masked_array(series, series[(series.years>2000)&(series.years<2010)])
>
> the mask is applied successfully.
I think you should get the effect you want using masked_where:

>>> import numpy.ma as ma
>>> masked_series = ma.masked_where((series.years>2000) & (series.years<2010), series)

There's some in-progress documentation in the Numpy doc app at http://docs.scipy.org/numpy/docs/numpy.ma.core.masked_where/ that hasn't yet made its way to the reference manual http://docs.scipy.org/doc/numpy/reference/

Cheers,
Scott

From nicolas.roux at st.com Tue Nov 18 05:29:43 2008
From: nicolas.roux at st.com (Nicolas ROUX)
Date: Tue, 18 Nov 2008 11:29:43 +0100
Subject: [Numpy-discussion] Getting indices from numpy array with condition
In-Reply-To: <1226090856.5176.0.camel@localhost>
Message-ID: <000b01c94968$92766970$e7ad810a@gnb.st.com>

Hi,

Maybe this is not so clever, but I can't find it in the doc. I need to get all indices of all occurrences of a value in a numpy array.

As example:

a = numpy.array([[1,2,3],[4,5,6],[7,8,9]])

I need to get the indices of all array elements selected by a[a > 3].

Any fast/easy way to write this?

Thanks for your help ;-)

Cheers,
Nicolas.

From faltet at pytables.org Tue Nov 18 05:39:53 2008
From: faltet at pytables.org (Francesc Alted)
Date: Tue, 18 Nov 2008 11:39:53 +0100
Subject: [Numpy-discussion] Getting indices from numpy array with condition
In-Reply-To: <000b01c94968$92766970$e7ad810a@gnb.st.com>
References: <000b01c94968$92766970$e7ad810a@gnb.st.com>
Message-ID: <200811181139.53678.faltet@pytables.org>

On Tuesday 18 November 2008, Nicolas ROUX wrote:
> Hi,
>
> Maybe this is not so clever, but I can't find it in the doc.
> I need to get all indices of all occurrences of a value in a
> numpy array.
>
> As example:
>
> a = numpy.array([[1,2,3],[4,5,6],[7,8,9]])
> I need to get the indices of all array elements selected by a[a > 3].
>
> Any fast/easy way to write this?

Perhaps the nonzero() method is what you are looking for:

In [9]: (a > 3).nonzero()
Out[9]: (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2]))

Cheers,

-- Francesc Alted

From faltet at pytables.org Tue Nov 18 05:41:06 2008
From: faltet at pytables.org (Francesc Alted)
Date: Tue, 18 Nov 2008 11:41:06 +0100
Subject: [Numpy-discussion] ANN: PyTables 2.1rc2 released
Message-ID: <200811181141.06225.faltet@pytables.org>

===========================
Announcing PyTables 2.1rc2
===========================

PyTables is a library for managing hierarchical datasets, designed to efficiently cope with extremely large amounts of data, with support for full 64-bit file addressing. PyTables runs on top of the HDF5 library and the NumPy package for achieving maximum throughput and convenient use.

This is the second release candidate for 2.1, and I have decided to release it because many bugs have been fixed and some enhancements have been added since 2.1rc1. For details, see the ``RELEASE_NOTES.txt`` at:

http://www.pytables.org/moin/ReleaseNotes/Release_2.1rc2

PyTables 2.1 introduces important improvements, like much faster node opening, creation or navigation, a file-based way to fine-tune the different PyTables parameters (fully documented now in a new appendix of the UG) and support for multidimensional atoms in EArray/CArray objects.

Regarding the Pro edition, four different kinds of indexes are supported, so that the user can choose the best for her needs. Also, due to the introduction of the concept of chunkmaps in OPSI, the responsiveness of complex queries with low selectivity has improved quite a lot. And last but not least, it is possible now to sort tables by a specific field with no practical limit in size (tables up to 2**48 rows).
And last but not least, it is possible now to sort tables by a specific field with no practical limit in size (tables up to 2**48 rows). You can download a source package of the version 2.1rc2 with generated PDF and HTML docs and binaries for Windows from http://www.pytables.org/download/preliminary Finally, and for the first time, an evaluation version for PyTables Pro has been made available in: http://www.pytables.org/download/evaluation Please read the evaluation license for terms of use of this version: http://www.pytables.org/moin/PyTablesProEvaluationLicense For an on-line version of the manual, visit: http://www.pytables.org/docs/manual-2.1rc2 Resources ========= Go to the PyTables web site for more details: http://www.pytables.org About the HDF5 library: http://hdfgroup.org/HDF5/ About NumPy: http://numpy.scipy.org/ Acknowledgments =============== Thanks to many users who provided feature improvements, patches, bug reports, support and suggestions. See the ``THANKS`` file in the distribution package for a (incomplete) list of contributors. Many thanks also to SourceForge who have helped to make and distribute this package! And last, but not least thanks a lot to the HDF5 and NumPy (and numarray!) makers. Without them PyTables simply would not exist. Share your experience ===================== Let us know of any bugs, suggestions, gripes, kudos, etc. you may have. ---- **Enjoy data!** -- The PyTables Team From nicolas.roux at st.com Tue Nov 18 05:42:25 2008 From: nicolas.roux at st.com (Nicolas ROUX) Date: Tue, 18 Nov 2008 11:42:25 +0100 Subject: [Numpy-discussion] Missing numpy.i In-Reply-To: <9457e7c80810160620i2aeec4e3o4df1ae82906a1490@mail.gmail.com> Message-ID: <001901c9496a$58b83cc0$e7ad810a@gnb.st.com> Hi, About the missing doc directory in the windows install in latest numpy release, could you please add it ? (please see below the previous thread) Thanks, Cheers, Nicolas. -----Original Message----- From: numpy-discussion-bounces at scipy.org [mailto:numpy-discussion-bounces at scipy.org] On Behalf Of Stefan van der Walt Sent: Thursday, October 16, 2008 3:20 PM To: Discussion of Numerical Python Subject: Re: [Numpy-discussion] Missing numpy.i 2008/10/16 Nicolas ROUX : > Thanks for your reply ;-) > > In fact, I was talking about the 1.20 release installer, which is not > including the numpy.i any more. This may have been an oversight. The docs directory moved out of the source tree, so it needs to be added to the installer separately. David, could we install the docs dir as well? You should be able to use the numpy.i from the URL I provided. Cheers Stefan _______________________________________________ Numpy-discussion mailing list Numpy-discussion at scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion From dwf at cs.toronto.edu Tue Nov 18 05:48:06 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Tue, 18 Nov 2008 05:48:06 -0500 Subject: [Numpy-discussion] Getting indices from numpy array with condition In-Reply-To: <000b01c94968$92766970$e7ad810a@gnb.st.com> References: <000b01c94968$92766970$e7ad810a@gnb.st.com> Message-ID: <26989EA5-572A-4A3A-9374-51A216D54EDA@cs.toronto.edu> On 18-Nov-08, at 5:29 AM, Nicolas ROUX wrote: > Hi, > > Maybe this is not so clever, but I can't find it in the doc. 
> I need to get all indices of all occurrences of a value in a
> numpy array.
>
> As example:
>
> a = numpy.array([[1,2,3],[4,5,6],[7,8,9]])
> I need to get the indices of all array elements selected by a[a > 3].

numpy.where(a > 3)

David

From nicolas.roux at st.com Tue Nov 18 05:49:30 2008
From: nicolas.roux at st.com (Nicolas ROUX)
Date: Tue, 18 Nov 2008 11:49:30 +0100
Subject: [Numpy-discussion] Getting indices from numpy array with condition
In-Reply-To: <200811181139.53678.faltet@pytables.org>
Message-ID: <001a01c9496b$56223320$e7ad810a@gnb.st.com>

Thanks ;-)

It's pretty good, as I can apply it to any condition.

Cheers,
Nicolas.

From pgmdevlist at gmail.com Tue Nov 18 09:53:31 2008
From: pgmdevlist at gmail.com (Pierre GM)
Date: Tue, 18 Nov 2008 09:53:31 -0500
Subject: [Numpy-discussion] difference between ma.masked and ma.masked_array?
In-Reply-To: <6a17e9ee0811172139w4bf45167q6e925263b9ee2a3a@mail.gmail.com>
References: <6a17e9ee0811172139w4bf45167q6e925263b9ee2a3a@mail.gmail.com>
Message-ID:

All,

ma.masked_array is a constructor function for ma.MaskedArray, like np.array is a constructor for np.ndarray. Its intended use is:

a = ma.masked_array(yourdata, mask=yourmask, dtype=yourdtype)

to which you can add the keyword arguments presented in the doc.

ma.masked is a special constant used to check whether one particular value of your array is masked, or to mask one particular value. Note that setting a value of an array `x` to `ma.masked` will only work if `x` is a MaskedArray; otherwise the value will be set to zero (if the array is not a MaskedArray or a subclass of it, there's no mask to modify...).

On Nov 18, 2008, at 12:39 AM, Scott Sinclair wrote:
> 2008/11/17 Timmie:
>
>> I am using this expression along with the scikit.timeseries:
>>
>> series[(series.years>2000)&(series.years<2010)] = np.ma.masked
>>
>> => but now, my series does not get masked.

??? Please provide a self-contained example, so that I can check whether it's a bug or not. On my machine, the following works:

>>> import numpy as np, numpy.ma as ma, scikits.timeseries as ts
>>> series = ts.time_series(np.random.rand(10), start_date=ts.Date('A', year=2000))
>>> series
timeseries([ 0.79956673  0.26526638  0.38811214  0.2119525   0.55870333
  0.73263595  0.24395387  0.35595176  0.86357901  0.48562605],
   dates = [2000 ...
2009], freq = A-DEC)

Now, let's mask the values from 2005 (included) onwards:

>>> series[series.years>2004] = ma.masked
>>> series
timeseries([0.799566726537 0.265266376704 0.388112137692 0.211952497171
 0.558703334124 -- -- -- -- --],
   dates = [2000 ... 2009],
   freq = A-DEC)

> There's some in-progress documentation in the Numpy doc app at
> http://docs.scipy.org/numpy/docs/numpy.ma.core.masked_where/ that
> hasn't yet made its way to the reference manual
> http://docs.scipy.org/doc/numpy/reference/

I know I'm a tad lagging here documentation-wise... Any help welcome.

From f.yw at hotmail.com Tue Nov 18 13:33:07 2008
From: f.yw at hotmail.com (frank wang)
Date: Tue, 18 Nov 2008 11:33:07 -0700
Subject: [Numpy-discussion] fill() function does not work.
Message-ID:

Hi,

My numpy is 1.2.1 and python is 2.5.2.

In python, I did:

from numpy import *
x=array([1,2,3])
z=x.fill(x)
print z
None

z should be filled with zero. I do not know why I got None. Can anyone help me on this?

Thanks

Frank

From matthieu.brucher at gmail.com Tue Nov 18 13:36:23 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Tue, 18 Nov 2008 19:36:23 +0100
Subject: [Numpy-discussion] fill() function does not work.
Message-ID:

From the docstring:

a.fill(value) -> None. Fill the array with the scalar value.

The method modifies the array, but does not return it.

Matthieu

--
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher

From f.yw at hotmail.com Tue Nov 18 13:43:04 2008
From: f.yw at hotmail.com (frank wang)
Date: Tue, 18 Nov 2008 11:43:04 -0700
Subject: [Numpy-discussion] fill() function does not work.
Message-ID:

Thanks for the quick reply. It is my fault to overlook the manual.

Frank
From dfranci at seas.upenn.edu Tue Nov 18 13:44:38 2008
From: dfranci at seas.upenn.edu (Frank Lagor)
Date: Tue, 18 Nov 2008 13:44:38 -0500
Subject: [Numpy-discussion] Fwd: fill() function does not work.
Message-ID: <9fddf64a0811181044h5ae1ceaby7b51c342ad813e93@mail.gmail.com>

---------- Forwarded message ----------
From: Matthieu Brucher
Date: Tue, Nov 18, 2008 at 1:36 PM
Subject: Re: [Numpy-discussion] fill() function does not work.
To: Discussion of Numerical Python

From the docstring:

a.fill(value) -> None. Fill the array with the scalar value.

The method modifies the array, but does not return it.

Matthieu

For additional clarity: you write x.fill(x), but this is not what you want, because this would take the first entry in x and fill the remainder of x with it. Instead, you want x.fill(0). Alternately, the function zeros is useful.
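A short sketch of both options (illustrative):

from numpy import array, zeros

x = array([1, 2, 3])
x.fill(0)         # fills x in place with the scalar 0; returns None
print x           # [0 0 0]

z = zeros(3)      # or build a zero-filled array directly
print z           # [ 0.  0.  0.]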
-Frank From ezindy at gmail.com Tue Nov 18 14:02:46 2008 From: ezindy at gmail.com (Egor Zindy) Date: Tue, 18 Nov 2008 19:02:46 +0000 Subject: [Numpy-discussion] ANN: numpy.i - added managed deallocation to ARGOUTVIEW_ARRAY1 (ARGOUTVIEWM_ARRAY1) In-Reply-To: <491F8F4A.30009@gmail.com> References: <491F8F4A.30009@gmail.com> Message-ID: <49231156.1060209@gmail.com> Hello list, I just finished coding all the ARGOUTVIEWM fragments in numpy.i and wrote a wiki to explain what I did: http://code.google.com/p/ezwidgets/wiki/NumpyManagedMemory No new information in there, but everything (explanations, links, code) is provided on a single page. Regards, Egor Egor Zindy wrote: > Dear List, > > after I tried to write a simple ARGOUTVIEW_ARRAY1 example (see > http://code.google.com/p/ezwidgets/wiki/NumpySWIGMinGW#A_simple_ARGOUTVIEW_ARRAY1_example > ), I started wondering about memory deallocation. Turns out a few > clever people already did all the thinking (see > http://blog.enthought.com/?p=62 ) and a few more clever people use > this in a swig file (see http://niftilib.sourceforge.net/pynifti, file > nifticlib.i). > > All this concentrated knowledge helped me implement a single > ARGOUTVIEWM_ARRAY1 fragment in numpy.i to do some testing (see > attached numpy.i). > > How to use it? In yourfile.i, the %init function has a few lines added > to it: > > %init %{ > import_array(); > > /* initialize the new Python type for memory deallocation */ > _MyDeallocType.tp_new = PyType_GenericNew; > if (PyType_Ready(&_MyDeallocType) < 0) > return; %} > > ... and that's it! then just use ARGOUTVIEWM_ARRAY1 instead of > ARGOUTVIEW_ARRAY1 and python does the deallocation for you when the > python array is destroyed (see the examples attached). > > Everything compiles, but it would be nice to get rid of the warnings... > > ezalloc_wrap.c:2763: warning: initialization from incompatible pointer > type > ezalloc_wrap.c: In function `_wrap_alloc_managed': > ezalloc_wrap.c:2844: warning: assignment from incompatible pointer type > writing build\temp.win32-2.5\Release\_ezalloc.def > > Compilation: > python setup_alloc.py build > > Testing: > The attached test_alloc.py does 2048 allocations of 1MB each for > managed and unmanaged arrays. Output on my XP laptop with 1GB RAM as > follows: > > ARGOUTVIEWM_ARRAY1 (managed arrays) - 2048 allocations (1048576 bytes > each) > Done! > > ARGOUTVIEW_ARRAY1 (unmanaged, leaking) - 2048 allocations (1048576 > bytes each) > Step 482 failed > > > TODO: > Find a better name for the methods (not sure I like > ARGOUTVIEWM_ARRAY), then do the missing fragments (2D arrays), clear > the warnings and verify the method for a possible inclusion in the > official numpy.i file. > > Thank you for reading! > > Regards, > Egor > From Chris.Barker at noaa.gov Tue Nov 18 14:59:47 2008 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Tue, 18 Nov 2008 11:59:47 -0800 Subject: [Numpy-discussion] ANN: numpy.i - added managed deallocation to ARGOUTVIEW_ARRAY1 (ARGOUTVIEWM_ARRAY1) In-Reply-To: <49231156.1060209@gmail.com> References: <491F8F4A.30009@gmail.com> <49231156.1060209@gmail.com> Message-ID: <49231EB3.8060802@noaa.gov> Egor Zindy wrote: > I just finished coding all the ARGOUTVIEWM fragments in numpy.i and > wrote a wiki to explain what I did: > http://code.google.com/p/ezwidgets/wiki/NumpyManagedMemory thanks! good stuff. It would be great if you could put that in the numpy (scipy?) wiki though, so more folks will find it. -Chris -- Christopher Barker, Ph.D. 
Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From robert.kern at gmail.com Tue Nov 18 15:06:06 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 18 Nov 2008 14:06:06 -0600 Subject: [Numpy-discussion] Getting indices from numpy array with condition In-Reply-To: <26989EA5-572A-4A3A-9374-51A216D54EDA@cs.toronto.edu> References: <000b01c94968$92766970$e7ad810a@gnb.st.com> <26989EA5-572A-4A3A-9374-51A216D54EDA@cs.toronto.edu> Message-ID: <3d375d730811181206j174c7869x2f7526d85c608580@mail.gmail.com> On Tue, Nov 18, 2008 at 04:48, David Warde-Farley wrote: > On 18-Nov-08, at 5:29 AM, Nicolas ROUX wrote: > >> Hi, >> >> Maybe this is not so clever, but I can't find it in the doc. >> I need to get all indices/index of all occurrences of a value in a >> numpy >> array >> >> >> As example: >> >> a = numpy.array([1,2,3],[4,5,6],[7,8,9]) >> I need to get the indice/index of all array elements where a[a>3] > > numpy.where(a > 3) I like to discourage this use of where(). For some reason, back in Numeric's days, where() got stuck with two functionalities. nonzero() is the preferred function for this functionality. IMO, where(cond, if_true, if_false) should be the only use of where(). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From f.yw at hotmail.com Tue Nov 18 15:15:50 2008 From: f.yw at hotmail.com (frank wang) Date: Tue, 18 Nov 2008 13:15:50 -0700 Subject: [Numpy-discussion] Fast and efficient way to convert an array into binary In-Reply-To: References: Message-ID: Hi, I have a large array and I want to convert it into a binary array. For example, y=array([1,2,3]), after the conversion I want the result array([0,0,0,1,0,0,1,0,0,0,1,1]). Each digit is converted into 4 bits in this example. In my real problem I want to convert each digit to 8 bits. My data is numpy.ndarray and the shape is, say, (1000,). Are there fast and efficient solution for this? Thanks Frank _________________________________________________________________ Color coding for safety: Windows Live Hotmail alerts you to suspicious email. http://windowslive.com/Explore/Hotmail?ocid=TXT_TAGLM_WL_hotmail_acq_safety_112008 -------------- next part -------------- An HTML attachment was scrubbed... URL: From rob at roryoung.co.uk Tue Nov 18 15:21:34 2008 From: rob at roryoung.co.uk (Robert Young) Date: Tue, 18 Nov 2008 20:21:34 +0000 Subject: [Numpy-discussion] Reduced row echelon form Message-ID: <32a1c320811181221j1320679y35a0b587d55002c9@mail.gmail.com> Hi, Is there a method in NumPy that reduces a matrix to its reduced row echelon form? I'm brand new to both NumPy and linear algebra, and I'm not quite sure where to look. Thanks Rob -------------- next part -------------- An HTML attachment was scrubbed... URL: From gregor.thalhammer at gmail.com Tue Nov 18 15:42:48 2008 From: gregor.thalhammer at gmail.com (Gregor Thalhammer) Date: Tue, 18 Nov 2008 21:42:48 +0100 Subject: [Numpy-discussion] Fast and efficient way to convert an array into binary In-Reply-To: References: Message-ID: <492328C8.1040902@googlemail.com> frank wang schrieb: > Hi, > > I have a large array and I want to convert it into a binary array. For > example, y=array([1,2,3]), after the conversion I want the result > array([0,0,0,1,0,0,1,0,0,0,1,1]).
Each digit is converted into 4 bits > in this example. In my real problem I want to convert each digit to 8 > bits. My data is numpy.ndarray and the shape is, say, (1000,). > > Are there fast and efficient solution for this? > > >>> a = array([1,2,3], dtype = uint8) >>> unpackbits(a) array([0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1], dtype=uint8) Gregor From robert.kern at gmail.com Tue Nov 18 15:50:06 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 18 Nov 2008 14:50:06 -0600 Subject: [Numpy-discussion] Reduced row echelon form In-Reply-To: <32a1c320811181221j1320679y35a0b587d55002c9@mail.gmail.com> References: <32a1c320811181221j1320679y35a0b587d55002c9@mail.gmail.com> Message-ID: <3d375d730811181250pd65f0ecs858dcb9f101b49fc@mail.gmail.com> On Tue, Nov 18, 2008 at 14:21, Robert Young wrote: > Hi, > > Is there a method in NumPy that reduces a matrix to it's reduced row echelon > form? I'm brand new to both NumPy and linear algebra, and I'm not quite sure > where to look. No, we don't have a function to do that. What do you need it for? While many linear algebra books talk about it, I have never seen a practical use for it outside of manual matrix computations. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From aarchiba at physics.mcgill.ca Tue Nov 18 17:24:08 2008 From: aarchiba at physics.mcgill.ca (Anne Archibald) Date: Tue, 18 Nov 2008 17:24:08 -0500 Subject: [Numpy-discussion] Reduced row echelon form In-Reply-To: <32a1c320811181221j1320679y35a0b587d55002c9@mail.gmail.com> References: <32a1c320811181221j1320679y35a0b587d55002c9@mail.gmail.com> Message-ID: 2008/11/18 Robert Young : > Is there a method in NumPy that reduces a matrix to it's reduced row echelon > form? I'm brand new to both NumPy and linear algebra, and I'm not quite sure > where to look. Unfortunately, reduced row-echelon form doesn't really work when using approximate values: any construction of the reduced row-echelon form forces you many times to ask "is this number exactly zero or just very small?"; if it's zero you do one thing, but if it's very small you do something totally different - usually divide by it. With floating-point numbers, every calculation is approximate, and such a method will blow up completely. If you really need reduced row echelon form, you have to start with exact numbers and use a package that does exact computations (I think SymPy might be a place to start). In practice, numerical linear algebra is rather different from linear algebra as presented in math classes. In particular, problems that you might solve on the chalkboard with row reduction or the like are instead solved by matrix factorizations of special forms. For example LU factorization writes a matrix as a product of a lower-triangular matrix and an upper-triangular matrix. This allows, for example, very easy calculation of determinants. (It also allows fast solution of linear equations, just like reduced row echelon form.) But LU factorization is much more resistant to the problems involved in working with approximate numbers. If you have a problem that is classically solved with something like reduced row echelon form, you first need to think about how to make it make sense in an approximate setting. For example, the rank of a matrix: if two rows are exactly equal, the matrix is singular. 
But if the rows are even slightly different, the matrix is non-singular. There's just no way to make this work precisely using approximate numbers. (Sometimes you can rephrase the problem in a way that does work; singular value decomposition lets you deal with ranks in a sensible fashion, by giving a reasonable criterion for when you want to consider a linear combination equal to zero.) If, however, your question makes sense with approximate numbers (solving Ax=b usually does, for example) but your algorithm for getting the answer doesn't work, look for a numerical matrix decomposition that will work. The singular value decomposition is the swiss army knife for this sort of problem, but others can work better in some cases. Numerical Recipes is the traditional book to recommend in this sort of case. Their algorithms may not be the best, and don't use their code, but their descriptions do instill a sensible caution. Good luck, Anne From jason-sage at creativetrax.com Tue Nov 18 18:17:00 2008 From: jason-sage at creativetrax.com (jason-sage at creativetrax.com) Date: Tue, 18 Nov 2008 17:17:00 -0600 Subject: [Numpy-discussion] Reduced row echelon form In-Reply-To: References: <32a1c320811181221j1320679y35a0b587d55002c9@mail.gmail.com> Message-ID: <49234CEC.9070803@creativetrax.com> Anne Archibald wrote: > 2008/11/18 Robert Young : > > >> Is there a method in NumPy that reduces a matrix to it's reduced row echelon >> form? I'm brand new to both NumPy and linear algebra, and I'm not quite sure >> where to look. >> > > Unfortunately, reduced row-echelon form doesn't really work when using > approximate values: any construction of the reduced row-echelon form > forces you many times to ask "is this number exactly zero or just very > small?"; if it's zero you do one thing, but if it's very small you do > something totally different - usually divide by it. With > floating-point numbers, every calculation is approximate, and such a > method will blow up completely. If you really need reduced row echelon > form, you have to start with exact numbers and use a package that does > exact computations (I think SymPy might be a place to start). > I also recommend Sage here (http://www.sagemath.org). For example, here is a session in which we calculate the reduced echelon form of a matrix over the rationals (QQ): sage: a=matrix(QQ,[[1,2,3],[4,5,6],[7,8,9]]) sage: a.echelon_form() [ 1 0 -1] [ 0 1 2] [ 0 0 0] Sage has quite a bit of advanced functionality for exact linear algebra. (It also uses numpy as a backend to provide some nice functionality for approximate linear algebra). Here is a short tutorial on constructions in linear algebra: http://sagemath.org/doc/const/node28.html Jason From dwf at cs.toronto.edu Wed Nov 19 02:31:04 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Wed, 19 Nov 2008 02:31:04 -0500 Subject: [Numpy-discussion] Getting indices from numpy array with condition In-Reply-To: <3d375d730811181206j174c7869x2f7526d85c608580@mail.gmail.com> References: <000b01c94968$92766970$e7ad810a@gnb.st.com> <26989EA5-572A-4A3A-9374-51A216D54EDA@cs.toronto.edu> <3d375d730811181206j174c7869x2f7526d85c608580@mail.gmail.com> Message-ID: On 18-Nov-08, at 3:06 PM, Robert Kern wrote: > I like to discourage this use of where(). For some reason, back in > Numeric's days, where() got stuck with two functionalities. nonzero() > is the preferred function for this functionality. IMO, where(cond, > if_true, if_false) should be the only use of where(). Hmm. 
nonzero() seems semantically awkward to be calling on a boolean array, n'est pas? Truth be told I didn't know about the if_true, if_false use of where() until just now ;) David From robert.kern at gmail.com Wed Nov 19 02:38:54 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 19 Nov 2008 01:38:54 -0600 Subject: Re: [Numpy-discussion] Getting indices from numpy array with condition In-Reply-To: References: <000b01c94968$92766970$e7ad810a@gnb.st.com> <26989EA5-572A-4A3A-9374-51A216D54EDA@cs.toronto.edu> <3d375d730811181206j174c7869x2f7526d85c608580@mail.gmail.com> Message-ID: <3d375d730811182338g2e8cad97k25671b36379558e0@mail.gmail.com> On Wed, Nov 19, 2008 at 01:31, David Warde-Farley wrote: > On 18-Nov-08, at 3:06 PM, Robert Kern wrote: > >> I like to discourage this use of where(). For some reason, back in >> Numeric's days, where() got stuck with two functionalities. nonzero() >> is the preferred function for this functionality. IMO, where(cond, >> if_true, if_false) should be the only use of where(). > > Hmm. nonzero() seems semantically awkward to be calling on a boolean > array, n'est pas? Why? In Python and numpy, False==0 and True==1. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From stefan at sun.ac.za Wed Nov 19 05:14:55 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Wed, 19 Nov 2008 12:14:55 +0200 Subject: Re: [Numpy-discussion] Reduced row echelon form In-Reply-To: <32a1c320811181221j1320679y35a0b587d55002c9@mail.gmail.com> References: <32a1c320811181221j1320679y35a0b587d55002c9@mail.gmail.com> Message-ID: <9457e7c80811190214wac6b9f2k13f763992b874b6c@mail.gmail.com> Hi Robert, 2008/11/18 Robert Young : > Is there a method in NumPy that reduces a matrix to it's reduced row echelon > form? I'm brand new to both NumPy and linear algebra, and I'm not quite sure > where to look. I use the Sympy package. It is small, easy to install, runs on pure Python, and gets the job done:

>>> x = np.random.random((3,3))
>>> import sympy
>>> sympy.Matrix(x).rref()
([1, 0, 0]
[0, 1, 0]
[0, 0, 1], [0, 1, 2])

If you are interested, I can also provide you with a version that runs under pure NumPy, using the LU-decomposition. Cheers Stéfan From scott.sinclair.za at gmail.com Wed Nov 19 06:24:31 2008 From: scott.sinclair.za at gmail.com (Scott Sinclair) Date: Wed, 19 Nov 2008 13:24:31 +0200 Subject: Re: [Numpy-discussion] Getting indices from numpy array with condition In-Reply-To: <3d375d730811182338g2e8cad97k25671b36379558e0@mail.gmail.com> References: <000b01c94968$92766970$e7ad810a@gnb.st.com> <26989EA5-572A-4A3A-9374-51A216D54EDA@cs.toronto.edu> <3d375d730811181206j174c7869x2f7526d85c608580@mail.gmail.com> <3d375d730811182338g2e8cad97k25671b36379558e0@mail.gmail.com> Message-ID: <6a17e9ee0811190324p47cfadfdtf27f25d374dde35e@mail.gmail.com> 2008/11/19 Robert Kern : > On Wed, Nov 19, 2008 at 01:31, David Warde-Farley wrote: >> On 18-Nov-08, at 3:06 PM, Robert Kern wrote: >> >>> I like to discourage this use of where(). For some reason, back in >>> Numeric's days, where() got stuck with two functionalities. nonzero() >>> is the preferred function for this functionality. IMO, where(cond, >>> if_true, if_false) should be the only use of where(). >> >> Hmm. nonzero() seems semantically awkward to be calling on a boolean >> array, n'est pas? > > Why? In Python and numpy, False==0 and True==1.
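To make the quoted point concrete, a small (illustrative) session:

>>> import numpy as np
>>> a = np.array([1, 2, 3, 4])
>>> a > 2                    # a conditional expression gives a boolean array
array([False, False,  True,  True], dtype=bool)
>>> (a > 2).nonzero()        # indices where the condition holds
(array([2, 3]),)
>>> np.where(a > 2, a, 0)    # the where(cond, if_true, if_false) form
array([0, 0, 3, 4])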
Well, using nonzero() isn't actually all that obvious until you understand that 1) a conditional expression like (a < 3) returns a boolean array *and* 2) that False==0 and True==1. These two things are not necessarily known to users who are scientists/engineers etc. I've added an example of this use case at http://docs.scipy.org/numpy/docs/numpy.core.fromnumeric.nonzero/ and added an FAQ http://www.scipy.org/FAQ , so that there's something to point to in future. Cheers, Scott From gael.varoquaux at normalesup.org Wed Nov 19 07:43:49 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Wed, 19 Nov 2008 13:43:49 +0100 Subject: Re: [Numpy-discussion] Memmapping .npy In-Reply-To: <3d375d730810201245l6b235c9dmd7a7b732e0a386a5@mail.gmail.com> References: <20081020192056.GB21981@phare.normalesup.org> <20081020192531.GC21981@phare.normalesup.org> <3d375d730810201227s7ae8f7f4jab684cdc0bde03da@mail.gmail.com> <20081020193028.GD21981@phare.normalesup.org> <3d375d730810201245l6b235c9dmd7a7b732e0a386a5@mail.gmail.com> Message-ID: <20081119124349.GA3691@phare.normalesup.org> On Mon, Oct 20, 2008 at 02:45:57PM -0500, Robert Kern wrote: > > If it would be desireable, I could try to find time for a patch. I could > > use this in my work, and if I am going to implement it, I might as well > > do it for everybody. > load() would need to grow a mode= keyword argument to properly support > memory-mapping. Possibly, we could change it to > def load(filename, mmap_mode=None): > ... > With mmap_mode=None, just do a plain read; otherwise mmap with that > particular mode. We can introduce that immediately in 1.3 since > memmap=True never worked. I finally coded a trivial patch to do this. It is not something fancy, but it answers my needs. It is ticket 954: http://scipy.org/scipy/numpy/ticket/954
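With the patch applied, usage would look something like this (a sketch; the keyword name follows Robert's suggestion quoted above, and the exact repr may differ):

>>> import numpy as np
>>> np.save('data.npy', np.arange(10.))
>>> a = np.load('data.npy', mmap_mode='r')   # memory-mapped, read-only, instead of read into memory
>>> a[:3]
memmap([ 0.,  1.,  2.])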
Gaël From ndbecker2 at gmail.com Wed Nov 19 08:15:47 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Wed, 19 Nov 2008 08:15:47 -0500 Subject: [Numpy-discussion] convert pair of real arrays -> cmplx Message-ID: What's the best way to convert a pair of real arrays representing real and imag parts to a complex array? From aisaac at american.edu Wed Nov 19 10:14:33 2008 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 19 Nov 2008 10:14:33 -0500 Subject: [Numpy-discussion] question about the documentation of linalg.solve Message-ID: <49242D59.9080007@american.edu> If I look at help(np.linalg.solve) I am told what it does but not how it does it. If I look at http://www.scipy.org/doc/numpy_api_docs/numpy.linalg.linalg.html#solve there is even less info. I'll guess the algorithm is Gaussian elimination, but how would I use the documentation to confirm this? (Shouldn't I be able to?) I don't think possible differences from the "lite" version are a reason for saying nothing... Thanks, Alan Isaac From discerptor at gmail.com Wed Nov 19 10:46:37 2008 From: discerptor at gmail.com (Joshua Lippai) Date: Wed, 19 Nov 2008 07:46:37 -0800 Subject: Re: [Numpy-discussion] question about the documentation of linalg.solve In-Reply-To: <49242D59.9080007@american.edu> References: <49242D59.9080007@american.edu> Message-ID: <9911419a0811190746t6a8d1595odbc0b7a84b254f18@mail.gmail.com> If you use IPython and use "numpy.linalg.solve??", you can see the source code of the file numpy/linalg/linalg.py that corresponds to the solve(a,b) function, not just the docstring:

def solve(a, b):
    """
    Solve the equation ``a x = b`` for ``x``.

    Parameters
    ----------
    a : array_like, shape (M, M)
        Input equation coefficients.
    b : array_like, shape (M,)
        Equation target values.

    Returns
    -------
    x : array, shape (M,)

    Raises
    ------
    LinAlgError
        If `a` is singular or not square.

    Examples
    --------
    Solve the system of equations ``3 * x0 + x1 = 9`` and ``x0 + 2 * x1 = 8``:

    >>> a = np.array([[3,1], [1,2]])
    >>> b = np.array([9,8])
    >>> x = np.linalg.solve(a, b)
    >>> x
    array([ 2.,  3.])

    Check that the solution is correct:

    >>> (np.dot(a, x) == b).all()
    True

    """
    a, _ = _makearray(a)
    b, wrap = _makearray(b)
    one_eq = len(b.shape) == 1
    if one_eq:
        b = b[:, newaxis]
    _assertRank2(a, b)
    _assertSquareness(a)
    n_eq = a.shape[0]
    n_rhs = b.shape[1]
    if n_eq != b.shape[0]:
        raise LinAlgError, 'Incompatible dimensions'
    t, result_t = _commonType(a, b)
    # lapack_routine = _findLapackRoutine('gesv', t)
    if isComplexType(t):
        lapack_routine = lapack_lite.zgesv
    else:
        lapack_routine = lapack_lite.dgesv
    a, b = _fastCopyAndTranspose(t, a, b)
    pivots = zeros(n_eq, fortran_int)
    results = lapack_routine(n_eq, n_rhs, a, n_eq, pivots, b, n_eq, 0)
    if results['info'] > 0:
        raise LinAlgError, 'Singular matrix'
    if one_eq:
        return wrap(b.ravel().astype(result_t))
    else:
        return wrap(b.transpose().astype(result_t))

If this isn't enough, you may want to look at the whole file yourself. Josh On Wed, Nov 19, 2008 at 7:14 AM, Alan G Isaac wrote: > If I look at help(np.linalg.solve) I am > told what it does but not how it does it. > If I look at > http://www.scipy.org/doc/numpy_api_docs/numpy.linalg.linalg.html#solve > there is even less info. I'll guess the > algorithm is Gaussian elimination, but how > would I use the documentation to confirm this? > (Shouldn't I be able to?) > I don't think possible differences from the "lite" > version are a reason for saying nothing... > > Thanks, > Alan Isaac > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > From charlesr.harris at gmail.com Wed Nov 19 13:10:54 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 19 Nov 2008 11:10:54 -0700 Subject: Re: [Numpy-discussion] question about the documentation of linalg.solve In-Reply-To: <49242D59.9080007@american.edu> References: <49242D59.9080007@american.edu> Message-ID: On Wed, Nov 19, 2008 at 8:14 AM, Alan G Isaac wrote: > If I look at help(np.linalg.solve) I am > told what it does but not how it does it. > If I look at > http://www.scipy.org/doc/numpy_api_docs/numpy.linalg.linalg.html#solve > there is even less info. I'll guess the > algorithm is Gaussian elimination, but how > would I use the documentation to confirm this? > (Shouldn't I be able to?) > I don't think possible differences from the "lite" > version are a reason for saying nothing... > Gaussian elimination with pivoting, I believe. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Wed Nov 19 14:27:11 2008 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 19 Nov 2008 14:27:11 -0500 Subject: Re: [Numpy-discussion] question about the documentation of linalg.solve In-Reply-To: References: <49242D59.9080007@american.edu> Message-ID: <4924688F.3010408@american.edu> Thanks Charles and Josh, but my question about the documentation goal remains. Here is how this came up. I mentioned to a class that I have using NumPy that solving Ax=b with an inverse is computationally wasteful and also has accuracy problems, and I recommend using `solve` instead. So the question arises: what algorithm does `solve` use?
I speculate Gaussian elimination, but cannot get a quick answer from `help` nor from the web. In contrast, if they are using Matlab, they look at the documentation, which tells them the algorithm is Gaussian elimination with partial pivoting and provides a link to algorithm details. So my question is not just what is the algorithm but also, what is the documentation goal? Thanks, Alan From robert.kern at gmail.com Wed Nov 19 15:01:19 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 19 Nov 2008 14:01:19 -0600 Subject: Re: [Numpy-discussion] convert pair of real arrays -> cmplx In-Reply-To: References: Message-ID: <3d375d730811191201k31ee18cen5381d8ba766810c7@mail.gmail.com> On Wed, Nov 19, 2008 at 07:15, Neal Becker wrote: > What's the best way to convert a pair of real arrays representing real and imag parts to a complex array? What do you mean by "best"? Easiest to type? Easiest to read? Fastest? Memory efficient? Ideally, they'd all be the same, but alas, they are not. Here are two possibilities:

z = real + 1j*imag

z = np.empty(real.shape, complex)
z.real = real
z.imag = imag

You can wrap the latter into a function, and then you probably would have the best of both worlds.
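For instance, something along these lines (an untested sketch):

import numpy as np

def make_complex(real, imag):
    # allocate once, then fill the real and imaginary parts in place
    z = np.empty(real.shape, dtype=complex)
    z.real = real
    z.imag = imag
    return z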
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From silva at lma.cnrs-mrs.fr Wed Nov 19 17:20:24 2008 From: silva at lma.cnrs-mrs.fr (Fabrice Silva) Date: Wed, 19 Nov 2008 23:20:24 +0100 Subject: Re: [Numpy-discussion] question about the documentation of linalg.solve In-Reply-To: <4924688F.3010408@american.edu> References: <49242D59.9080007@american.edu> <4924688F.3010408@american.edu> Message-ID: <1227133224.2923.4.camel@localhost.localdomain> Le mercredi 19 novembre 2008 à 14:27 -0500, Alan G Isaac a écrit : > So my question is not just what is the algorithm > but also, what is the documentation goal? Concerning the algorithm (only): in Joshua's answer, you might have seen that solve is a wrapper to lapack routines *gesv (z* or d* depending on the input type). http://www.netlib.org/lapack/complex16/zgesv.f and http://www.netlib.org/lapack/double/dgesv.f mention an LU decomposition with partial pivoting and row interchanges. -- Fabricio From ndbecker2 at gmail.com Wed Nov 19 17:22:59 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Wed, 19 Nov 2008 17:22:59 -0500 Subject: [Numpy-discussion] matio Message-ID: A library for matlab io, perhaps this could be useful? http://sourceforge.net/projects/matio From gael.varoquaux at normalesup.org Wed Nov 19 18:30:01 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 20 Nov 2008 00:30:01 +0100 Subject: Re: [Numpy-discussion] question about the documentation of linalg.solve In-Reply-To: <4924688F.3010408@american.edu> References: <49242D59.9080007@american.edu> <4924688F.3010408@american.edu> Message-ID: <20081119233001.GB30187@phare.normalesup.org> On Wed, Nov 19, 2008 at 02:27:11PM -0500, Alan G Isaac wrote: > So my question is not just what is the algorithm > but also, what is the documentation goal? That's a good question. I feel the documentation should be as precise as possible, and thus answer this question. Currently it doesn't, but that's just a general sign that our docs are severely lacking. The problem is that we are lacking man-power to write and improve them. This is one of the reasons we have deployed the doc server that allows you to easily improve the numpy docs. To improve this doc and raise it to the standards you believe are suited, it is as easy as getting a login on the server, and editing: http://docs.scipy.org/numpy/docs/numpy.linalg.linalg.solve/ The changes are reviewed, so don't hesitate. Gaël From dfranci at seas.upenn.edu Wed Nov 19 18:30:08 2008 From: dfranci at seas.upenn.edu (Frank Lagor) Date: Wed, 19 Nov 2008 18:30:08 -0500 Subject: [Numpy-discussion] simple python question on what happens when functions are declared Message-ID: <9fddf64a0811191530r2fae0e6bmdbaedfd0a8206896@mail.gmail.com> Hi, Can someone please explain what happens here: In testfile.py:

x = 5
def arbFunc():
    print x
    del x
    print "Done with Test"

arbFunc()

Then run the file with python testfile.py As opposed to

x = 5
print x
del x
print "Done with Test"

Which of course works fine. This question is of importance to me, because I may make very large arrays that strain the limits of the machine's memory and I need to clean up after myself when I generate other large objects temporarily. Thanks in advance! Frank From robert.kern at gmail.com Wed Nov 19 18:39:08 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 19 Nov 2008 17:39:08 -0600 Subject: Re: [Numpy-discussion] simple python question on what happens when functions are declared In-Reply-To: <9fddf64a0811191530r2fae0e6bmdbaedfd0a8206896@mail.gmail.com> References: <9fddf64a0811191530r2fae0e6bmdbaedfd0a8206896@mail.gmail.com> Message-ID: <3d375d730811191539k44666306td19889d411a68452@mail.gmail.com> On Wed, Nov 19, 2008 at 17:30, Frank Lagor wrote: > Hi, > > Can someone please explain what happens here: > > In testfile.py: > > x = 5 > def arbFunc(): > print x

You get an UnboundLocalError here.

> del x

During the compilation process, Python sees this statement and assumes that you meant 'x' to be local to this function. Otherwise "del x" has no meaning. A del statement inside a function cannot affect the global namespace without the statement "global x" at the top of the function. I *don't* recommend using the global statement in order to do this. Acquire the resource inside the function, then it will go away when you return from the function. If you need to pass it to other functions, pass it as an argument. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From doutriaux1 at llnl.gov Wed Nov 19 18:57:45 2008 From: doutriaux1 at llnl.gov (=?UTF-8?Q?Charles_=D8=B3=D9=85=D9=8A=D8=B1_Doutriaux?=) Date: Wed, 19 Nov 2008 15:57:45 -0800 Subject: [Numpy-discussion] numpy.ma.mod missing Message-ID: <1BA1100E-E1F9-4574-AF7F-C05E30EBEDB9@llnl.gov> Hello, Can I request that "mod" be added to numpy.ma ? Thx, C> From dfranci at seas.upenn.edu Wed Nov 19 20:10:27 2008 From: dfranci at seas.upenn.edu (Frank Lagor) Date: Wed, 19 Nov 2008 20:10:27 -0500 Subject: Re: [Numpy-discussion] simple python question on what happens when functions are declared In-Reply-To: <3d375d730811191539k44666306td19889d411a68452@mail.gmail.com> References: <9fddf64a0811191530r2fae0e6bmdbaedfd0a8206896@mail.gmail.com> <3d375d730811191539k44666306td19889d411a68452@mail.gmail.com> Message-ID: <9fddf64a0811191710y501b5f85o41e097efd62fa087@mail.gmail.com> Thank you very much Robert. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From charlesr.harris at gmail.com Wed Nov 19 20:33:09 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 19 Nov 2008 18:33:09 -0700 Subject: [Numpy-discussion] question about the documentation of linalg.solve In-Reply-To: <1227133224.2923.4.camel@localhost.localdomain> References: <49242D59.9080007@american.edu> <4924688F.3010408@american.edu> <1227133224.2923.4.camel@localhost.localdomain> Message-ID: On Wed, Nov 19, 2008 at 3:20 PM, Fabrice Silva wrote: > Le mercredi 19 novembre 2008 ? 14:27 -0500, Alan G Isaac a ?crit : > > So my question is not just what is the algorithm > > but also, what is the documentation goal? > > Concerning the algorithm (only): > in Joshua answer, you have might have seen that solve is a wrapper to > lapack routines *gesv (z* or d* depending on the input type). > Which, IIRC, calls *getrf to get the LU factorization of the lhs matrix A. Here: * DGESV computes the solution to a real system of linear equations * A * X = B, * where A is an N-by-N matrix and X and B are N-by-NRHS matrices. * * The LU decomposition with partial pivoting and row interchanges is * used to factor A as * A = P * L * U, * where P is a permutation matrix, L is unit lower triangular, and U is * upper triangular. The factored form of A is then used to solve the * system of equations A * X = B. * Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From dfranci at seas.upenn.edu Wed Nov 19 23:36:59 2008 From: dfranci at seas.upenn.edu (Frank Lagor) Date: Wed, 19 Nov 2008 23:36:59 -0500 Subject: [Numpy-discussion] unpickle Message-ID: <9fddf64a0811192036p6ead5aado93b0d626dc56a097@mail.gmail.com> I have only used pickle a little and I did not see this in the docstring: Is there anyway to unpickle in reverse order? It appears the pickling works like a queue. I execute: pickle.dump(obj1,file) pickle.dump(obj2,file) Then when I go to retrieve: pickle.load(file) returns obj1 pickle.load(file) returns obj2 I am looking for a quick way to grab the last thing pickled (like a stack instead). Thanks in advance, Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Wed Nov 19 23:40:48 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 19 Nov 2008 22:40:48 -0600 Subject: [Numpy-discussion] unpickle In-Reply-To: <9fddf64a0811192036p6ead5aado93b0d626dc56a097@mail.gmail.com> References: <9fddf64a0811192036p6ead5aado93b0d626dc56a097@mail.gmail.com> Message-ID: <3d375d730811192040j68b8279dr352763572257bd8f@mail.gmail.com> On Wed, Nov 19, 2008 at 22:36, Frank Lagor wrote: > I have only used pickle a little and I did not see this in the docstring: > > Is there anyway to unpickle in reverse order? No. This, and your previous question, are mostly off-topic for numpy-discussion. You may want to ask such questions in the future on more general Python mailing lists. http://www.python.org/community/lists/ -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From scott.sinclair.za at gmail.com Thu Nov 20 00:58:52 2008 From: scott.sinclair.za at gmail.com (Scott Sinclair) Date: Thu, 20 Nov 2008 07:58:52 +0200 Subject: [Numpy-discussion] question about the documentation of linalg.solve In-Reply-To: References: <49242D59.9080007@american.edu> <4924688F.3010408@american.edu> <1227133224.2923.4.camel@localhost.localdomain> Message-ID: <6a17e9ee0811192158v7e39777bxa9508a88f4941a3e@mail.gmail.com> 2008/11/20 Charles R Harris : > > On Wed, Nov 19, 2008 at 3:20 PM, Fabrice Silva > wrote: >> >> Le mercredi 19 novembre 2008 ? 14:27 -0500, Alan G Isaac a ?crit : >> > So my question is not just what is the algorithm >> > but also, what is the documentation goal? >> >> Concerning the algorithm (only): >> in Joshua answer, you have might have seen that solve is a wrapper to >> lapack routines *gesv (z* or d* depending on the input type). > > Which, IIRC, calls *getrf to get the LU factorization of the lhs matrix A. > Here: > > * DGESV computes the solution to a real system of linear equations > * A * X = B, > * where A is an N-by-N matrix and X and B are N-by-NRHS matrices. > > * > * The LU decomposition with partial pivoting and row interchanges is > * used to factor A as > * A = P * L * U, > * where P is a permutation matrix, L is unit lower triangular, and U is > * upper triangular. The factored form of A is then used to solve the > > * system of equations A * X = B. > * It's not always fun to read the code in order to find out what a function does. So I guess the documentation goal is to eventually add sufficient detail, for those who want to know what's happening without diving into the source code. A Notes section giving an overview of the algorithm has been added to the docstring http://docs.scipy.org/numpy/docs/numpy.linalg.linalg.solve/. I didn't feel comfortable quoting directly from the LAPACK comments, so maybe someone else can look into adding more detail. Cheers, Scott From charlesr.harris at gmail.com Thu Nov 20 01:20:25 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 19 Nov 2008 23:20:25 -0700 Subject: [Numpy-discussion] question about the documentation of linalg.solve In-Reply-To: <6a17e9ee0811192158v7e39777bxa9508a88f4941a3e@mail.gmail.com> References: <49242D59.9080007@american.edu> <4924688F.3010408@american.edu> <1227133224.2923.4.camel@localhost.localdomain> <6a17e9ee0811192158v7e39777bxa9508a88f4941a3e@mail.gmail.com> Message-ID: On Wed, Nov 19, 2008 at 10:58 PM, Scott Sinclair wrote: > 2008/11/20 Charles R Harris : > > > > On Wed, Nov 19, 2008 at 3:20 PM, Fabrice Silva > > wrote: > >> > >> Le mercredi 19 novembre 2008 ? 14:27 -0500, Alan G Isaac a ?crit : > >> > So my question is not just what is the algorithm > >> > but also, what is the documentation goal? > >> > >> Concerning the algorithm (only): > >> in Joshua answer, you have might have seen that solve is a wrapper to > >> lapack routines *gesv (z* or d* depending on the input type). > > > > Which, IIRC, calls *getrf to get the LU factorization of the lhs matrix > A. > > Here: > > > > * DGESV computes the solution to a real system of linear equations > > * A * X = B, > > * where A is an N-by-N matrix and X and B are N-by-NRHS matrices. > > > > * > > * The LU decomposition with partial pivoting and row interchanges is > > * used to factor A as > > * A = P * L * U, > > * where P is a permutation matrix, L is unit lower triangular, and U is > > * upper triangular. 
The factored form of A is then used to solve the > > > > * system of equations A * X = B. > > * > > It's not always fun to read the code in order to find out what a > function does. So I guess the documentation goal is to eventually add > sufficient detail, for those who want to know what's happening without > diving into the source code. > > A Notes section giving an overview of the algorithm has been added to > the docstring http://docs.scipy.org/numpy/docs/numpy.linalg.linalg.solve/. > > I didn't feel comfortable quoting directly from the LAPACK comments, > so maybe someone else can look into adding more detail. > Looks fine to me. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From hoytak at cs.ubc.ca Thu Nov 20 02:23:16 2008 From: hoytak at cs.ubc.ca (Hoyt Koepke) Date: Wed, 19 Nov 2008 23:23:16 -0800 Subject: [Numpy-discussion] glibc memory corruption when running numpy.test() Message-ID: <4db580fd0811192323o2f971c8as6ec253e79e3220a6@mail.gmail.com> Hello, Sorry about this bug report.... I know how much us programmers like these kind of bugs :-/. In testing the latest svn version of numpy (6083), I get a memory corruption error: test_csingle (test_linalg.TestSolve) ... ok test_double (test_linalg.TestSolve) ... ok test_empty (test_linalg.TestSolve) ... ok Check that matrix type is preserved. ... ok Check that matrix type is preserved. ... ok test_nonarray (test_linalg.TestSolve) ... ok test_single (test_linalg.TestSolve) ... ok Ticket #652 ... *** glibc detected *** python: free(): invalid next size (fast): 0x0000000002c80ef0 *** ... and it hangs. I don't get any errors in my installation. The system is a 64bit intel xeon server running debian unstable, compiler is gcc-4.3.2 This *doesn't* happen with version 1.2.1. I do get some other test failures with that version, which I can post if you all think they are related, but they don't seem to be at first glance. I've attached the full log. If there is anything more you want me to do with this, I'd be happy to. Thanks! --Hoyt ++++++++++++++++++++++++++++++++++++++++++++++++ + Hoyt Koepke + University of Washington Department of Statistics + http://www.stat.washington.edu/~hoytak/ + hoytak at gmail.com ++++++++++++++++++++++++++++++++++++++++++ -------------- next part -------------- A non-text attachment was scrubbed... Name: numpy_test.log.gz Type: application/x-gzip Size: 12623 bytes Desc: not available URL: From cournapeau at cslab.kecl.ntt.co.jp Thu Nov 20 02:33:47 2008 From: cournapeau at cslab.kecl.ntt.co.jp (David Cournapeau) Date: Thu, 20 Nov 2008 16:33:47 +0900 Subject: [Numpy-discussion] glibc memory corruption when running numpy.test() In-Reply-To: <4db580fd0811192323o2f971c8as6ec253e79e3220a6@mail.gmail.com> References: <4db580fd0811192323o2f971c8as6ec253e79e3220a6@mail.gmail.com> Message-ID: <1227166428.31807.3.camel@bbc8> On Wed, 2008-11-19 at 23:23 -0800, Hoyt Koepke wrote: > Hello, > > Sorry about this bug report.... I know how much us programmers like > these kind of bugs :-/. > > In testing the latest svn version of numpy (6083), I get a memory > corruption error: > > test_csingle (test_linalg.TestSolve) ... ok > test_double (test_linalg.TestSolve) ... ok > test_empty (test_linalg.TestSolve) ... ok > Check that matrix type is preserved. ... ok > Check that matrix type is preserved. ... ok > test_nonarray (test_linalg.TestSolve) ... ok > test_single (test_linalg.TestSolve) ... ok > Ticket #652 ... 
*** glibc detected *** python: free(): invalid next > size (fast): 0x0000000002c80ef0 *** > > ... and it hangs. > > I don't get any errors in my installation. The system is a 64bit > intel xeon server running debian unstable, compiler is gcc-4.3.2 Could you give us the complete build log (e.g. when build from scratch: remove the build directory and give us the log of python setup.py build &> build.log) ? You have tens of failures which are more likely due to a build problem, David From hoytak at cs.ubc.ca Thu Nov 20 02:41:26 2008 From: hoytak at cs.ubc.ca (Hoyt Koepke) Date: Wed, 19 Nov 2008 23:41:26 -0800 Subject: [Numpy-discussion] glibc memory corruption when running numpy.test() In-Reply-To: <1227166428.31807.3.camel@bbc8> References: <4db580fd0811192323o2f971c8as6ec253e79e3220a6@mail.gmail.com> <1227166428.31807.3.camel@bbc8> Message-ID: <4db580fd0811192341o7e89f630leaded5cf55357ee1@mail.gmail.com> Attached. I had a bunch of issues getting things to install with lapack and ATLAS. In the end I had to specify the following environment variables (+ appropriate command line options to ATLAS & lapack) to get it to work. If there's an easier way, let me know. export FLAGS='-march=core2 -mtune=core2 -m64' export CFLAGS="$FLAGS" export CPPFLAGS="$FLAGS" export CXXFLAGS="$FLAGS" export FFLAGS="$FLAGS" export F77FLAGS="$FLAGS" export LDFLAGS="$FLAGS" When compiling numpy, I had to manually add '-shared' to LDFLAGS to get it to work. Thanks! --Hoyt On Wed, Nov 19, 2008 at 11:33 PM, David Cournapeau wrote: > On Wed, 2008-11-19 at 23:23 -0800, Hoyt Koepke wrote: >> Hello, >> >> Sorry about this bug report.... I know how much us programmers like >> these kind of bugs :-/. >> >> In testing the latest svn version of numpy (6083), I get a memory >> corruption error: >> >> test_csingle (test_linalg.TestSolve) ... ok >> test_double (test_linalg.TestSolve) ... ok >> test_empty (test_linalg.TestSolve) ... ok >> Check that matrix type is preserved. ... ok >> Check that matrix type is preserved. ... ok >> test_nonarray (test_linalg.TestSolve) ... ok >> test_single (test_linalg.TestSolve) ... ok >> Ticket #652 ... *** glibc detected *** python: free(): invalid next >> size (fast): 0x0000000002c80ef0 *** >> >> ... and it hangs. >> >> I don't get any errors in my installation. The system is a 64bit >> intel xeon server running debian unstable, compiler is gcc-4.3.2 > > Could you give us the complete build log (e.g. when build from scratch: > remove the build directory and give us the log of python setup.py build > &> build.log) ? > > You have tens of failures which are more likely due to a build problem, > > David > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > -- ++++++++++++++++++++++++++++++++++++++++++++++++ + Hoyt Koepke + University of Washington Department of Statistics + http://www.stat.washington.edu/~hoytak/ + hoytak at gmail.com ++++++++++++++++++++++++++++++++++++++++++ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: build.log.gz Type: application/x-gzip Size: 6346 bytes Desc: not available URL: From hoytak at cs.ubc.ca Thu Nov 20 02:44:04 2008 From: hoytak at cs.ubc.ca (Hoyt Koepke) Date: Wed, 19 Nov 2008 23:44:04 -0800 Subject: [Numpy-discussion] glibc memory corruption when running numpy.test() In-Reply-To: <4db580fd0811192341o7e89f630leaded5cf55357ee1@mail.gmail.com> References: <4db580fd0811192323o2f971c8as6ec253e79e3220a6@mail.gmail.com> <1227166428.31807.3.camel@bbc8> <4db580fd0811192341o7e89f630leaded5cf55357ee1@mail.gmail.com> Message-ID: <4db580fd0811192344q1056c6ceh6b099d5ce653425@mail.gmail.com> Sorry; I also added '-fPIC' to the compile flags, though it may work without having it there. I just got errors related to not having it and so threw it at everything... --Hoyt On Wed, Nov 19, 2008 at 11:41 PM, Hoyt Koepke wrote: > Attached. > > I had a bunch of issues getting things to install with lapack and > ATLAS. In the end I had to specify the following environment > variables (+ appropriate command line options to ATLAS & lapack) to > get it to work. If there's an easier way, let me know. > > > export FLAGS='-march=core2 -mtune=core2 -m64' > export CFLAGS="$FLAGS" > export CPPFLAGS="$FLAGS" > export CXXFLAGS="$FLAGS" > export FFLAGS="$FLAGS" > export F77FLAGS="$FLAGS" > export LDFLAGS="$FLAGS" > > > When compiling numpy, I had to manually add '-shared' to LDFLAGS to > get it to work. > > > Thanks! > --Hoyt > > > > On Wed, Nov 19, 2008 at 11:33 PM, David Cournapeau > wrote: >> On Wed, 2008-11-19 at 23:23 -0800, Hoyt Koepke wrote: >>> Hello, >>> >>> Sorry about this bug report.... I know how much us programmers like >>> these kind of bugs :-/. >>> >>> In testing the latest svn version of numpy (6083), I get a memory >>> corruption error: >>> >>> test_csingle (test_linalg.TestSolve) ... ok >>> test_double (test_linalg.TestSolve) ... ok >>> test_empty (test_linalg.TestSolve) ... ok >>> Check that matrix type is preserved. ... ok >>> Check that matrix type is preserved. ... ok >>> test_nonarray (test_linalg.TestSolve) ... ok >>> test_single (test_linalg.TestSolve) ... ok >>> Ticket #652 ... *** glibc detected *** python: free(): invalid next >>> size (fast): 0x0000000002c80ef0 *** >>> >>> ... and it hangs. >>> >>> I don't get any errors in my installation. The system is a 64bit >>> intel xeon server running debian unstable, compiler is gcc-4.3.2 >> >> Could you give us the complete build log (e.g. when build from scratch: >> remove the build directory and give us the log of python setup.py build >> &> build.log) ? 
>> >> You have tens of failures which are more likely due to a build problem, >> >> David >> >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at scipy.org >> http://projects.scipy.org/mailman/listinfo/numpy-discussion >> > > > > -- > > ++++++++++++++++++++++++++++++++++++++++++++++++ > + Hoyt Koepke > + University of Washington Department of Statistics > + http://www.stat.washington.edu/~hoytak/ > + hoytak at gmail.com > ++++++++++++++++++++++++++++++++++++++++++ > -- ++++++++++++++++++++++++++++++++++++++++++++++++ + Hoyt Koepke + University of Washington Department of Statistics + http://www.stat.washington.edu/~hoytak/ + hoytak at gmail.com ++++++++++++++++++++++++++++++++++++++++++ From cournapeau at cslab.kecl.ntt.co.jp Thu Nov 20 02:52:15 2008 From: cournapeau at cslab.kecl.ntt.co.jp (David Cournapeau) Date: Thu, 20 Nov 2008 16:52:15 +0900 Subject: [Numpy-discussion] glibc memory corruption when running numpy.test() In-Reply-To: <4db580fd0811192341o7e89f630leaded5cf55357ee1@mail.gmail.com> References: <4db580fd0811192323o2f971c8as6ec253e79e3220a6@mail.gmail.com> <1227166428.31807.3.camel@bbc8> <4db580fd0811192341o7e89f630leaded5cf55357ee1@mail.gmail.com> Message-ID: <1227167535.31807.7.camel@bbc8> On Wed, 2008-11-19 at 23:41 -0800, Hoyt Koepke wrote: > Attached. > > I had a bunch of issues getting things to install with lapack and > ATLAS. Which ones ? > In the end I had to specify the following environment > variables (+ appropriate command line options to ATLAS & lapack) to > get it to work. If there's an easier way, let me know. You should not do that, it won't work as you would expect. It is a good rule to assume that you should never set the *FLAGS variable unless you really know what you are doing. First, can you try without any blas/lapack (Do BLAS=None LAPACK=None ATLAS=None python setup.py ....) ? Also, please note that ATLAS 3.9.4 is a development version; you should use 3.8.2 instead, David From gael.varoquaux at normalesup.org Thu Nov 20 03:17:04 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 20 Nov 2008 09:17:04 +0100 Subject: [Numpy-discussion] question about the documentation of linalg.solve In-Reply-To: <6a17e9ee0811192158v7e39777bxa9508a88f4941a3e@mail.gmail.com> References: <49242D59.9080007@american.edu> <4924688F.3010408@american.edu> <1227133224.2923.4.camel@localhost.localdomain> <6a17e9ee0811192158v7e39777bxa9508a88f4941a3e@mail.gmail.com> Message-ID: <20081120081704.GB30020@phare.normalesup.org> On Thu, Nov 20, 2008 at 07:58:52AM +0200, Scott Sinclair wrote: > A Notes section giving an overview of the algorithm has been added to > the docstring http://docs.scipy.org/numpy/docs/numpy.linalg.linalg.solve/. I thank you very much for doing this, and I reckon many users should be grateful. This is the way forward to making numpy rock. 
Ga?l From hoytak at cs.ubc.ca Thu Nov 20 03:26:50 2008 From: hoytak at cs.ubc.ca (Hoyt Koepke) Date: Thu, 20 Nov 2008 00:26:50 -0800 Subject: [Numpy-discussion] glibc memory corruption when running numpy.test() In-Reply-To: <4db580fd0811200018m13e49588h3ccdd45586ad0c77@mail.gmail.com> References: <4db580fd0811192323o2f971c8as6ec253e79e3220a6@mail.gmail.com> <1227166428.31807.3.camel@bbc8> <4db580fd0811192341o7e89f630leaded5cf55357ee1@mail.gmail.com> <1227167535.31807.7.camel@bbc8> <4db580fd0811200016j4bcf9a94l65fd4b1a7ea0d84c@mail.gmail.com> <4db580fd0811200018m13e49588h3ccdd45586ad0c77@mail.gmail.com> Message-ID: <4db580fd0811200026n103c6cbfp5e04354f560d4d1c@mail.gmail.com> Hi, Sorry; my first message wasn't under 40 KB with the attachments, so here's the same message but with the log files at http://www.stat.washington.edu/~hoytak/logs.tar.bz2. > Which ones ? Sorry; ATLAS = 3.9.4 and lapack=3.2. I'll give 3.8.2 a shot per your advice. > You should not do that, it won't work as you would expect. It is a good > rule to assume that you should never set the *FLAGS variable unless you > really know what you are doing. Fair enough. In my case I was having some issues with 32 bit and 64 bit mismatches (I think that fftw defaulted to 32 bit), so I set the flags variables. I also wanted to get the extra few percent of performance by using the tuning flags. I'll back up a bit now before playing with them now, though. > First, can you try without any blas/lapack (Do BLAS=None LAPACK=None > ATLAS=None python setup.py ....) ? This now works in the sense that it doesn't hang. I still get a number of test failures, however (build + test logs attached). Thanks a lot for the help! --Hoyt ++++++++++++++++++++++++++++++++++++++++++++++++ + Hoyt Koepke + University of Washington Department of Statistics + http://www.stat.washington.edu/~hoytak/ + hoytak at gmail.com ++++++++++++++++++++++++++++++++++++++++++ From cournapeau at cslab.kecl.ntt.co.jp Thu Nov 20 03:45:21 2008 From: cournapeau at cslab.kecl.ntt.co.jp (David Cournapeau) Date: Thu, 20 Nov 2008 17:45:21 +0900 Subject: [Numpy-discussion] glibc memory corruption when running numpy.test() In-Reply-To: <4db580fd0811200026n103c6cbfp5e04354f560d4d1c@mail.gmail.com> References: <4db580fd0811192323o2f971c8as6ec253e79e3220a6@mail.gmail.com> <1227166428.31807.3.camel@bbc8> <4db580fd0811192341o7e89f630leaded5cf55357ee1@mail.gmail.com> <1227167535.31807.7.camel@bbc8> <4db580fd0811200016j4bcf9a94l65fd4b1a7ea0d84c@mail.gmail.com> <4db580fd0811200018m13e49588h3ccdd45586ad0c77@mail.gmail.com> <4db580fd0811200026n103c6cbfp5e04354f560d4d1c@mail.gmail.com> Message-ID: <1227170721.31807.21.camel@bbc8> On Thu, 2008-11-20 at 00:26 -0800, Hoyt Koepke wrote: > Hi, > > Sorry; my first message wasn't under 40 KB with the attachments, so > here's the same message but with the log files at > http://www.stat.washington.edu/~hoytak/logs.tar.bz2. > > > > Which ones ? > > Sorry; ATLAS = 3.9.4 and lapack=3.2. I'll give 3.8.2 a shot per your advice. Sorry, I meant which problems did you get when trying to build numpy with those ? Lapack 3.2 is really recent, and seems to use a new BLAS, which is likely not supported by ATLAS. But to be faire, that won't explain most failures you get. > > > You should not do that, it won't work as you would expect. It is a good > > rule to assume that you should never set the *FLAGS variable unless you > > really know what you are doing. > > Fair enough. 
In my case I was having some issues with 32 bit and 64 > bit mismatches (I think that fftw defaulted to 32 bit), so I set the > flags variables. I also wanted to get the extra few percent of > performance by using the tuning flags. I'll back up a bit now before > playing with them now, though. I honestly don't think those flags matter much in the case of numpy/scipy. In particular, using SSE and co automatically is simply impossible in numpy case, since the C code is very generic (non-aligned - non contiguous items) and the compiler has no way to know at compile time which cases are contiguous. FFTW support has been removed in recent scipy, so this won't be a problem anymore :) > This now works in the sense that it doesn't hang. I still get a > number of test failures, however (build + test logs attached). Those errors seem link to the flags you have been using. Some errors are really strange (4 vs 8 bytes types), but I don't see how it could be explained by a mismatch of 32 vs 64 bits machine code (to the best of my knowledge, you can't mix 32 and 64 bits machine code in one binary). Maybe a compiler bug when using -march flag. Please try building numpy wo BLAS/LAPACK and wo compiler flags first, to test that the bare configuration does work, and that the problems are not due to some bugs in your toolchain/OS/etc... The test suite should run without any failure in this case; then, we can work on the BLAS/LAPACK thing, cheers, David From hoytak at cs.ubc.ca Thu Nov 20 04:14:34 2008 From: hoytak at cs.ubc.ca (Hoyt Koepke) Date: Thu, 20 Nov 2008 01:14:34 -0800 Subject: [Numpy-discussion] glibc memory corruption when running numpy.test() In-Reply-To: <1227170721.31807.21.camel@bbc8> References: <4db580fd0811192323o2f971c8as6ec253e79e3220a6@mail.gmail.com> <1227166428.31807.3.camel@bbc8> <4db580fd0811192341o7e89f630leaded5cf55357ee1@mail.gmail.com> <1227167535.31807.7.camel@bbc8> <4db580fd0811200016j4bcf9a94l65fd4b1a7ea0d84c@mail.gmail.com> <4db580fd0811200018m13e49588h3ccdd45586ad0c77@mail.gmail.com> <4db580fd0811200026n103c6cbfp5e04354f560d4d1c@mail.gmail.com> <1227170721.31807.21.camel@bbc8> Message-ID: <4db580fd0811200114v23ddb348x25b8382366f0b4e6@mail.gmail.com> Hi, > I honestly don't think those flags matter much in the case of > numpy/scipy. In particular, using SSE and co automatically is simply > impossible in numpy case, since the C code is very generic (non-aligned > - non contiguous items) and the compiler has no way to know at compile > time which cases are contiguous. Good to know. I'll try to surpress my desire to optimize and not care about them :-). > Those errors seem link to the flags you have been using. Some errors > are really strange (4 vs 8 bytes types), but I don't see how it could be > explained by a mismatch of 32 vs 64 bits machine code (to the best of my > knowledge, you can't mix 32 and 64 bits machine code in one binary). > Maybe a compiler bug when using -march flag. > > Please try building numpy wo BLAS/LAPACK and wo compiler flags first, to > test that the bare configuration does work, and that the problems are > not due to some bugs in your toolchain/OS/etc... The test suite should > run without any failure in this case; then, we can work on the > BLAS/LAPACK thing, I believe the logs I attached (or rather linked to) don't involve atlas or lapack or any compiler flags. I agree that they are strange and I may have something weird floating around. 
It's getting late here, so I'll double-check everything in the morning
and may try to run gcc's test suite to verify the compiler isn't the
problem.

Thanks again!

--Hoyt

> cheers,
>
> David
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>

--
++++++++++++++++++++++++++++++++++++++++++++++++
+ Hoyt Koepke
+ University of Washington Department of Statistics
+ http://www.stat.washington.edu/~hoytak/
+ hoytak at gmail.com
++++++++++++++++++++++++++++++++++++++++++

From cournape at gmail.com Thu Nov 20 04:29:41 2008
From: cournape at gmail.com (David Cournapeau)
Date: Thu, 20 Nov 2008 18:29:41 +0900
Subject: [Numpy-discussion] glibc memory corruption when running numpy.test()
In-Reply-To: <4db580fd0811200114v23ddb348x25b8382366f0b4e6@mail.gmail.com>
References: <4db580fd0811192323o2f971c8as6ec253e79e3220a6@mail.gmail.com>
	<1227166428.31807.3.camel@bbc8>
	<4db580fd0811192341o7e89f630leaded5cf55357ee1@mail.gmail.com>
	<1227167535.31807.7.camel@bbc8>
	<4db580fd0811200016j4bcf9a94l65fd4b1a7ea0d84c@mail.gmail.com>
	<4db580fd0811200018m13e49588h3ccdd45586ad0c77@mail.gmail.com>
	<4db580fd0811200026n103c6cbfp5e04354f560d4d1c@mail.gmail.com>
	<1227170721.31807.21.camel@bbc8>
	<4db580fd0811200114v23ddb348x25b8382366f0b4e6@mail.gmail.com>
Message-ID: <5b8d13220811200129q2f48a338xc507d92a8ae1559e@mail.gmail.com>

On Thu, Nov 20, 2008 at 6:14 PM, Hoyt Koepke wrote:
>
> I believe the logs I attached (or rather linked to) don't involve
> atlas or lapack or any compiler flags.

Ah, yes, sorry, I missed the build.log one. The only thing which
surprises me a bit is the size of long double (I have never seen it to
be 16 bytes on Linux, but in theory, it should not matter as long as
the detected size is correct; I don't have a 64-bit machine handy ATM,
will check at home). I must say I don't have any more ideas on what
could cause this mess. Did you clean the install directory and the
build directory before building ?

David

From charlesr.harris at gmail.com Thu Nov 20 04:40:41 2008
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Thu, 20 Nov 2008 02:40:41 -0700
Subject: [Numpy-discussion] glibc memory corruption when running numpy.test()
In-Reply-To: <5b8d13220811200129q2f48a338xc507d92a8ae1559e@mail.gmail.com>
References: <4db580fd0811192323o2f971c8as6ec253e79e3220a6@mail.gmail.com>
	<1227166428.31807.3.camel@bbc8>
	<4db580fd0811192341o7e89f630leaded5cf55357ee1@mail.gmail.com>
	<1227167535.31807.7.camel@bbc8>
	<4db580fd0811200016j4bcf9a94l65fd4b1a7ea0d84c@mail.gmail.com>
	<4db580fd0811200018m13e49588h3ccdd45586ad0c77@mail.gmail.com>
	<4db580fd0811200026n103c6cbfp5e04354f560d4d1c@mail.gmail.com>
	<1227170721.31807.21.camel@bbc8>
	<4db580fd0811200114v23ddb348x25b8382366f0b4e6@mail.gmail.com>
	<5b8d13220811200129q2f48a338xc507d92a8ae1559e@mail.gmail.com>
Message-ID: 

On Thu, Nov 20, 2008 at 2:29 AM, David Cournapeau wrote:

> On Thu, Nov 20, 2008 at 6:14 PM, Hoyt Koepke wrote:
>
> >
> > I believe the logs I attached (or rather linked to) don't involve
> > atlas or lapack or any compiler flags.
>
> Ah, yes, sorry, I missed the build.log one. The only thing which
> surprises me a bit is the size of long double (I have never seen it to
> be 16 bytes on Linux, but in theory, it should not matter as long as

I believe that's normal on 64-bit machines - long doubles are padded out
to the natural word size. Thus the 80-bit extended format takes 64 + 32
bits of storage on 32-bit machines and 64 + 64 bits on 64-bit machines.
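It's easy enough to see from numpy itself -- untested here, but these
attributes are standard, so just as a sketch:

import numpy as np

# The itemsize grows with the word size, but the precision stays that
# of the 80-bit x87 extended format either way.
print np.dtype(np.longdouble).itemsize   # 12 on 32-bit Linux, 16 on 64-bit
print np.finfo(np.longdouble).nmant      # 63 on x86/x86_64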
Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From meine at informatik.uni-hamburg.de Thu Nov 20 05:11:14 2008 From: meine at informatik.uni-hamburg.de (Hans Meine) Date: Thu, 20 Nov 2008 11:11:14 +0100 Subject: [Numpy-discussion] linalg.norm missing an 'axis' kwarg?! Message-ID: <200811201111.15930.meine@informatik.uni-hamburg.de> Hi, I have a 2D matrix comprising a sequence of vectors, and I want to compute the norm of each vector. np.linalg.norm seems to be the best bet, but it does not support axis. Wouldn't this be a nice feature? Greetings, Hans From ezindy at gmail.com Thu Nov 20 05:15:21 2008 From: ezindy at gmail.com (Egor Zindy) Date: Thu, 20 Nov 2008 10:15:21 +0000 Subject: [Numpy-discussion] ANN: numpy.i - added managed deallocation to ARGOUTVIEW_ARRAY1 (ARGOUTVIEWM_ARRAY1) In-Reply-To: <49231EB3.8060802@noaa.gov> References: <491F8F4A.30009@gmail.com> <49231156.1060209@gmail.com> <49231EB3.8060802@noaa.gov> Message-ID: <492538B9.10202@gmail.com> Christopher Barker wrote: > thanks! good stuff. > It would be great if you could put that in the numpy (scipy?) wiki > though, so more folks will find it. > > -Chris > Hello Chris, no problems, you are absolutely right, this is where the documents will have to eventually end up for maximum visibility. There is already a bit of numpy + SWIG in the cookbook, but that could well have been written before numpy.i http://www.scipy.org/Cookbook/SWIG_and_NumPy For added exposure, there is also the numpy.i document written by Bill Spotz. That could do with more examples (separate document maybe). The lack of examples is what prompted me to write my wiki in the first place! I've also updated my other document with a more credible ARGOUTVIEW example. The part about numpy+SWIG+MinGW is now dwarfed by the body of numpy.i examples (not necessary a good thing). Plus all the examples are ARRAY1, people have also asked for some ARRAY2 / ARRAY3 examples and FORTRAN arrays (which I don't know anything about I'm afraid). Here are the two wikis so far: http://code.google.com/p/ezwidgets/wiki/NumpySWIGMinGW http://code.google.com/p/ezwidgets/wiki/NumpyManagedMemory Still looking for a good name for my "argout arrays with managed deallocation"... After writing my ARGOUTVIEW example, I am not even sure my addition to numpy.i should be called a "view" anymore. How does ARGOUTMAD_ARRAY sound? (for Managed Allocation / Deallocation) :-) Regards, Egor From meine at informatik.uni-hamburg.de Thu Nov 20 05:54:52 2008 From: meine at informatik.uni-hamburg.de (Hans Meine) Date: Thu, 20 Nov 2008 11:54:52 +0100 Subject: [Numpy-discussion] linalg.norm missing an 'axis' kwarg?! In-Reply-To: <200811201111.15930.meine@informatik.uni-hamburg.de> References: <200811201111.15930.meine@informatik.uni-hamburg.de> Message-ID: <200811201154.53007.meine@informatik.uni-hamburg.de> On Thursday 20 November 2008 11:11:14 Hans Meine wrote: > I have a 2D matrix comprising a sequence of vectors, and I want to compute > the norm of each vector. np.linalg.norm seems to be the best bet, but it > does not support axis. Wouldn't this be a nice feature? Here's a basic implementation. docstring + tests not updated yet, also I wonder whether axis should be the first argument, but that could create compatibility problems. Ciao, Hans -------------- next part -------------- A non-text attachment was scrubbed... 
Name: numpy_norm_axis.diff Type: text/x-patch Size: 1597 bytes Desc: not available URL: From rob at roryoung.co.uk Thu Nov 20 07:28:45 2008 From: rob at roryoung.co.uk (Robert Young) Date: Thu, 20 Nov 2008 12:28:45 +0000 Subject: [Numpy-discussion] Reduced row echelon form In-Reply-To: <9457e7c80811190214wac6b9f2k13f763992b874b6c@mail.gmail.com> References: <32a1c320811181221j1320679y35a0b587d55002c9@mail.gmail.com> <9457e7c80811190214wac6b9f2k13f763992b874b6c@mail.gmail.com> Message-ID: <32a1c320811200428h39e06d4dg264ea9b3475dc3ee@mail.gmail.com> Excellent, thank you all for your input. I don't actually have a specific problem that I need it for I just wanted to be able to work through some book examples. I'll take a look at Sage and Sympy. Thanks Rob On Wed, Nov 19, 2008 at 10:14 AM, St?fan van der Walt wrote: > Hi Robert, > > 2008/11/18 Robert Young : > > Is there a method in NumPy that reduces a matrix to it's reduced row > echelon > > form? I'm brand new to both NumPy and linear algebra, and I'm not quite > sure > > where to look. > > I use the Sympy package. It is small, easy to install, runs on pure > Python, and gets the job done: > > >>> x = np.random.random((3,3)) > >>> import sympy > >>> sympy.Matrix(x).rref() > ([1, 0, 0] > [0, 1, 0] > [0, 0, 1], [0, 1, 2]) > > If you are interested, I can also provide you with a version that runs > under pure NumPy, using the LU-decomposition. > > Cheers > St?fan > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Thu Nov 20 09:15:47 2008 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 20 Nov 2008 09:15:47 -0500 Subject: [Numpy-discussion] question about the documentation of linalg.solve In-Reply-To: <6a17e9ee0811192158v7e39777bxa9508a88f4941a3e@mail.gmail.com> References: <49242D59.9080007@american.edu> <4924688F.3010408@american.edu> <1227133224.2923.4.camel@localhost.localdomain> <6a17e9ee0811192158v7e39777bxa9508a88f4941a3e@mail.gmail.com> Message-ID: <49257113.3000807@american.edu> On 11/20/2008 12:58 AM Scott Sinclair apparently wrote: > A Notes section giving an overview of the algorithm has been added to > the docstring http://docs.scipy.org/numpy/docs/numpy.linalg.linalg.solve/. You beat me to it. (I was awaiting editing privileges, which I just received.) Thanks! Alan Isaac From aisaac at american.edu Thu Nov 20 09:19:51 2008 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 20 Nov 2008 09:19:51 -0500 Subject: [Numpy-discussion] linalg.norm missing an 'axis' kwarg?! In-Reply-To: <200811201111.15930.meine@informatik.uni-hamburg.de> References: <200811201111.15930.meine@informatik.uni-hamburg.de> Message-ID: <49257207.60903@american.edu> On 11/20/2008 5:11 AM Hans Meine apparently wrote: > I have a 2D matrix comprising a sequence of vectors, and I want to compute the > norm of each vector. np.linalg.norm seems to be the best bet, but it does not > support axis. Wouldn't this be a nice feature? 
Of possible use until then:
http://docs.scipy.org/doc/numpy/reference/generated/numpy.apply_along_axis.html

Alan Isaac

From pgmdevlist at gmail.com Thu Nov 20 09:24:07 2008
From: pgmdevlist at gmail.com (Pierre GM)
Date: Thu, 20 Nov 2008 09:24:07 -0500
Subject: [Numpy-discussion] Priority rules between 0d array and np.scalar
Message-ID: <30DF3BCD-D9FB-4316-A90F-114280D7B037@gmail.com>

All,
That time of the month again: could anybody (and I'm thinking about
you in particular, Travis O.) explain to me what the priority rules
are between a 0d ndarray and a np.scalar ?
OK, I understand there are no real rules. However, the bug I was
describing in a previous thread
(www.mail-archive.com/numpy-discussion at scipy.org/msg13235.html) is
still around:
When multiplying/adding a np.scalar and ma.masked, the result varies
depending on the order of the arguments as well as on their dtype.
(Keep in mind that ma.masked is a 0d ndarray subclass of value 0 and
dtype np.float64, with a __array_priority__ of 15).

ma.masked * np.float32(1) => ma.masked
np.float32(1) * ma.masked => ma.masked
ma.masked * np.float64(1) => ma.masked
np.float64(1) * ma.masked => 0

My understanding is that for the first 2 operations, ma.masked takes
over because it has the higher dtype. In that case, we use the rules
defined in MaskedArray for multiplication (either __mul__ or
__array_wrap__).
For the 3rd and 4th operations, the two arguments have the same dtype
and it looks like we're switching to a different priority rule. I
would have expected ma.masked to take over in both cases, because a
MaskedArray has a higher __array_priority__ than a ndarray or a
np.scalar. That's not the case: the fact that ma.masked is a subclass
of ndarray is not recognized...
I hope I didn't lose anybody in my description. A ticket has recently
been filed about the same issue:
http://scipy.org/scipy/numpy/ticket/826
Looking forward to hearing from y'all
P.

From dfranci at seas.upenn.edu Thu Nov 20 09:33:41 2008
From: dfranci at seas.upenn.edu (Frank Lagor)
Date: Thu, 20 Nov 2008 09:33:41 -0500
Subject: [Numpy-discussion] unpickle
In-Reply-To: <3d375d730811192040j68b8279dr352763572257bd8f@mail.gmail.com>
References: <9fddf64a0811192036p6ead5aado93b0d626dc56a097@mail.gmail.com>
	<3d375d730811192040j68b8279dr352763572257bd8f@mail.gmail.com>
Message-ID: <9fddf64a0811200633x5aff8237v97c741cac97f7e1f@mail.gmail.com>

> This, and your previous question, are mostly off-topic for
> numpy-discussion. You may want to ask such questions in the future on
> more general Python mailing lists.
>
> http://www.python.org/community/lists/
>
> --
> Robert Kern
>

Yes of course. Sorry for the spam. The numpy list is just so helpful :)
No problem -- I'll use the python list for this stuff.

-Frank

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pgmdevlist at gmail.com Thu Nov 20 09:44:54 2008
From: pgmdevlist at gmail.com (Pierre GM)
Date: Thu, 20 Nov 2008 09:44:54 -0500
Subject: [Numpy-discussion] Numpy 1.2.2 ?
Message-ID: <72F835D2-9BA2-4216-BCF1-34764E35E491@gmail.com>

All,
I've recently introduced some little fixes in the SVN version of
numpy.ma.core.
Is there any plan for a 1.2.2 release, or will we directly switch to
1.3.0 ? Do I need to backport these fixes to 1.2.x ?
Thx a lot in advance
P.
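PS: For anyone wondering what kind of behavior numpy.ma is expected to
guarantee in that corner of ma.core, a quick sketch (the values are
made up for illustration; this is not the actual test case):

import numpy.ma as ma

a = ma.array([1., 2., 3.], mask=[False, True, False])
# masked entries should survive arithmetic untouched
print a + 1          # [2.0 -- 4.0]
print (a + 1).mask   # [False  True False]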
From jh at physics.ucf.edu Thu Nov 20 09:55:40 2008 From: jh at physics.ucf.edu (jh at physics.ucf.edu) Date: Thu, 20 Nov 2008 15:55:40 +0100 Subject: [Numpy-discussion] question about the documentation of linalg.solve In-Reply-To: (numpy-discussion-request@scipy.org) References: Message-ID: On Thu, Nov 20, 2008 at 07:58:52AM +0200, Scott Sinclair wrote: > A Notes section giving an overview of the algorithm has been added to > the docstring http://docs.scipy.org/numpy/docs/numpy.linalg.linalg.solve/. Doc goals: We would like each function and class to have docs that compare favorably to those of all our competitors, and some (notably Matlab) have very good docs. For our effort, this means (at the very least): - readable by a user one level below the likely user of the item (i.e., they can read the doc and at least learn the type of use it might be for, so that in the future they know where to go) - complete with regard to both inputs/outputs and methodology - referenced to the literature, particularly in cases where the methods employed impose limitations for certain cases - both simple examples and some that show more complex cases, particularly if the item is designed to work with other routines There was a big push over the summer, and a large number of people pitched in, plowing through the list of undocumented functions and writing. However, many of the functions that remain are not amenable to this approach because they require specialist attention to document methodology that not everyone is familiar with. This will be a dominant issue when we start documenting scipy. So (everyone), if you identify a routine in your specialty that requires a doc, please either hop over to docs.scipy.org and start writing, or post a message on scipy-dev at scipy.org asking to team up with a writer. For convenience, the doc wiki contains links to the sources so you can easily look at the functions you are working on. Even simply adding something in the Notes section about the method (as was done in this case), putting in a a reference, or giving a non-trivial example will provide material for other writers to flesh out a full doc for the routine. Thanks everyone for your help! --jh-- From jdh2358 at gmail.com Thu Nov 20 11:40:41 2008 From: jdh2358 at gmail.com (John Hunter) Date: Thu, 20 Nov 2008 10:40:41 -0600 Subject: [Numpy-discussion] contiguous regions Message-ID: <88e473830811200840j745146d4l14033e23eb779fc1@mail.gmail.com> I frequently want to break a 1D array into regions above and below some threshold, identifying all such subslices where the contiguous elements are above the threshold. I have two related implementations below to illustrate what I am after. The first "crossings" is rather naive in that it doesn't handle the case where an element is equal to the threshold (assuming zero for the threshold in the examples below). The second is correct (I think) but is pure python. Has anyone got a nifty numpy solution for this? import numpy as np import matplotlib.pyplot as plt t = np.arange(0.0123, 2, 0.05) s = np.sin(2*np.pi*t) def crossings(x): """ return a list of (above, ind0, ind1). 
ind0 and ind1 are regions such that the slice x[ind0:ind1]>0 when above is True and x[ind0:ind1]<0 when above is False """ N = len(x) crossings = x[:-1]*x[1:]<0 ind = np.nonzero(crossings)[0]+1 lastind = 0 data = [] for i in range(len(ind)): above = x[lastind]>0 thisind = ind[i] data.append((above, lastind, thisind)) lastind = thisind # put the one past the end index if not already in if len(data) and data[-1]!=N-1: data.append((not data[-1][0], thisind, N)) return data def contiguous_regions(mask): """ return a list of (ind0, ind1) such that mask[ind0:ind1].all() is True and we cover all such regions """ in_region = None boundaries = [] for i, val in enumerate(mask): if in_region is None and val: in_region = i elif in_region is not None and not val: boundaries.append((in_region, i)) in_region = None if in_region is not None: boundaries.append((in_region, i+1)) return boundaries fig = plt.figure() ax = fig.add_subplot(111) ax.set_title('using crossings') ax.plot(t, s, 'o') ax.axhline(0) for above, ind0, ind1 in crossings(s): if above: color='green' else: color = 'red' tslice = t[ind0:ind1] ax.axvspan(tslice[0], tslice[-1], facecolor=color, alpha=0.5) fig = plt.figure() ax = fig.add_subplot(111) ax.set_title('using contiguous regions') ax.plot(t, s, 'o') ax.axhline(0) for ind0, ind1 in contiguous_regions(s>0): tslice = t[ind0:ind1] ax.axvspan(tslice[0], tslice[-1], facecolor='green', alpha=0.5) for ind0, ind1 in contiguous_regions(s<0): tslice = t[ind0:ind1] ax.axvspan(tslice[0], tslice[-1], facecolor='red', alpha=0.5) plt.show() From gregor.thalhammer at gmail.com Thu Nov 20 12:24:18 2008 From: gregor.thalhammer at gmail.com (Gregor Thalhammer) Date: Thu, 20 Nov 2008 18:24:18 +0100 Subject: [Numpy-discussion] contiguous regions In-Reply-To: <88e473830811200840j745146d4l14033e23eb779fc1@mail.gmail.com> References: <88e473830811200840j745146d4l14033e23eb779fc1@mail.gmail.com> Message-ID: <49259D42.4030702@googlemail.com> John Hunter schrieb: > I frequently want to break a 1D array into regions above and below > some threshold, identifying all such subslices where the contiguous > elements are above the threshold. I have two related implementations > below to illustrate what I am after. The first "crossings" is rather > naive in that it doesn't handle the case where an element is equal to > the threshold (assuming zero for the threshold in the examples below). > The second is correct (I think) but is pure python. Has anyone got a > nifty numpy solution for this? 
>
> import numpy as np
> import matplotlib.pyplot as plt
> t = np.arange(0.0123, 2, 0.05)
> s = np.sin(2*np.pi*t)
>

here's my proposal, needs some polishing:

mask = (s > 0).astype(np.int8)
d = np.diff(mask)
idx, = d.nonzero()
# d[i] != 0 marks a change of sign between positions i and i+1, so the
# region boundaries sit one past the nonzero indices
idx = idx + 1
# now handle the cases where s is above the threshold at the beginning
# or at the end of the sequence
if mask[0]:
    idx = np.r_[0, idx]
if mask[-1]:
    idx = np.r_[idx, len(s)]
idx.shape = (-1, 2)   # each row is now a (start, stop) slice pair

Gregor

From hanni.ali at gmail.com Thu Nov 20 12:54:54 2008
From: hanni.ali at gmail.com (Hanni Ali)
Date: Thu, 20 Nov 2008 17:54:54 +0000
Subject: [Numpy-discussion] can't build numpy 1.2.0 under python 2.6 (windows-amd64) using VS9
In-Reply-To: <48F13526.9040300@gmail.com>
References: <3633FE192D10364D889D84EB77185F3E73AE4D@rxrex1.ssaris.com>
	<200810091707.38689.lists_ravi@lavabit.com>
	<48EEFEF2.9070208@ar.media.kyoto-u.ac.jp>
	<200810101000.48555.lists_ravi@lavabit.com>
	<48EF5F99.3010403@ar.media.kyoto-u.ac.jp>
	<48EF6BF0.9080708@gmail.com>
	<48EF6F2C.6040509@ar.media.kyoto-u.ac.jp>
	<48EF8B76.7010200@gmail.com>
	<48F097BA.1080100@ar.media.kyoto-u.ac.jp>
	<48F13526.9040300@gmail.com>
Message-ID: <789d27b10811200954u35f7a312lf4c0937b8a5b9dee@mail.gmail.com>

Hi All,

I have reached the point where I really need to get some sort of
optimised/accelerated BLAS/LAPACK for windows 64, so I have been trying a
few different things out to see whether I can get anything usable. Today I
stumbled across this:

http://icl.cs.utk.edu/lapack-for-windows/index.html

Has anyone used this before? I plan on seeing where it takes me in the
morning, so I will report back if I get it working with numpy.

Regards,

Hanni

2008/10/12 Michael Abshoff 

> David Cournapeau wrote:
>
> > Michael Abshoff wrote:
>
> Hi David,
>
> >> Sure, but there isn't even a 32 bit gcc out there that can produce 64
> >> bit PE binaries (aside from the MinGW fork that AFAIK does not work
> >> particularly well and allegedly has issues with the cleanliness of some
> >> of the code, which is allegedly the reason that the official MinGW people
> >> will not touch the code base).
> >
> > The biggest problem is that officially, there is still no gcc 4 release
> > for mingw. I saw a gcc 4 section in cygwin, though, so maybe it is about
> > to be released. There is no support at all for 64 bits PE in the 3 series.
>
> Yes, you are correct and I was wrong. I just checked out the mingw-64
> project and there has been a lot of activity the last couple of months,
> including a patch to build pthread-win32 in 64 bit mode.
>
> > I think binutils officially support 64 bits PE (I can build a linux
> > hosted binutils for 64 bits PE with x86_64-pc-mingw32 as a target, and
> > it seems to work: disassembling and co). gcc 4 can work, too (you can
> > build a bootstrap C compiler which targets windows 64 bits IIRC). The
> > biggest problem AFAICS is the runtime (mingw64, which is indeed legally
> > murky).
>
> I would really like to find the actual reason *why* the legal status of
> the 64 bit MinGW port is murky (to my knowledge it has to do with taking
> code from the MS Platform toolkit - but that is conjecture), so I guess
> I will do the obvious thing and ask on the MinGW list :)
>
> >> Ok, that is a concern I usually do not have since I tend to build my own
> >> Python :).
> >
> > I would say that if you can build python by yourself on windows, you can
> > certainly build numpy by yourself :) It took me quite a time to be able
> > to build python on windows by myself from scratch.
>
> Sure, I do see your point.
> > Accidentally someone posted about > > http://debian-interix.net/ > > on the sage-windows list today. It offers a gcc 4.2 toolchain and AFAIK > there is at least a patch set for ATLAS to make it work on Interix. > > > cheers, > > > > David > > Cheers, > > Michael > > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at scipy.org > > http://projects.scipy.org/mailman/listinfo/numpy-discussion > > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rmay31 at gmail.com Thu Nov 20 14:22:38 2008 From: rmay31 at gmail.com (Ryan May) Date: Thu, 20 Nov 2008 13:22:38 -0600 Subject: [Numpy-discussion] numpy.loadtxt requires seek()? Message-ID: <4925B8FE.7080708@gmail.com> Hi, Does anyone know why numpy.loadtxt(), in checking the validity of a filehandle, checks for the seek() method, which appears to have no bearing on whether an object will work? I'm trying to use loadtxt() directly with the file-like object returned by urllib2.urlopen(). If I change the check for 'seek' to one for 'readline', using the urlopen object works with a hitch. As far as I can tell, all the "filehandle" object needs to meet is: 1) Have a readline() method so that loadtxt can skip the first N lines and read the first line of data 2) Be compatible with itertools.chain() (should be any iterable) At a minimum, I'd ask to change the check for 'seek' to one for 'readline'. On a bit deeper thought, it would seem that loadtxt would work with any iterable that returns individual lines. I'd like then to change the calls to readline() to just getting the next object from the iterable (iter.next() ?) and change the check for a file-like object to just a check for an iterable. In fact, we could use the iter() builtin to convert whatever got passed. That would give automatically a next() method and would raise a TypeError if it's incompatible. Thoughts? I'm willing to write up the patch for either . Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma From stefan at sun.ac.za Thu Nov 20 14:41:25 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Thu, 20 Nov 2008 21:41:25 +0200 Subject: [Numpy-discussion] numpy.loadtxt requires seek()? In-Reply-To: <4925B8FE.7080708@gmail.com> References: <4925B8FE.7080708@gmail.com> Message-ID: <9457e7c80811201141h26d9fd45j23e0b93f077111e3@mail.gmail.com> 2008/11/20 Ryan May : > Does anyone know why numpy.loadtxt(), in checking the validity of a > filehandle, checks for the seek() method, which appears to have no > bearing on whether an object will work? I think this is simply a naive mistake on my part. I was looking for a way to identify files; your patch would be welcome. Cheers St?fan From meine at informatik.uni-hamburg.de Thu Nov 20 13:45:47 2008 From: meine at informatik.uni-hamburg.de (Hans Meine) Date: Thu, 20 Nov 2008 19:45:47 +0100 Subject: [Numpy-discussion] linalg.norm missing an 'axis' kwarg?! 
In-Reply-To: <49257207.60903@american.edu> References: <200811201111.15930.meine@informatik.uni-hamburg.de> <49257207.60903@american.edu> Message-ID: <200811201945.50343.meine@informatik.uni-hamburg.de> On Donnerstag 20 November 2008, Alan G Isaac wrote: > On 11/20/2008 5:11 AM Hans Meine apparently wrote: > > I have a 2D matrix comprising a sequence of vectors, and I want to > > compute the norm of each vector. np.linalg.norm seems to be the best > > bet, but it does not support axis. Wouldn't this be a nice feature? > > Of possible use until then: > http://docs.scipy.org/doc/numpy/reference/generated/numpy.apply_along_axis.html Thanks for the hint, yes as you see I have already patched norm() in the meantime. BTW: Wow, this is an exceptionally nice doc page, sphinx + scipy's doc system really rocks! :-) Ciao, / / .o. /--/ ..o / / ANS ooo -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 197 bytes Desc: This is a digitally signed message part. URL: From rmay31 at gmail.com Thu Nov 20 14:58:33 2008 From: rmay31 at gmail.com (Ryan May) Date: Thu, 20 Nov 2008 13:58:33 -0600 Subject: [Numpy-discussion] numpy.loadtxt requires seek()? In-Reply-To: <9457e7c80811201141h26d9fd45j23e0b93f077111e3@mail.gmail.com> References: <4925B8FE.7080708@gmail.com> <9457e7c80811201141h26d9fd45j23e0b93f077111e3@mail.gmail.com> Message-ID: <4925C169.4090702@gmail.com> St?fan van der Walt wrote: > 2008/11/20 Ryan May : >> Does anyone know why numpy.loadtxt(), in checking the validity of a >> filehandle, checks for the seek() method, which appears to have no >> bearing on whether an object will work? > > I think this is simply a naive mistake on my part. I was looking for > a way to identify files; your patch would be welcome. I've attached a simple patch that changes the check for seek() to a check for readline(). I'll punt on my idea of just using iterators, since that seems like slightly greater complexity for no gain. (I'm not sure how many people end up with data in a list of strings and wish they could pass that to loadtxt). While you're at it, would you commit my patch to add support for bzipped files as well (attached)? Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: loadtxt_bzip2_support.diff URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
Name: loadtxt_filecheck_readline.diff URL: From doutriaux1 at llnl.gov Thu Nov 20 18:26:27 2008 From: doutriaux1 at llnl.gov (=?UTF-8?Q?Charles_=D8=B3=D9=85=D9=8A=D8=B1_Doutriaux?=) Date: Thu, 20 Nov 2008 15:26:27 -0800 Subject: [Numpy-discussion] numpy.ma.allclose bug Message-ID: The following shows a bug in numpy.ma.allclose: import numpy import numpy.ma a = numpy.arange(100) b=numpy.reshape(a,(10,10)) print b c=numpy.ma.masked_greater(b,98) print c.count() numpy.ma.allclose(b,1) numpy.ma.allclose(c,1) Since c is masked it fails I think it should pass returning either False or True, not sure what it should return in case all the elements are equal to 1 (except of course the masked one) Note that the following works: numpy.ma.allclose(c,numpy.ma.ones(c.shape)) So I'm good for now Thanks, C> From hanni.ali at gmail.com Fri Nov 21 04:37:35 2008 From: hanni.ali at gmail.com (Hanni Ali) Date: Fri, 21 Nov 2008 09:37:35 +0000 Subject: [Numpy-discussion] Building numpy on Windows 64-bit BLAS/LAPACK Message-ID: <789d27b10811210137x5fc39e01ja1aa5d8fd7c61b45@mail.gmail.com> Hi, I have spent some time trying to use different methods to build numpyon Windows 64bit with a version of BLAS/LAPACK other than the inbuilt one (no slur on the inbuilt one it is excellent, I am simply attempting to see if there is any alternative with better performance). The most recent i have tried is from here: http://icl.cs.utk.edu/lapack-for-windows/index.html I used the installer and set the site.cfg file up in the following manner: [blas] library_dirs = C:\Program Files (x86)\University Of Tennessee\LAPACK 3.1.1\lib\x64 blas_libs = BLAS include_dirs = C:\Program Files (x86)\University Of Tennessee\LAPACK 3.1.1\src [lapack] language = f77 lapack_libs = LAPACK library_dirs = C:\Program Files (x86)\University Of Tennessee\LAPACK 3.1.1\lib\x64 include_dirs = C:\Program Files (x86)\University Of Tennessee\LAPACK 3.1.1\src I have installed both intel's and pgi's fortran compilers in an attempt to get numpy to complete a compilation using these libraries, but have not been successful, both trip up at this point: lapack_litemodule.obj : warning LNK4197: export 'initlapack_lite' specified mult iple times; using first specification Creating library build\temp.win-amd64-2.6\Release\numpy\linalg\ lapack_lite.li b and object build\temp.win-amd64-2.6\Release\numpy\linalg\lapack_lite.exp lapack_litemodule.obj : error LNK2019: unresolved external symbol dgeev_ referen ced in function lapack_lite_dgeev lapack_litemodule.obj : error LNK2019: unresolved external symbol dsyevd_ refere nced in function lapack_lite_dsyevd lapack_litemodule.obj : error LNK2019: unresolved external symbol zheevd_ refere nced in function lapack_lite_zheevd lapack_litemodule.obj : error LNK2019: unresolved external symbol dgelsd_ refere nced in function lapack_lite_dgelsd lapack_litemodule.obj : error LNK2019: unresolved external symbol dgesv_ referen ced in function lapack_lite_dgesv lapack_litemodule.obj : error LNK2019: unresolved external symbol dgesdd_ refere nced in function lapack_lite_dgesdd lapack_litemodule.obj : error LNK2019: unresolved external symbol dgetrf_ refere nced in function lapack_lite_dgetrf lapack_litemodule.obj : error LNK2019: unresolved external symbol dpotrf_ refere nced in function lapack_lite_dpotrf lapack_litemodule.obj : error LNK2019: unresolved external symbol dgeqrf_ refere nced in function lapack_lite_dgeqrf lapack_litemodule.obj : error LNK2019: unresolved external symbol dorgqr_ refere nced in function lapack_lite_dorgqr 
lapack_litemodule.obj : error LNK2019: unresolved external symbol zgeev_ referen ced in function lapack_lite_zgeev lapack_litemodule.obj : error LNK2019: unresolved external symbol zgelsd_ refere nced in function lapack_lite_zgelsd lapack_litemodule.obj : error LNK2019: unresolved external symbol zgesv_ referen ced in function lapack_lite_zgesv lapack_litemodule.obj : error LNK2019: unresolved external symbol zgesdd_ refere nced in function lapack_lite_zgesdd lapack_litemodule.obj : error LNK2019: unresolved external symbol zgetrf_ refere nced in function lapack_lite_zgetrf lapack_litemodule.obj : error LNK2019: unresolved external symbol zpotrf_ refere nced in function lapack_lite_zpotrf lapack_litemodule.obj : error LNK2019: unresolved external symbol zgeqrf_ refere nced in function lapack_lite_zgeqrf lapack_litemodule.obj : error LNK2019: unresolved external symbol zungqr_ refere nced in function lapack_lite_zungqr build\lib.win-amd64-2.6\numpy\linalg\lapack_lite.pyd : fatal error LNK1120: 18 u nresolved externals error: Command "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\amd64\ link.exe /DLL /nologo /INCREMENTAL:NO /LIBPATH:"C:\Program Files (x86)\Universit y Of Tennessee\LAPACK 3.1.1\lib\x64" /LIBPATH:C:\Python26\libs /LIBPATH:C:\Pytho n26\PCbuild\amd64 LAPACKd.lib BLASd.lib /EXPORT:initlapack_lite build\temp.win-a md64-2.6\Release\numpy\linalg\lapack_litemodule.obj /OUT:build\lib.win-amd64-2.6 \numpy\linalg\lapack_lite.pyd /IMPLIB:build\temp.win-amd64-2.6\Release\numpy\lin alg\lapack_lite.lib /MANIFESTFILE:build\temp.win-amd64-2.6\Release\numpy\linalg\ lapack_lite.pyd.manifest" failed with exit status 1120 I appreciate any help or ideas anyone has, Cheers, Hanni -------------- next part -------------- An HTML attachment was scrubbed... URL: From barthelemy at crans.org Fri Nov 21 04:57:23 2008 From: barthelemy at crans.org (=?ISO-8859-1?Q?S=E9bastien_Barth=E9lemy?=) Date: Fri, 21 Nov 2008 10:57:23 +0100 Subject: [Numpy-discussion] working on multiple matrices of the same shape Message-ID: <78f7ab620811210157l15e14c76r85c7b23a6a010a89@mail.gmail.com> Hello, I would like to port a matlab library which provides functions for rigid body mechanics such as operations on homogeneous matrices (in SE(3)), twists (in se(3)) and so. In matlab, the library worked on 3d matrices: n homoneous matrices were stacked along the 3d dimension. This speeded up computations on multiple matrices, at the price that native matrix opertators such H1 * H2 did not work (it is not defined for 3d matrices). In this spirit, in numpy a set of rotation matrices could be built in the following way: def rotx(theta): """ SE(3) matrices corresponding to a rotation around x-axis. Theta is a 1-d array """ costheta = np.cos(theta) sintheta = np.sin(theta) H = np.zeros((theta.size,4,4)) H[:,0,0] = 1 H[:,3,3] = 1 H[:,1,1] = costheta H[:,2,2] = costheta H[:,2,1] = sintheta H[:,1,2] = sintheta return H I'm now seeking advices regarding an implementation with numpy (it's my first time with numpy). - Is there any difference between a 3d array and a 1-d array of 2-d arrays (seems not) - would you use a 3d-array or a list of 2d arrays or anything else to store these matrices ? - Is there a way to work easily on multiple matrices of the same shape ? (in single-instruction/multiple-data spirit) for instance if A.shape==(3,2,4) and B.shape=(3,4,1), what is the best way to compute C for i = (0,1,2) C[i,:,:] = np.dot(A[i,:,:],B[i,:,:]) - is it a good idea/not to hard to subclass ndarray with a HomogenousMatrix class ? 
(one could then redefine inv() for instance) - Is my implementation of rotx efficient ? Thank you for any help, and please feel free to send me back reading anything I missed. regards -- S?bastien Barth?lemy From charlesr.harris at gmail.com Fri Nov 21 11:20:36 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 21 Nov 2008 09:20:36 -0700 Subject: [Numpy-discussion] working on multiple matrices of the same shape In-Reply-To: <78f7ab620811210157l15e14c76r85c7b23a6a010a89@mail.gmail.com> References: <78f7ab620811210157l15e14c76r85c7b23a6a010a89@mail.gmail.com> Message-ID: On Fri, Nov 21, 2008 at 2:57 AM, S?bastien Barth?lemy wrote: > Hello, > > I would like to port a matlab library which provides functions for > rigid body mechanics such as operations on homogeneous matrices (in > SE(3)), twists (in se(3)) and so. > > In matlab, the library worked on 3d matrices: n homoneous matrices > were stacked along the 3d dimension. This speeded up computations on > multiple matrices, at the price that native matrix opertators such H1 > * H2 did not work (it is not defined for 3d matrices). > > In this spirit, in numpy a set of rotation matrices could be built in > the following way: > > def rotx(theta): > """ > SE(3) matrices corresponding to a rotation around x-axis. Theta is > a 1-d array > """ > costheta = np.cos(theta) > sintheta = np.sin(theta) > H = np.zeros((theta.size,4,4)) > H[:,0,0] = 1 > H[:,3,3] = 1 > H[:,1,1] = costheta > H[:,2,2] = costheta > H[:,2,1] = sintheta > H[:,1,2] = sintheta > return H > > I'm now seeking advices regarding an implementation with numpy (it's > my first time with numpy). > > - Is there any difference between a 3d array and a 1-d array of 2-d > arrays (seems not) > > - would you use a 3d-array or a list of 2d arrays or anything else to > store these matrices ? > > - Is there a way to work easily on multiple matrices of the same shape > ? (in single-instruction/multiple-data spirit) > for instance if A.shape==(3,2,4) and B.shape=(3,4,1), what is the > best way to compute C > for i = (0,1,2) > C[i,:,:] = np.dot(A[i,:,:],B[i,:,:]) > There is work going on along these lines called generalized ufuncs, but it isn't there yet. The easiest current workaround is to use lists of matrices or arrays. Or you can get clever and do the matrix multiplications yourself using the current numpy operators, i.e. C = (A[:,:,:,newaxis]*B[:,newaxis,:,:]).sum(axis=-2) But this uses more memory than the list of matrices approach. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgmdevlist at gmail.com Fri Nov 21 15:51:44 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Fri, 21 Nov 2008 15:51:44 -0500 Subject: [Numpy-discussion] numpy.ma.allclose bug In-Reply-To: References: Message-ID: <9876C1F0-8C5D-4A47-89A8-F47C43F8CBD6@gmail.com> Charles, That should be fixed in r6087. In your example, the last statement outputs False instead of raising an exception. Note the deprecation warning: instead of fill_value, you want to use masked_equal to decide whether missing values should be considered True or False. Let me know how it goes. P. On Nov 20, 2008, at 6:26 PM, Charles ???? 
Doutriaux wrote: > The following shows a bug in numpy.ma.allclose: > > import numpy > import numpy.ma > > a = numpy.arange(100) > b=numpy.reshape(a,(10,10)) > print b > c=numpy.ma.masked_greater(b,98) > print c.count() > numpy.ma.allclose(b,1) > numpy.ma.allclose(c,1) > > > Since c is masked it fails > > I think it should pass returning either False or True, not sure what > it should return in case all the elements are equal to 1 (except of > course the masked one) > > Note that the following works: > > numpy.ma.allclose(c,numpy.ma.ones(c.shape)) > > So I'm good for now > > > Thanks, > > C> > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion From simpson at math.toronto.edu Fri Nov 21 16:35:33 2008 From: simpson at math.toronto.edu (Gideon Simpson) Date: Fri, 21 Nov 2008 16:35:33 -0500 Subject: [Numpy-discussion] MATLAB ASCII format Message-ID: <6EE37430-1440-452A-8E66-1573A9A2D67C@math.toronto.edu> Is there (or should there be) a routine for reading and writing numpy arrays and matrices in MATLAB ASCII m-file format? -gideon From nadavh at visionsense.com Sat Nov 22 00:19:27 2008 From: nadavh at visionsense.com (Nadav Horesh) Date: Sat, 22 Nov 2008 07:19:27 +0200 Subject: [Numpy-discussion] MATLAB ASCII format References: <6EE37430-1440-452A-8E66-1573A9A2D67C@math.toronto.edu> Message-ID: <710F2847B0018641891D9A216027636029C333@ex3.envision.co.il> Look at numpy's savetxt and loadtxt. For more, look at loadmat and savemat in scipy.io Nadav -----????? ??????----- ???: numpy-discussion-bounces at scipy.org ??? Gideon Simpson ????: ? 21-??????-08 23:35 ??: SciPy Users List; Discussion of Numerical Python ????: [Numpy-discussion] MATLAB ASCII format Is there (or should there be) a routine for reading and writing numpy arrays and matrices in MATLAB ASCII m-file format? -gideon _______________________________________________ Numpy-discussion mailing list Numpy-discussion at scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion -------------- next part -------------- A non-text attachment was scrubbed... 
Name: winmail.dat Type: application/ms-tnef Size: 2891 bytes Desc: not available URL: From berthe.loic at gmail.com Sat Nov 22 14:19:36 2008 From: berthe.loic at gmail.com (=?ISO-8859-1?Q?Lo=EFc_BERTHE?=) Date: Sat, 22 Nov 2008 20:19:36 +0100 Subject: [Numpy-discussion] Need some explanations on assigning/incrementing values in Numpy Message-ID: I've encoutered an error during an ipython's session that I fail to understand : In [12]: n = 4 In [13]: K = mat(diag(arange(2*n))) In [14]: print K [[0 0 0 0 0 0 0 0] [0 1 0 0 0 0 0 0] [0 0 2 0 0 0 0 0] [0 0 0 3 0 0 0 0] [0 0 0 0 4 0 0 0] [0 0 0 0 0 5 0 0] [0 0 0 0 0 0 6 0] [0 0 0 0 0 0 0 7]] In [15]: o = 2*arange(n) In [16]: kc = 10 + arange(n) In [17]: K[ r_[o-1, o], r_[o, o-1] ] = r_[kc, kc] In [18]: print K [[ 0 0 0 0 0 0 0 10] [ 0 1 11 0 0 0 0 0] [ 0 11 2 0 0 0 0 0] [ 0 0 0 3 12 0 0 0] [ 0 0 0 12 4 0 0 0] [ 0 0 0 0 0 5 13 0] [ 0 0 0 0 0 13 6 0] [10 0 0 0 0 0 0 7]] In [19]: K[ r_[o-1, o], r_[o, o-1]] += 10*r_[kc, kc] --------------------------------------------------------------------------- Traceback (most recent call last) /home/loic/Python/numpy/ in () : array is not broadcastable to correct shape In [20]: print K[ r_[o-1, o], r_[o, o-1]].shape, r_[kc, kc].shape (1, 8) (8,) In [21]: print K[ r_[o-1, o], r_[o, o-1]].shape, r_[kc, kc][newaxis, :].shape (1, 8) (1, 8) In [22]: K[ r_[o-1, o], r_[o, o-1]] += 10*r_[kc, kc][newaxis, :] --------------------------------------------------------------------------- Traceback (most recent call last) /home/loic/Python/numpy/ in () : array is not broadcastable to correct shape Could you explain me : - Why do an assignment at line 17 works where an increment raises an error (line 19) ? From robert.kern at gmail.com Sat Nov 22 18:12:43 2008 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 22 Nov 2008 17:12:43 -0600 Subject: [Numpy-discussion] Need some explanations on assigning/incrementing values in Numpy In-Reply-To: References: Message-ID: <3d375d730811221512u64ff5f0by9c91d10084db24ae@mail.gmail.com> On Sat, Nov 22, 2008 at 13:19, Lo?c BERTHE wrote: > I've encoutered an error during an ipython's session that I fail to understand : > > In [12]: n = 4 > In [13]: K = mat(diag(arange(2*n))) > In [14]: print K > [[0 0 0 0 0 0 0 0] > [0 1 0 0 0 0 0 0] > [0 0 2 0 0 0 0 0] > [0 0 0 3 0 0 0 0] > [0 0 0 0 4 0 0 0] > [0 0 0 0 0 5 0 0] > [0 0 0 0 0 0 6 0] > [0 0 0 0 0 0 0 7]] > > In [15]: o = 2*arange(n) > In [16]: kc = 10 + arange(n) > In [17]: K[ r_[o-1, o], r_[o, o-1] ] = r_[kc, kc] > In [18]: print K > [[ 0 0 0 0 0 0 0 10] > [ 0 1 11 0 0 0 0 0] > [ 0 11 2 0 0 0 0 0] > [ 0 0 0 3 12 0 0 0] > [ 0 0 0 12 4 0 0 0] > [ 0 0 0 0 0 5 13 0] > [ 0 0 0 0 0 13 6 0] > [10 0 0 0 0 0 0 7]] > > > In [19]: K[ r_[o-1, o], r_[o, o-1]] += 10*r_[kc, kc] > --------------------------------------------------------------------------- > Traceback (most recent call last) > /home/loic/Python/numpy/ in () > : array is not broadcastable to correct shape > > > In [20]: print K[ r_[o-1, o], r_[o, o-1]].shape, r_[kc, kc].shape > (1, 8) (8,) > > In [21]: print K[ r_[o-1, o], r_[o, o-1]].shape, r_[kc, kc][newaxis, :].shape > (1, 8) (1, 8) > > In [22]: K[ r_[o-1, o], r_[o, o-1]] += 10*r_[kc, kc][newaxis, :] > --------------------------------------------------------------------------- > Traceback (most recent call last) > /home/loic/Python/numpy/ in () > : array is not broadcastable to correct shape > > Could you explain me : > - Why do an assignment at line 17 works where an increment raises an > error (line 19) ? 
matrix objects are a bit weird. Most operations on them always return a 2D matrix, even if the same operation on a regular ndarray would return a 1D array. In the assignment, no object is actually created by indexing K, so this coercion never happens, and the effect is the same as if you were using ndarrays. With +=, on the other hand, indexing K does return something that gets coerced to a 2D matrix. I'm not entirely sure where the ValueError is getting raised, but I suspect it happens when the result is getting assigned back into K. Your code works fine if you just use regular ndarray objects. I highly recommend just using ndarrays, especially if you are going to be doing advanced indexing like this. The "always return a 2D matrix" semantics really get in the way for such things. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From pgmdevlist at gmail.com Sat Nov 22 19:17:26 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Sat, 22 Nov 2008 19:17:26 -0500 Subject: [Numpy-discussion] numpy.ma.mod missing In-Reply-To: <1BA1100E-E1F9-4574-AF7F-C05E30EBEDB9@llnl.gov> References: <1BA1100E-E1F9-4574-AF7F-C05E30EBEDB9@llnl.gov> Message-ID: <559F3515-EF76-4664-81A2-004ABA1C638E@gmail.com> Added on SVN (r6094 for 13x, r6095 for 12x). Thx for reporting! On Nov 19, 2008, at 6:57 PM, Charles ???? Doutriaux wrote: > Hello, > > Can I request that "mod" be added to numpy.ma ? > > Thx, > > C> > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion From charlesr.harris at gmail.com Sun Nov 23 21:58:32 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 23 Nov 2008 19:58:32 -0700 Subject: [Numpy-discussion] Stop generating include files? Message-ID: Hi All, I would like to suggest that it is time to stop generating the include files by scanning the (c) sources. I think this would be a good thing to do for the following reasons. 1. The Numpy API has settled down and we shouldn't be changing is much. 2. It is good practice to separate the declarations and definitions so that they act as checks on one another. 3. It simplifies the build system. Instead of lists of scanned files, lists of API order, and a generator, we simply have *.h files. 4. A simplified build system makes it easier to break up files and generate API functions from code templates. 5. The *.h files will be available in Numpy *before* a build. This should make it easier for newbies to figure out what is going on. I don't suggest we remove the various API tags, they are useful bits of documentation when reading the code. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From millman at berkeley.edu Mon Nov 24 02:13:50 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Sun, 23 Nov 2008 23:13:50 -0800 Subject: [Numpy-discussion] Numpy 1.2.2 ? In-Reply-To: <72F835D2-9BA2-4216-BCF1-34764E35E491@gmail.com> References: <72F835D2-9BA2-4216-BCF1-34764E35E491@gmail.com> Message-ID: On Thu, Nov 20, 2008 at 6:44 AM, Pierre GM wrote: > I've recently introduced some little fixes in the SVN version of > numpy.ma.core > Is there any plan for a 1.2.2 release, or will we directly switch to > 1.3.0 ? Do I need to backport these fixes to 12x ? Yes, please backport your fixes to the 1.2.x branch. 
I would like to get a 1.2.2 release out at least. I would like to release 1.2.2 before we release 0.7.0 final--in case, we need any bugfixes (and *possibly* minor changes to NumPy testing) in NumPy 1.2.x for SciPy 0.7.x. -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From charlesr.harris at gmail.com Mon Nov 24 02:41:34 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 24 Nov 2008 00:41:34 -0700 Subject: [Numpy-discussion] Numpy 1.2.2 ? In-Reply-To: References: <72F835D2-9BA2-4216-BCF1-34764E35E491@gmail.com> Message-ID: On Mon, Nov 24, 2008 at 12:13 AM, Jarrod Millman wrote: > On Thu, Nov 20, 2008 at 6:44 AM, Pierre GM wrote: > > I've recently introduced some little fixes in the SVN version of > > numpy.ma.core > > Is there any plan for a 1.2.2 release, or will we directly switch to > > 1.3.0 ? Do I need to backport these fixes to 12x ? > > Yes, please backport your fixes to the 1.2.x branch. I would like to > get a 1.2.2 release out at least. I would like to release 1.2.2 > before we release 0.7.0 final--in case, we need any bugfixes (and > *possibly* minor changes to NumPy testing) in NumPy 1.2.x for SciPy > 0.7.x. > I'd like to do a 1.1.2 release for the Python 2.3 user(s) to get out some fixes for Python 2.3 that went in after the last release. I don't want to do any more than that, although if something can be copied straight over that might be a go. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From millman at berkeley.edu Mon Nov 24 02:46:41 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Sun, 23 Nov 2008 23:46:41 -0800 Subject: [Numpy-discussion] Numpy 1.2.2 ? In-Reply-To: References: <72F835D2-9BA2-4216-BCF1-34764E35E491@gmail.com> Message-ID: On Sun, Nov 23, 2008 at 11:41 PM, Charles R Harris wrote: > I'd like to do a 1.1.2 release for the Python 2.3 user(s) to get out some > fixes for Python 2.3 that went in after the last release. I don't want to do > any more than that, although if something can be copied straight over that > might be a go. +1. I am happy to help out with this too. How soon do you want to release 1.1.2? I could help later this week. If you can get the branch ready and take care of the release notes, I can take care of everything after that. It would also be great if you could help figure out what needs to be back-ported from the trunk to the 1.2.x branch. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From charlesr.harris at gmail.com Mon Nov 24 03:21:15 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 24 Nov 2008 01:21:15 -0700 Subject: [Numpy-discussion] Numpy 1.2.2 ? In-Reply-To: References: <72F835D2-9BA2-4216-BCF1-34764E35E491@gmail.com> Message-ID: On Mon, Nov 24, 2008 at 12:46 AM, Jarrod Millman wrote: > On Sun, Nov 23, 2008 at 11:41 PM, Charles R Harris > wrote: > > I'd like to do a 1.1.2 release for the Python 2.3 user(s) to get out some > > fixes for Python 2.3 that went in after the last release. I don't want to > do > > any more than that, although if something can be copied straight over > that > > might be a go. > > +1. I am happy to help out with this too. How soon do you want to > release 1.1.2? I could help later this week. If you can get the > branch ready and take care of the release notes, I can take care of > everything after that. 
It would also be great if you could help > figure out what needs to be back-ported from the trunk to the 1.2.x > branch. > The next two or three weeks are going to be busy ones for me, so probably after that unless things go unusually well. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Mon Nov 24 05:52:53 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 24 Nov 2008 12:52:53 +0200 Subject: [Numpy-discussion] working on multiple matrices of the same shape In-Reply-To: <78f7ab620811210157l15e14c76r85c7b23a6a010a89@mail.gmail.com> References: <78f7ab620811210157l15e14c76r85c7b23a6a010a89@mail.gmail.com> Message-ID: <9457e7c80811240252u48b0481dy194b611fe7865b42@mail.gmail.com> 2008/11/21 S?bastien Barth?lemy : > In this spirit, in numpy a set of rotation matrices could be built in > the following way: > > def rotx(theta): > """ > SE(3) matrices corresponding to a rotation around x-axis. Theta is > a 1-d array > """ > costheta = np.cos(theta) > sintheta = np.sin(theta) > H = np.zeros((theta.size,4,4)) > H[:,0,0] = 1 > H[:,3,3] = 1 > H[:,1,1] = costheta > H[:,2,2] = costheta > H[:,2,1] = sintheta > H[:,1,2] = sintheta > return H Btw, you can just create an array of these elements directly: np.array([[costheta, -sintheta, 0], [sintheta, costheta , 0], [0 , 0 , 1]]) Regards St?fan From millman at berkeley.edu Mon Nov 24 06:34:03 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 24 Nov 2008 03:34:03 -0800 Subject: [Numpy-discussion] status of numpy 1.3.0 Message-ID: Now that scipy 0.7.0b1 has been tagged, I wanted to start planning for the NumPy 1.3.0: http://projects.scipy.org/scipy/numpy/milestone/1.3.0 The original plan was to release 1.3 at the end of November. At this point, we are going to have to push back the release date a bit. I would like to get 1.3 out ASAP, so I would like aim for the third week of December. This is how I see the current development trunk: * 2.6 compatablity (Linux 32- and 64-bit done, Windows 32-bit done, Mac 32-bit done) * Generalized Ufuncs (committed) * Ufunc clean-up (committed) * Refactoring numpy.core math configuration (?? bump to 1.4 ??) * Improvements to build warnings (?? bump to 1.4 ??) * Histogram (committed) * NumPy testing improvements (http://projects.scipy.org/scipy/numpy/ticket/957) * Documentation improvements * MaskedArray improvements * Bugfixes Am I missing anything? Is there anything else that we should get in before releasing 1.3? Does it seem reasonable that we could release 1.3 during the third week of December? Who will have time to work on NumPy for the next month? Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From david at ar.media.kyoto-u.ac.jp Mon Nov 24 07:16:29 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 24 Nov 2008 21:16:29 +0900 Subject: [Numpy-discussion] status of numpy 1.3.0 In-Reply-To: References: Message-ID: <492A9B1D.8020808@ar.media.kyoto-u.ac.jp> Jarrod Millman wrote: > Now that scipy 0.7.0b1 has been tagged, I wanted to start planning for > the NumPy 1.3.0: > http://projects.scipy.org/scipy/numpy/milestone/1.3.0 > > For completeness, we were wondering with Jarrod if the main focus of 1.3 could be python 2.6 compatibility (plus what is already in, of course). This is mainly a concern on Mac OS X and Windows (numpy seems to build and work OK on 2.6 on linux last time I tried). 
The rationale was that although python 2.6 is not really a must have compared to 2.5 for most usage of numpy/scipy, since people are now directed to python 2.6 on python.org for windows and mac os X installs, getting it working on 2.6. ASAP would avoid too much trouble for newcomers. > The original plan was to release 1.3 at the end of November. At this > point, we are going to have to push back the release date a bit. I > would like to get 1.3 out ASAP, so I would like aim for the third week > of December. > > This is how I see the current development trunk: > * 2.6 compatablity (Linux 32- and 64-bit done, Windows 32-bit done, > Mac 32-bit done) > A small summary of the issues: I don:t know the status on Mac OS X for python 2.6 - last time I tried, after having installed python 2.6, my installation was broken, and numpy had many trouble, but that may well be my own mistake. Since many developers are on Mac OS X, I guess any problem would be quickly fixed. On Windows: numpy.distutils should now build a working numpy with mingw, as long as the official binary for python 2.6 is built (I won:t go into the details, but python 2.6 lacks some useful build info for numpy to be reliably built with mingw for an arbitrary build - I am working on an upstream patch , but this is unlikely to be available before python 2.7/python 3k). Windows 64 is a PITA, because we can:t use any mingw or cygwin-based toolchain (cygwin only supports 32 bits, mingw is experimental for 64 bits, and not even officially a part of the mingw project AFAIK). It also looks like ATLAS cannot be built on 64 bits, too, since it requires cygwin on windows, and ATLAS configuration fails right at the beginning when I tried the 32 bits cywgin. Assuming I am the only one working on this, I don:t see much hope to see more than a simple numpy built with lapack-lite. This could be useful for people who use numpy for matplotlib, for example; not sure if it worths the trouble. > * Refactoring numpy.core math configuration (?? bump to 1.4 ??) > This has been committed already > * Improvements to build warnings (?? bump to 1.4 ??) > Some has been committed as well, but this has no consequence on distutils-based build (the warnings are only emitted with -W, which distutils does not use by default). David From faltet at pytables.org Mon Nov 24 08:57:32 2008 From: faltet at pytables.org (Francesc Alted) Date: Mon, 24 Nov 2008 14:57:32 +0100 Subject: [Numpy-discussion] status of numpy 1.3.0 In-Reply-To: <492A9B1D.8020808@ar.media.kyoto-u.ac.jp> References: <492A9B1D.8020808@ar.media.kyoto-u.ac.jp> Message-ID: <200811241457.32833.faltet@pytables.org> A Monday 24 November 2008, David Cournapeau escrigu?: > Jarrod Millman wrote: > > Now that scipy 0.7.0b1 has been tagged, I wanted to start planning > > for the NumPy 1.3.0: > > http://projects.scipy.org/scipy/numpy/milestone/1.3.0 > > For completeness, we were wondering with Jarrod if the main focus of > 1.3 could be python 2.6 compatibility (plus what is already in, of > course). This is mainly a concern on Mac OS X and Windows (numpy > seems to build and work OK on 2.6 on linux last time I tried). > > The rationale was that although python 2.6 is not really a must have > compared to 2.5 for most usage of numpy/scipy, since people are now > directed to python 2.6 on python.org for windows and mac os X > installs, getting it working on 2.6. ASAP would avoid too much > trouble for newcomers. 
Not only newcomers, we all know people who like to have the latest software on their machines even if, as you said, 2.6 is not a big step forward in terms of new functionality. So +1 for having support for 2.6 ASAP. Cheers, -- Francesc Alted From Jim.Vickroy at noaa.gov Mon Nov 24 10:32:39 2008 From: Jim.Vickroy at noaa.gov (Jim Vickroy) Date: Mon, 24 Nov 2008 08:32:39 -0700 Subject: [Numpy-discussion] PIL.Image.fromarray bug in numpy interface Message-ID: <492AC917.9020405@noaa.gov> Hello, While using the PIL interface to numpy, I rediscovered a logic error in the PIL.Image.fromarray() procedure. The problem (and a solution) was mentioned earlier at: * http://projects.scipy.org/pipermail/numpy-discussion/2006-December/024903.html There does not seem to be a formal way to report errors to the PIL project, and I was told that the PIL/numpy interface was contributed by the numpy developers so I'm reporting it here. Please let me know if there is something additional I should do. Thanks, -- jv P.S. FWIW, I'm using: * Python version: 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] * numpy version: 1.2.1 * PIL version: 1.1.6 -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgmdevlist at gmail.com Mon Nov 24 10:39:31 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Mon, 24 Nov 2008 10:39:31 -0500 Subject: [Numpy-discussion] status of numpy 1.3.0 In-Reply-To: <200811241457.32833.faltet@pytables.org> References: <492A9B1D.8020808@ar.media.kyoto-u.ac.jp> <200811241457.32833.faltet@pytables.org> Message-ID: Well, talking about support for 2.6: When using explicit outputs for some functions (eg, ma.max, ma.min...), a value that should be masked is transformed into np.nan when the explicit output is not a ma.MaskedArray. That worked great in 2.5, with np.nan automatically transformed when the explicit output had an int dtype. With 2.6, a ValueError is raised instead, as np.nan can no longer be cast to int. What should be the recommended behavior? Raise a ValueError or some other exception, or silently replace np.nan by some value acceptable by the int dtype (0, or something else)? On Nov 24, 2008, at 8:57 AM, Francesc Alted wrote: > A Monday 24 November 2008, David Cournapeau escrigué: >> Jarrod Millman wrote: >>> Now that scipy 0.7.0b1 has been tagged, I wanted to start planning >>> for the NumPy 1.3.0: >>> http://projects.scipy.org/scipy/numpy/milestone/1.3.0 >> >> For completeness, we were wondering with Jarrod if the main focus of >> 1.3 could be python 2.6 compatibility (plus what is already in, of >> course). This is mainly a concern on Mac OS X and Windows (numpy >> seems to build and work OK on 2.6 on linux last time I tried). >> >> The rationale was that although python 2.6 is not really a must have >> compared to 2.5 for most usage of numpy/scipy, since people are now >> directed to python 2.6 on python.org for windows and mac os X >> installs, getting it working on 2.6 ASAP would avoid too much >> trouble for newcomers. > > Not only newcomers, we all know people who like to have the latest > software on their machines even if, as you said, 2.6 is not a big step > forward in terms of new functionality. So +1 for having support for > 2.6 ASAP. > > Cheers, > > -- > Francesc Alted > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion
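A minimal sketch of the 2.5-vs-2.6 behavior Pierre describes (the array and fill value here are made up for illustration; on 2.5 the nan was silently truncated on assignment into an int output, while 2.6 surfaces the ValueError):

import numpy as np

out = np.zeros(3, dtype=int)   # a hypothetical explicit int output
try:
    out[0] = np.nan            # Python 2.6: ValueError, nan cannot be cast to int
except ValueError:
    out[0] = 0                 # one candidate fallback: a fill value valid for the dtype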
From charlesr.harris at gmail.com Mon Nov 24 11:01:12 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 24 Nov 2008 09:01:12 -0700 Subject: [Numpy-discussion] status of numpy 1.3.0 In-Reply-To: References: Message-ID: On Mon, Nov 24, 2008 at 4:34 AM, Jarrod Millman wrote: > Now that scipy 0.7.0b1 has been tagged, I wanted to start planning for > the NumPy 1.3.0: > http://projects.scipy.org/scipy/numpy/milestone/1.3.0 > > The original plan was to release 1.3 at the end of November. At this > point, we are going to have to push back the release date a bit. I > would like to get 1.3 out ASAP, so I would like to aim for the third week > of December. > > This is how I see the current development trunk: > * 2.6 compatibility (Linux 32- and 64-bit done, Windows 32-bit done, > Mac 32-bit done) > * Generalized Ufuncs (committed) The generalized ufunc tests have a blas linkage problem at the moment. I've commented out the call/build code to work on other things but it needs to be fixed for release. The problem is finding the proper name in the library to link to and is likely one of those _ fortran thingies. I think the fix can probably be found in the lapack_lite module. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom.wright at resolversystems.com Mon Nov 24 10:44:11 2008 From: tom.wright at resolversystems.com (Tom Wright) Date: Mon, 24 Nov 2008 15:44:11 +0000 Subject: [Numpy-discussion] Old-style classes in tests Message-ID: <492ACBCB.4060204@resolversystems.com> I am currently working on the Ironclad project porting numpy to Ironpython. It would be quite useful for me if HermitianTestCase in test_linalg.py was a new-style class instead of an old-style class - since Ironpython has a bug where dir operations do not work for classes inheriting from both old- and new- style classes and I'd very much prefer not to patch my version of numpy. In general, it would be useful if whenever this multiple inheritance pattern is used new-style classes are used rather than old style classes. This would require the following classes to change: test_numerictypes - create_values, read_values_plain, read_values_nested test_print - create_zeros, create_values, assign_values, byteorder_values test_io - Roundtriptest test_linalg - LinalgTestCase, HermitianTestCase Would people be amenable to these changes? Thank you very much for your help, Tom Wright
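In miniature, the change Tom is asking for (the class names below are hypothetical, just to show the pattern):

# Old-style: no base class. Mixing this with new-style bases
# is what confuses dir() under IronPython.
class OldStyleMixin:
    pass

# New-style: inherit from object. Multiple inheritance with a
# new-style base such as unittest.TestCase then behaves consistently.
class NewStyleMixin(object):
    pass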
From barthelemy at crans.org Mon Nov 24 11:27:03 2008 From: barthelemy at crans.org (=?ISO-8859-1?Q?S=E9bastien_Barth=E9lemy?=) Date: Mon, 24 Nov 2008 17:27:03 +0100 Subject: [Numpy-discussion] working on multiple matrices of the same shape In-Reply-To: <9457e7c80811240252u48b0481dy194b611fe7865b42@mail.gmail.com> References: <78f7ab620811210157l15e14c76r85c7b23a6a010a89@mail.gmail.com> <9457e7c80811240252u48b0481dy194b611fe7865b42@mail.gmail.com> Message-ID: <78f7ab620811240827h69beb75cub17a1d38a1e6695@mail.gmail.com> >> def rotx(theta): >> """ >> SE(3) matrices corresponding to a rotation around x-axis. Theta is >> a 1-d array >> """ >> costheta = np.cos(theta) >> sintheta = np.sin(theta) >> H = np.zeros((theta.size,4,4)) >> H[:,0,0] = 1 >> H[:,3,3] = 1 >> H[:,1,1] = costheta >> H[:,2,2] = costheta >> H[:,2,1] = sintheta >> H[:,1,2] = sintheta >> return H > > Btw, you can just create an array of these elements directly: > > np.array([[costheta, -sintheta, 0], > [sintheta, costheta , 0], > [0 , 0 , 1]]) Are you sure? Here it reports ValueError: setting an array element with a sequence. probably because theta, sintheta and costheta are 1-d arrays of n>1 elements. However, I'll drop them and use a list of matrices as Charles suggested. Thanks to both of you.
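A sketch of one way around the ValueError Sébastien hits: broadcast the scalar entries to the same shape first, then move the stacking axis to the front. np.rollaxis and the zeros_like/ones_like helpers are standard numpy; the function name rotz_stack is invented here:

import numpy as np

def rotz_stack(theta):
    # theta is a 1-d array; build an (n, 3, 3) stack of z-rotations
    theta = np.asarray(theta)
    c, s = np.cos(theta), np.sin(theta)
    zero, one = np.zeros_like(c), np.ones_like(c)
    # every entry is now a 1-d array of the same length, so np.array
    # no longer chokes on rows that mix arrays and scalars
    H = np.array([[c, -s, zero],
                  [s,  c, zero],
                  [zero, zero, one]])   # shape (3, 3, n)
    return np.rollaxis(H, 2)            # shape (n, 3, 3)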
From oliphant at enthought.com Mon Nov 24 11:31:43 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Mon, 24 Nov 2008 10:31:43 -0600 Subject: [Numpy-discussion] PIL.Image.fromarray bug in numpy interface In-Reply-To: <492AC917.9020405@noaa.gov> References: <492AC917.9020405@noaa.gov> Message-ID: <492AD6EF.6030100@enthought.com> Jim Vickroy wrote: > Hello, > > While using the PIL interface to numpy, I rediscovered a logic error > in the PIL.Image.fromarray() procedure. The problem (and a solution) > was mentioned earlier at: > > * http://projects.scipy.org/pipermail/numpy-discussion/2006-December/024903.html > > There does not seem to be a formal way to report errors to the PIL > project, and I was told that the PIL/numpy interface was contributed > by the numpy developers so I'm reporting it here. > > Please let me know if there is something additional I should do. I would suggest making a patch and submitting it to the PIL. -Travis From oliphant at enthought.com Mon Nov 24 11:32:39 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Mon, 24 Nov 2008 10:32:39 -0600 Subject: [Numpy-discussion] Old-style classes in tests In-Reply-To: <492ACBCB.4060204@resolversystems.com> References: <492ACBCB.4060204@resolversystems.com> Message-ID: <492AD727.9010707@enthought.com> Tom Wright wrote: > I am currently working on the Ironclad project porting numpy to Ironpython. > > It would be quite useful for me if HermitianTestCase in test_linalg.py > was a new-style class instead of an old-style class - since Ironpython > has a bug where dir operations do not work for classes inheriting from > both old- and new- style classes and I'd very much prefer not to patch > my version of numpy. > > In general, it would be useful if whenever this multiple inheritance > pattern is used new-style classes are used rather than old style > classes. This would require the following classes to change: > test_numerictypes - create_values, read_values_plain, read_values_nested > test_print - create_zeros, create_values, assign_values, byteorder_values > test_io - Roundtriptest > test_linalg - LinalgTestCase, HermitianTestCase > I have no trouble making all classes new-style. +1 -Travis From doutriaux1 at llnl.gov Mon Nov 24 11:40:29 2008 From: doutriaux1 at llnl.gov (=?UTF-8?Q?Charles_=D8=B3=D9=85=D9=8A=D8=B1_Doutriaux?=) Date: Mon, 24 Nov 2008 08:40:29 -0800 Subject: [Numpy-discussion] numpy.ma.sort failing with bus error Message-ID: Hello, Using numpy 1.2.1 on a mac os 10.5 I admit the user was sort of stretching the limits but (on his machine) import numpy a=numpy.ones((16800,60,96),'f') numpy.sort(a,axis=0) works import numpy.ma a=numpy.ma.sort((16800,60,96),'f') numpy.ma.sort(a,axis=0) failed with some malloc error: python(435) malloc: *** mmap(size=2097152) failed (error code=12) *** error: can't allocate region *** set a breakpoint in malloc_error_break to debug Bus error Since there's no mask I don't really see how much more memory it's using. Besides, changing 16800 to 15800 still fails (and now that should be using much less memory). Anyhow I would expect a nicer error than a bus error :) Thx, C> From doutriaux1 at llnl.gov Mon Nov 24 12:03:25 2008 From: doutriaux1 at llnl.gov (=?UTF-8?Q?Charles_=D8=B3=D9=85=D9=8A=D8=B1_Doutriaux?=) Date: Mon, 24 Nov 2008 09:03:25 -0800 Subject: [Numpy-discussion] numpy.ma.sort failing with bus error In-Reply-To: References: Message-ID: i mistyped the second line of the sample failing script it should obviously read: a=numpy.ma.ones((16800,60,96),'f') not numpy.ma.sort((16800,60,96),'f') C. On Nov 24, 2008, at 8:40 AM, Charles سمير Doutriaux wrote: > Hello, > > Using numpy 1.2.1 on a mac os 10.5 > > > I admit the user was sort of stretching the limits but (on his > machine) > > import numpy > a=numpy.ones((16800,60,96),'f') > numpy.sort(a,axis=0) > > works > > import numpy.ma > a=numpy.ma.sort((16800,60,96),'f') > numpy.ma.sort(a,axis=0) > > failed with some malloc error: > python(435) malloc: *** mmap(size=2097152) failed (error code=12) > *** error: can't allocate region > *** set a breakpoint in malloc_error_break to debug > Bus error > > Since there's no mask I don't really see how much more memory it's > using. Besides, changing 16800 to 15800 still fails (and now that should > be using much less memory). > > Anyhow I would expect a nicer error than a bus error :) > > Thx, > > C> > > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http:// projects.scipy.org/mailman/listinfo/numpy-discussion > From Jim.Vickroy at noaa.gov Mon Nov 24 12:26:28 2008 From: Jim.Vickroy at noaa.gov (Jim Vickroy) Date: Mon, 24 Nov 2008 10:26:28 -0700 Subject: [Numpy-discussion] PIL.Image.fromarray bug in numpy interface In-Reply-To: <492AD6EF.6030100@enthought.com> References: <492AC917.9020405@noaa.gov> <492AD6EF.6030100@enthought.com> Message-ID: <492AE3C4.5010301@noaa.gov> Travis E. Oliphant wrote: > Jim Vickroy wrote: > >> Hello, >> >> While using the PIL interface to numpy, I rediscovered a logic error >> in the PIL.Image.fromarray() procedure. The problem (and a solution) >> was mentioned earlier at: >> >> * http://projects.scipy.org/pipermail/numpy-discussion/2006-December/024903.html >> >> There does not seem to be a formal way to report errors to the PIL >> project, and I was told that the PIL/numpy interface was contributed >> by the numpy developers so I'm reporting it here. >> >> Please let me know if there is something additional I should do. >> > I would suggest making a patch and submitting it to the PIL. > I did post a suggested patch (one-liner) to the PIL group, but a couple of PIL respondents suggested that I submit the problem here.
I have followed the PIL group postings for some time, and my impression is that it is somewhat difficult to get reader-submitted patches acknowledged. > -Travis > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Mon Nov 24 12:37:39 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 24 Nov 2008 11:37:39 -0600 Subject: [Numpy-discussion] PIL.Image.fromarray bug in numpy interface In-Reply-To: <492AE3C4.5010301@noaa.gov> References: <492AC917.9020405@noaa.gov> <492AD6EF.6030100@enthought.com> <492AE3C4.5010301@noaa.gov> Message-ID: <3d375d730811240937i584c9d4dhfbe8b24f5bd22c5@mail.gmail.com> On Mon, Nov 24, 2008 at 11:26, Jim Vickroy wrote: > Travis E. Oliphant wrote: > > Jim Vickroy wrote: > > > Hello, > > While using the PIL interface to numpy, I rediscovered a logic error > in the PIL.Image.fromarray() procedure. The problem (and a solution) > was mentioned earlier at: > > * > http://projects.scipy.org/pipermail/numpy-discussion/2006-December/024903.html > > There does not seem to be a formal way to report errors to the PIL > project, and I was told that the PIL/numpy interface was contributed > by the numpy developers so I'm reporting it here. > > Please let me know if there is something additional I should do. > > > I would suggest making a patch and submitting it to the PIL. > > > I did post a suggested patch (one-liner) to the PIL group, but a couple of > PIL respondents suggested that I submit the problem here. I have followed > the PIL group postings for some time, and my impression is that it is > somewhat difficult to get reader-submitted patches acknowledged. Tell them that we approve of the change. We don't have commit access to PIL, so I believe that our approval is the only reason they could possibly send you over here. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From Jim.Vickroy at noaa.gov Mon Nov 24 12:46:56 2008 From: Jim.Vickroy at noaa.gov (Jim Vickroy) Date: Mon, 24 Nov 2008 10:46:56 -0700 Subject: [Numpy-discussion] PIL.Image.fromarray bug in numpy interface In-Reply-To: <3d375d730811240937i584c9d4dhfbe8b24f5bd22c5@mail.gmail.com> References: <492AC917.9020405@noaa.gov> <492AD6EF.6030100@enthought.com> <492AE3C4.5010301@noaa.gov> <3d375d730811240937i584c9d4dhfbe8b24f5bd22c5@mail.gmail.com> Message-ID: <492AE890.60102@noaa.gov> Robert Kern wrote: > On Mon, Nov 24, 2008 at 11:26, Jim Vickroy wrote: > >> Travis E. Oliphant wrote: >> >> Jim Vickroy wrote: >> >> >> Hello, >> >> While using the PIL interface to numpy, I rediscovered a logic error >> in the PIL.Image.fromarray() procedure. The problem (and a solution) >> was mentioned earlier at: >> >> * >> http://projects.scipy.org/pipermail/numpy-discussion/2006-December/024903.html >> >> There does not seem to be a formal way to report errors to the PIL >> project, and I was told that the PIL/numpy interface was contributed >> by the numpy developers so I'm reporting it here. >> >> Please let me know if there is something additional I should do. >> >> >> I would suggest making a patch and submitting it to the PIL. 
>> >> >> I did post a suggested patch (one-liner) to the PIL group, but a couple of >> PIL respondents suggested that I submit the problem here. I have followed >> the PIL group postings for some time, and my impression is that it is >> somewhat difficult to get reader-submitted patches acknowledged. >> > > Tell them that we approve of the change. We don't have commit access > to PIL, so I believe that our approval is the only reason they could > possibly send you over here. > > Thank-you, Robert. I will do so. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Chris.Barker at noaa.gov Mon Nov 24 12:51:42 2008 From: Chris.Barker at noaa.gov (Chris Barker) Date: Mon, 24 Nov 2008 12:51:42 -0500 Subject: [Numpy-discussion] status of numpy 1.3.0 In-Reply-To: <492A9B1D.8020808@ar.media.kyoto-u.ac.jp> References: <492A9B1D.8020808@ar.media.kyoto-u.ac.jp> Message-ID: <492AE9AE.4090007@noaa.gov> David Cournapeau wrote: > Windows 64 is a PITA, ... > I don't see much hope to see more than a simple numpy built with > lapack-lite. This could be useful for people who use numpy for > matplotlib, for example; not sure if it's worth the trouble. I think there is a great deal of use for it beyond simple MPL use -- I hardly ever use the LAPACK stuff. Granted, I don't use Win64 either... -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From Chris.Barker at noaa.gov Mon Nov 24 12:57:03 2008 From: Chris.Barker at noaa.gov (Chris Barker) Date: Mon, 24 Nov 2008 12:57:03 -0500 Subject: [Numpy-discussion] PIL.Image.fromarray bug in numpy interface In-Reply-To: <3d375d730811240937i584c9d4dhfbe8b24f5bd22c5@mail.gmail.com> References: <492AC917.9020405@noaa.gov> <492AD6EF.6030100@enthought.com> <492AE3C4.5010301@noaa.gov> <3d375d730811240937i584c9d4dhfbe8b24f5bd22c5@mail.gmail.com> Message-ID: <492AEAEF.9040405@noaa.gov> Robert Kern wrote: >> Jim Vickroy wrote: >> While using the PIL interface to numpy, I rediscovered a logic error >> in the PIL.Image.fromarray() procedure. The problem (and a solution) >> was mentioned earlier at: > Tell them that we approve of the change. We don't have commit access > to PIL, so I believe that our approval is the only reason they could > possibly send you over here. Just for the record, it was me that "sent him over here". I thought it would be good for a numpy dev to check out the patch for correctness -- it looked like a numpy API issue, and I figured Fredrik wouldn't want to look too hard at it to determine if it was correct. so if your "approval" means you've looked at the fix and think it's correct, great! Now we just have to hope Fredrik takes an interest... -thanks, -Chris -- Christopher Barker, Ph.D.
Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From faltet at pytables.org Mon Nov 24 13:34:24 2008 From: faltet at pytables.org (Francesc Alted) Date: Mon, 24 Nov 2008 19:34:24 +0100 Subject: [Numpy-discussion] status of numpy 1.3.0 In-Reply-To: References: Message-ID: <200811241934.25438.faltet@pytables.org> A Monday 24 November 2008, Jarrod Millman escrigué: > Now that scipy 0.7.0b1 has been tagged, I wanted to start planning > for the NumPy 1.3.0: > http://projects.scipy.org/scipy/numpy/milestone/1.3.0 > > The original plan was to release 1.3 at the end of November. At this > point, we are going to have to push back the release date a bit. I > would like to get 1.3 out ASAP, so I would like to aim for the third > week of December. > > This is how I see the current development trunk: > * 2.6 compatibility (Linux 32- and 64-bit done, Windows 32-bit done, > Mac 32-bit done) > * Generalized Ufuncs (committed) > * Ufunc clean-up (committed) > * Refactoring numpy.core math configuration (?? bump to 1.4 ??) > * Improvements to build warnings (?? bump to 1.4 ??) > * Histogram (committed) > * NumPy testing improvements > (http://projects.scipy.org/scipy/numpy/ticket/957) > * Documentation improvements > * MaskedArray improvements > * Bugfixes > > Am I missing anything? Is there anything else that we should get in > before releasing 1.3? Does it seem reasonable that we could release > 1.3 during the third week of December? Who will have time to work on > NumPy for the next month? Just went ahead and compiled the NumPy trunk version on a Windows platform, and although most of the nose tests of NumPy pass (there are some failures, but they seem harmless), my tests say there is an inconsistency in the positive limit value (+1.) of arctanh between 1.2.x and 1.3.x in trunk: Python 2.4.4 (#71, Oct 18 2006, 08:34:43) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> numpy.__version__ '1.2.1' >>> numpy.arctanh(1.) 1.#INF >>> numpy.isinf(numpy.arctanh(1.)) True >>> numpy.arctanh(-1.) -1.#INF >>> numpy.isinf(numpy.arctanh(-1.)) True Python 2.6 (r26:66721, Oct 2 2008, 11:35:03) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> numpy.__version__ '1.3.0.dev6085' >>> numpy.arctanh(1.) nan >>> numpy.isinf(numpy.arctanh(1.)) False >>> numpy.arctanh(-1.) -inf >>> numpy.isinf(numpy.arctanh(-1.)) True As you see, the trunk version returns ``nan`` for arctanh(1.), while 1.2.1 returns ``inf`` (the correct value). For arctanh(-1.) both versions correctly return ``-inf``. I used the official binaries for 1.2.1, while I've used the MSVC 2008 (32-bit) for compiling trunk (the resulting binaries work well in both Windows XP 32-bit and Windows Vista 64-bit). My experiments on Linux show that they both return ``+inf`` and ``-inf``, so it seems that this is a Windows specific issue. Should I file a ticket for this? Cheers, -- Francesc Alted
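Francesc's report boils down to the following check, which passes on Linux and on the 1.2.1 binaries but not on the Windows trunk build (a sketch, not an actual test from the suite):

import numpy as np

for x, expected_sign in [(1.0, 1.0), (-1.0, -1.0)]:
    y = np.arctanh(x)
    # both limits should be infinite with the matching sign;
    # a nan here reproduces the Windows inconsistency above
    assert np.isinf(y) and np.sign(y) == expected_sign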
From faltet at pytables.org Mon Nov 24 13:45:56 2008 From: faltet at pytables.org (Francesc Alted) Date: Mon, 24 Nov 2008 19:45:56 +0100 Subject: [Numpy-discussion] Proposal for changing the names of inverse trigonometrical/hyperbolic functions Message-ID: <200811241945.56313.faltet@pytables.org> Hi, After dealing with another issue, I realized that the names of inverse trigonometrical/hyperbolic functions in NumPy don't follow the main standards in computer science. For example, where Python writes: asin, acos, atan, asinh, acosh, atanh NumPy chose: arcsin, arccos, arctan, arcsinh, arccosh, arctanh And not only Python, the former also seems to be the standard in computer science. Quoting: http://en.wikipedia.org/wiki/Inverse_hyperbolic_function """ The usual abbreviations for them in mathematics are arsinh, arcsinh (in the USA) or asinh (in computer science). ... The acronyms arcsinh, arccosh etc. are commonly used, even though they are misnomers, since the prefix arc is the abbreviation for arcus, while the prefix ar stands for area. """ So, IMHO, I think it would be better to rename the inverse trigonometric functions from ``arc*`` to ``a*`` prefix. Of course, in order to do that correctly, one should add the new names and add a ``DeprecationWarning`` informing that people should start to use the new names. After two or three NumPy versions, the old function names can be removed safely. What do people think? -- Francesc Alted
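One way the transition Francesc describes could be staged -- a sketch only, not numpy's actual code; the helper name is invented:

import warnings
import numpy as np

def _alias_with_warning(func, old_name, new_name):
    # calling the old name keeps working, but emits a DeprecationWarning
    def wrapper(*args, **kwargs):
        warnings.warn("%s is deprecated; use %s instead" % (old_name, new_name),
                      DeprecationWarning, stacklevel=2)
        return func(*args, **kwargs)
    return wrapper

asin = np.arcsin    # the new, standard name
arcsin = _alias_with_warning(np.arcsin, "arcsin", "asin")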
From Chris.Barker at noaa.gov Mon Nov 24 14:01:41 2008 From: Chris.Barker at noaa.gov (Chris Barker) Date: Mon, 24 Nov 2008 14:01:41 -0500 Subject: [Numpy-discussion] Need some explanations on assigning/incrementing values in Numpy In-Reply-To: <3d375d730811221512u64ff5f0by9c91d10084db24ae@mail.gmail.com> References: <3d375d730811221512u64ff5f0by9c91d10084db24ae@mail.gmail.com> Message-ID: <492AFA15.7020105@noaa.gov> Robert Kern wrote: > matrix objects are a bit weird. Most operations on them always return > a 2D matrix, even if the same operation on a regular ndarray would > return a 1D array. Whatever happened to the proposals to improve this? I think there were some good ideas floated. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From efiring at hawaii.edu Mon Nov 24 14:31:28 2008 From: efiring at hawaii.edu (Eric Firing) Date: Mon, 24 Nov 2008 09:31:28 -1000 Subject: [Numpy-discussion] Proposal for changing the names of inverse trigonometrical/hyperbolic functions In-Reply-To: <200811241945.56313.faltet@pytables.org> References: <200811241945.56313.faltet@pytables.org> Message-ID: <492B0110.40606@hawaii.edu> Francesc Alted wrote: > Hi, > > After dealing with another issue, I realized that the names of inverse > trigonometrical/hyperbolic functions in NumPy don't follow the main > standards in computer science. For example, where Python writes: > > asin, acos, atan, asinh, acosh, atanh > > NumPy chose: > > arcsin, arccos, arctan, arcsinh, arccosh, arctanh > > And not only Python, the former also seems to be the standard in > computer science. Quoting: > > http://en.wikipedia.org/wiki/Inverse_hyperbolic_function > > """ > The usual abbreviations for them in mathematics are arsinh, arcsinh (in > the USA) or asinh (in computer science). > ... > The acronyms arcsinh, arccosh etc. are commonly used, even though they > are misnomers, since the prefix arc is the abbreviation for arcus, > while the prefix ar stands for area. > """ > > So, IMHO, I think it would be better to rename the inverse trigonometric > functions from ``arc*`` to ``a*`` prefix. Of course, in order to do > that correctly, one should add the new names and add a > ``DeprecationWarning`` informing that people should start to use the > new names. After two or three NumPy versions, the old function names > can be removed safely. > > What do people think? > +1 I have stumbled over this myself. If there is resistance to removing the old names, then just leave them as synonyms; but definitely numpy should have asin etc. Eric From ggellner at uoguelph.ca Mon Nov 24 14:35:53 2008 From: ggellner at uoguelph.ca (Gabriel Gellner) Date: Mon, 24 Nov 2008 14:35:53 -0500 Subject: [Numpy-discussion] Proposal for changing the names of inverse trigonometrical/hyperbolic functions In-Reply-To: <200811241945.56313.faltet@pytables.org> References: <200811241945.56313.faltet@pytables.org> Message-ID: <20081124193553.GA23065@encolpuis> On Mon, Nov 24, 2008 at 07:45:56PM +0100, Francesc Alted wrote: > Hi, > > After dealing with another issue, I realized that the names of inverse > trigonometrical/hyperbolic functions in NumPy don't follow the main > standards in computer science. For example, where Python writes: > > asin, acos, atan, asinh, acosh, atanh > > NumPy chose: > > arcsin, arccos, arctan, arcsinh, arccosh, arctanh > > And not only Python, the former also seems to be the standard in > computer science. Quoting: > > http://en.wikipedia.org/wiki/Inverse_hyperbolic_function > > """ > The usual abbreviations for them in mathematics are arsinh, arcsinh (in > the USA) or asinh (in computer science). > ... > The acronyms arcsinh, arccosh etc. are commonly used, even though they > are misnomers, since the prefix arc is the abbreviation for arcus, > while the prefix ar stands for area. > """ > > So, IMHO, I think it would be better to rename the inverse trigonometric > functions from ``arc*`` to ``a*`` prefix. Of course, in order to do > that correctly, one should add the new names and add a > ``DeprecationWarning`` informing that people should start to use the > new names. After two or three NumPy versions, the old function names > can be removed safely. > > What do people think? > +1 Gabriel From oliphant at enthought.com Mon Nov 24 14:57:11 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Mon, 24 Nov 2008 13:57:11 -0600 Subject: [Numpy-discussion] Proposal for changing the names of inverse trigonometrical/hyperbolic functions In-Reply-To: <200811241945.56313.faltet@pytables.org> References: <200811241945.56313.faltet@pytables.org> Message-ID: <492B0717.1020005@enthought.com> Francesc Alted wrote: > So, IMHO, I think it would be better to rename the inverse trigonometric > functions from ``arc*`` to ``a*`` prefix. Of course, in order to do > that correctly, one should add the new names and add a > ``DeprecationWarning`` informing that people should start to use the > new names. After two or three NumPy versions, the old function names > can be removed safely. > > What do people think? > > +1 -Travis From pgmdevlist at gmail.com Mon Nov 24 15:04:11 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Mon, 24 Nov 2008 15:04:11 -0500 Subject: [Numpy-discussion] numpy.ma.sort failing with bus error In-Reply-To: References: Message-ID: <4F1FAC15-2369-4E00-B6B0-430E9E763963@gmail.com> Charles, Confirmed on my machine...
I'm gonna have to clean ma.sort, as there are indeed some temporaries that probably don't need to be created. I must warn you however that I won't have a lot of time to spend on that in the next few days. In any case, of course, I'll keep you posted. Thx for reporting! On Nov 24, 2008, at 12:03 PM, Charles سمير Doutriaux wrote: > i mistyped the second line of the sample failing script > it should obviously read: > a=numpy.ma.ones((16800,60,96),'f') > not numpy.ma.sort((16800,60,96),'f') > > C. > > On Nov 24, 2008, at 8:40 AM, Charles سمير Doutriaux wrote: > >> Hello, >> >> Using numpy 1.2.1 on a mac os 10.5 >> >> >> I admit the user was sort of stretching the limits but (on his >> machine) >> >> import numpy >> a=numpy.ones((16800,60,96),'f') >> numpy.sort(a,axis=0) >> >> works >> >> import numpy.ma >> a=numpy.ma.sort((16800,60,96),'f') >> numpy.ma.sort(a,axis=0) >> >> failed with some malloc error: >> python(435) malloc: *** mmap(size=2097152) failed (error code=12) >> *** error: can't allocate region >> *** set a breakpoint in malloc_error_break to debug >> Bus error >> >> Since there's no mask I don't really see how much more memory it's >> using. Besides, changing 16800 to 15800 still fails (and now that >> should >> be using much less memory). >> >> Anyhow I would expect a nicer error than a bus error :) >> >> Thx, >> >> C> >> >> >> >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at scipy.org >> http:// projects.scipy.org/mailman/listinfo/numpy-discussion >> > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion From wright at esrf.fr Mon Nov 24 15:00:48 2008 From: wright at esrf.fr (Jon Wright) Date: Mon, 24 Nov 2008 21:00:48 +0100 Subject: [Numpy-discussion] Proposal for changing the names of inverse trigonometrical/hyperbolic functions In-Reply-To: <492B0110.40606@hawaii.edu> References: <200811241945.56313.faltet@pytables.org> <492B0110.40606@hawaii.edu> Message-ID: <492B07F0.9070902@esrf.fr> Eric Firing wrote: > Francesc Alted wrote: > >> So, IMHO, I think it would be better to rename the inverse trigonometric >> functions from ``arc*`` to ``a*`` prefix. > > +1 > I have stumbled over this myself. If there is resistance to removing -1 There is resistance. Please don't remove the old names. Also note that your proposed change will alter people's code in subtle, but potentially very "interesting" ways: >>> from math import * >>> from numpy import * >>> type(arcsin(1)) is type(asin(1)) False >>> from numpy import arcsin as transformacion_del_arco_seno >>> arcsin == transformacion_del_arco_seno True asin(1j) raises an exception, arcsin doesn't. They are *different* functions, hence the names. I have the feeling the only times I ever write to this list is to say "please don't change the API". So, here I am again, "please don't change the API". This is a cosmetic change whose only effect seems to be to have everyone change their code, and then support multiple incompatible numpy versions. Thanks, Jon
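Jon's point can also be seen without complex arguments -- a sketch; the exact exception text varies by Python version:

import math
import numpy as np

x = 2.0                  # outside the real domain of asin/arcsin
y = np.arcsin(x)         # numpy returns nan rather than raising
assert np.isnan(y)
try:
    math.asin(x)         # the math version raises instead
except ValueError:       # "math domain error"
    pass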
From robert.kern at gmail.com Mon Nov 24 15:32:26 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 24 Nov 2008 14:32:26 -0600 Subject: [Numpy-discussion] Proposal for changing the names of inverse trigonometrical/hyperbolic functions In-Reply-To: <492B07F0.9070902@esrf.fr> References: <200811241945.56313.faltet@pytables.org> <492B0110.40606@hawaii.edu> <492B07F0.9070902@esrf.fr> Message-ID: <3d375d730811241232k7f1f06d1p521d3cfa60607879@mail.gmail.com> On Mon, Nov 24, 2008 at 14:00, Jon Wright wrote: > I have the feeling the only times I ever write to this list is to say > "please don't change the API". So, here I am again, "please don't change > the API". This is a cosmetic change whose only effect seems to be to > have everyone change their code, and then support multiple incompatible > numpy versions. I agree. -1. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ggellner at uoguelph.ca Mon Nov 24 16:22:02 2008 From: ggellner at uoguelph.ca (Gabriel Gellner) Date: Mon, 24 Nov 2008 16:22:02 -0500 Subject: [Numpy-discussion] Proposal for changing the names of inverse trigonometrical/hyperbolic functions In-Reply-To: <492B07F0.9070902@esrf.fr> References: <200811241945.56313.faltet@pytables.org> <492B0110.40606@hawaii.edu> <492B07F0.9070902@esrf.fr> Message-ID: <20081124212202.GA23965@encolpuis> > There is resistance. Please don't remove the old names. Also note that > your proposed change will alter people's code in subtle, but potentially > very "interesting" ways: > > >>> from math import * > >>> from numpy import * > >>> type(arcsin(1)) is type(asin(1)) > False > >>> from numpy import arcsin as transformacion_del_arco_seno > >>> arcsin == transformacion_del_arco_seno > True > > asin(1j) raises an exception, arcsin doesn't. They are *different* > functions, hence the names. > Yet: >>> type(np.sin(1)) == type(math.sin(1)) False And they have the same name. Isn't this what namespaces are for? I think it is strange that some of the math functions have the same name, and some don't. I can't see how being different functions justifies this, or we need to rename the normal trig functions. I can see not wanting to break API compatibility but I don't find the `different functions` argument compelling. Gabriel From gael.varoquaux at normalesup.org Mon Nov 24 16:43:21 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 24 Nov 2008 22:43:21 +0100 Subject: [Numpy-discussion] Proposal for changing the names of inverse trigonometrical/hyperbolic functions In-Reply-To: <492B07F0.9070902@esrf.fr> References: <200811241945.56313.faltet@pytables.org> <492B0110.40606@hawaii.edu> <492B07F0.9070902@esrf.fr> Message-ID: <20081124214321.GC22820@phare.normalesup.org> On Mon, Nov 24, 2008 at 09:00:48PM +0100, Jon Wright wrote: > There is resistance. Please don't remove the old names. Also note that > your proposed change will alter people's code in subtle, but potentially > very "interesting" ways: > >>> from math import * > >>> from numpy import * > >>> type(arcsin(1)) is type(asin(1)) > False > >>> from numpy import arcsin as transformacion_del_arco_seno > >>> arcsin == transformacion_del_arco_seno > True "from foo import *" is really bad.
I used to think it wasn't that bad, but I came to realize over the years that it did nothing more than cause confusion (like the one above), and that the cost was very small. Maybe it is just that I have lost contact with basic users... > I have the feeling the only times I ever write to this list is to say > "please don't change the API". I understand your point, and it is very valid. I am +0 on that ("+" because I favor consistency, "0" because as you point out, this is bad). Gaël From dwf at cs.toronto.edu Mon Nov 24 17:00:47 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Mon, 24 Nov 2008 17:00:47 -0500 Subject: [Numpy-discussion] Proposal for changing the names of inverse trigonometrical/hyperbolic functions In-Reply-To: <20081124212202.GA23965@encolpuis> References: <200811241945.56313.faltet@pytables.org> <492B0110.40606@hawaii.edu> <492B07F0.9070902@esrf.fr> <20081124212202.GA23965@encolpuis> Message-ID: On 24-Nov-08, at 4:22 PM, Gabriel Gellner wrote: >> asin(1j) raises an exception, arcsin doesn't. They are *different* >> functions, hence the names. >> > Yet: > >>>> type(np.sin(1)) == type(math.sin(1)) > False In fact, this goes for every single function listed in the math module's docs, except for the somewhat useless pow(). *Every* other function in math has a corresponding numpy ufunc with the exact same name. So, no, I don't think that's a compelling argument either. > And they have the same name. Isn't this what namespaces are for? I > think it is > strange that some of the math functions have the same name, and some > don't. I > can't see how being different functions justifies this, or we need > to rename > the normal trig functions. > > I can see not wanting to break API compatibility but I don't find the > `different functions` argument compelling. +1. Mixing np.foo and math.foo calls is kind of a recipe for disaster in the general case, I would think. David
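Gabriel's observation, made concrete -- plain Python, nothing assumed beyond the two modules:

import math
import numpy as np

# same name, different return types: a namespace already separates
# the scalar math world from the array world
assert type(math.sin(1)) is float
assert type(np.sin(1)) is np.float64
assert type(np.sin(1)) is not type(math.sin(1))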
From charlesr.harris at gmail.com Mon Nov 24 17:50:42 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 24 Nov 2008 15:50:42 -0700 Subject: [Numpy-discussion] Proposal for changing the names of inverse trigonometrical/hyperbolic functions In-Reply-To: References: <200811241945.56313.faltet@pytables.org> <492B0110.40606@hawaii.edu> <492B07F0.9070902@esrf.fr> <20081124212202.GA23965@encolpuis> Message-ID: On Mon, Nov 24, 2008 at 3:00 PM, David Warde-Farley wrote: > > On 24-Nov-08, at 4:22 PM, Gabriel Gellner wrote: > > >> asin(1j) raises an exception, arcsin doesn't. They are *different* > >> functions, hence the names. > >> > > Yet: > > > >>>> type(np.sin(1)) == type(math.sin(1)) > > False > > In fact, this goes for every single function listed in the math > module's docs, except for the somewhat useless pow(). *Every* other > function in math has a corresponding numpy ufunc with the exact same > name. So, no, I don't think that's a compelling argument either. > > > And they have the same name. Isn't this what namespaces are for? I > > think it is > > strange that some of the math functions have the same name, and some > > don't. I > > can't see how being different functions justifies this, or we need > > to rename > > the normal trig functions. > > > > I can see not wanting to break API compatibility but I don't find the > > `different functions` argument compelling. > > +1. Mixing np.foo and math.foo calls is kind of a recipe for disaster > in the general case, I would think. > Yes, but many folks will do it for quick and dirty ease. I also don't see why we need to deprecate the old names, just leave them in there. To add new names like this, just copy the old ufunc definition in the generator and change "arcsin" to "asin". I think the currently deprecated C-API functions should also remain with their warnings. They only amount to about a hundred lines of code. That said, I'm 0- on this at the moment. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From millman at berkeley.edu Mon Nov 24 17:55:19 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 24 Nov 2008 14:55:19 -0800 Subject: [Numpy-discussion] Proposal for changing the names of inverse trigonometrical/hyperbolic functions In-Reply-To: <200811241945.56313.faltet@pytables.org> References: <200811241945.56313.faltet@pytables.org> Message-ID: On Mon, Nov 24, 2008 at 10:45 AM, Francesc Alted wrote: > So, IMHO, I think it would be better to rename the inverse trigonometric > functions from ``arc*`` to ``a*`` prefix. Of course, in order to do > that correctly, one should add the new names and add a > ``DeprecationWarning`` informing that people should start to use the > new names. After two or three NumPy versions, the old function names > can be removed safely. > > What do people think? +1 It seems there is a fair amount of favor for adding the new names. There is some resistance to removing the old ones. I would be happy to deprecate the old ones, but leave them in until we release a new major release (i.e., NumPy 2.0.0). We could start creating a list of API/ABI clean-ups for whenever we find a compelling reason to release a new major version. In the meantime, we can leave the old names in and just add a deprecation note to the docs. Once we are ready to release 2.0, we can release a 1.x with deprecation warnings. -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From charlesr.harris at gmail.com Mon Nov 24 18:09:02 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 24 Nov 2008 16:09:02 -0700 Subject: [Numpy-discussion] Proposal for changing the names of inverse trigonometrical/hyperbolic functions In-Reply-To: References: <200811241945.56313.faltet@pytables.org> Message-ID: On Mon, Nov 24, 2008 at 3:55 PM, Jarrod Millman wrote: > On Mon, Nov 24, 2008 at 10:45 AM, Francesc Alted > wrote: > > So, IMHO, I think it would be better to rename the inverse trigonometric > > functions from ``arc*`` to ``a*`` prefix. Of course, in order to do > > that correctly, one should add the new names and add a > > ``DeprecationWarning`` informing that people should start to use the > > new names. After two or three NumPy versions, the old function names > > can be removed safely. > > > > What do people think? > > +1 > It seems there is a fair amount of favor for adding the new names. > There is some resistance to removing the old ones. I would be happy > to deprecate the old ones, but leave them in until we release a new > major release (i.e., NumPy 2.0.0). We could start creating a list of > API/ABI clean-ups for whenever we find a compelling reason to release > a new major version. In the meantime, we can leave the old names in > and just add a deprecation note to the docs. Once we are ready to > release 2.0, we can release a 1.x with deprecation warnings.
>
This still leaves some incompatibilities; code written with the new functions won't run on older releases of numpy so folks who need portability will have to use the old names. Note that most Linux distros lag a good ways behind the latest and greatest numpy. I say to wait for a major release to add the new names and just leave the old ones alone. This all recalls the hassle of going through all my old code changing from Numeric->Numarray->Numpy. It wasn't difficult but it did consume time. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From discerptor at gmail.com Mon Nov 24 18:10:07 2008 From: discerptor at gmail.com (Joshua Lippai) Date: Mon, 24 Nov 2008 15:10:07 -0800 Subject: [Numpy-discussion] Proposal for changing the names of inverse trigonometrical/hyperbolic functions In-Reply-To: <492B07F0.9070902@esrf.fr> References: <200811241945.56313.faltet@pytables.org> <492B0110.40606@hawaii.edu> <492B07F0.9070902@esrf.fr> Message-ID: <9911419a0811241510u4e350b6br85b783a996559a40@mail.gmail.com> I agree with Jon here. I can see plenty of motivation for adding the names asin, etc., but there really isn't a need to remove the current versions, and it will just introduce compatibility issues when someone tries to run code written with NumPy 1.x using a NumPy 2.x installation for even the simplest scripts. We wouldn't exactly be wasting tons of space by keeping both in. So as the suggestion stands, -1. Josh On Mon, Nov 24, 2008 at 12:00 PM, Jon Wright wrote: > Eric Firing wrote: >> Francesc Alted wrote: > This is a cosmetic change whose only effect seems to be to > have everyone change their code, and then support multiple incompatible > numpy versions. > > Thanks, > > Jon > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > From charlesr.harris at gmail.com Mon Nov 24 18:13:22 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 24 Nov 2008 16:13:22 -0700 Subject: [Numpy-discussion] Proposal for changing the names of inverse trigonometrical/hyperbolic functions In-Reply-To: References: <200811241945.56313.faltet@pytables.org> Message-ID: On Mon, Nov 24, 2008 at 4:09 PM, Charles R Harris wrote: > > > On Mon, Nov 24, 2008 at 3:55 PM, Jarrod Millman wrote: > >> On Mon, Nov 24, 2008 at 10:45 AM, Francesc Alted >> wrote: >> > So, IMHO, I think it would be better to rename the inverse trigonometric >> > functions from ``arc*`` to ``a*`` prefix. Of course, in order to do >> > that correctly, one should add the new names and add a >> > ``DeprecationWarning`` informing that people should start to use the >> > new names. After two or three NumPy versions, the old function names >> > can be removed safely. >> > >> > What do people think? >> >> +1 >> It seems there is a fair amount of favor for adding the new names. >> There is some resistance to removing the old ones. I would be happy >> to deprecate the old ones, but leave them in until we release a new >> major release (i.e., NumPy 2.0.0). We could start creating a list of >> API/ABI clean-ups for whenever we find a compelling reason to release >> a new major version. In the meantime, we can leave the old names in >> and just add a deprecation note to the docs. Once we are ready to >> release 2.0, we can release a 1.x with deprecation warnings.
>> > > This still leaves some incompatibilities; code written with the new > functions won't run on older releases of numpy so folks who need portability > will have to use the old names. Note that most Linux distros lag a good ways > behind the latest and greatest numpy. I say to wait for a major release to > add the new names and just leave the old ones alone. This all recalls the > hassle of going through all my old code changing from > Numeric->Numarray->Numpy. It wasn't difficult but it did consume time. > Maybe we could push all the changes off to a Numpy release compatible with Python 3.0. Folks will expect a certain amount of hassle when making that switch. Re portability: remember how much trouble it was making Numpy work on Python 2.3 after we used features introduced in later versions? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Mon Nov 24 18:22:06 2008 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 24 Nov 2008 15:22:06 -0800 Subject: [Numpy-discussion] Proposal for changing the names of inverse trigonometrical/hyperbolic functions In-Reply-To: References: <200811241945.56313.faltet@pytables.org> Message-ID: <1e2af89e0811241522y2ae4bb1ft8685c802fab08863@mail.gmail.com> Hi, I think this change could be confusing. numpy.asum numpy.arange numpy.amax etc all have the intended meaning of 'a-for-array-version-of-function'. This obviously isn't the case for 'acos'. Explaining the difference could be painful. Best, Matthew From robert.kern at gmail.com Mon Nov 24 18:23:09 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 24 Nov 2008 17:23:09 -0600 Subject: [Numpy-discussion] Proposal for changing the names of inverse trigonometrical/hyperbolic functions In-Reply-To: References: <200811241945.56313.faltet@pytables.org> Message-ID: <3d375d730811241523p4365e202j5f67a321f22356ac@mail.gmail.com> On Mon, Nov 24, 2008 at 17:13, Charles R Harris wrote: > Maybe we could push all the changes off to a Numpy release compatible with > Python 3.0. Folks will expect a certain amount of hassle when making that > switch. Guido, et al., have specifically asked that projects not do this if they can at all avoid it. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Mon Nov 24 18:23:09 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 24 Nov 2008 17:23:09 -0600 Subject: [Numpy-discussion] Proposal for changing the names of inverse trigonometrical/hyperbolic functions In-Reply-To: References: <200811241945.56313.faltet@pytables.org> Message-ID: <3d375d730811241523p4365e202j5f67a321f22356ac@mail.gmail.com> On Mon, Nov 24, 2008 at 17:13, Charles R Harris wrote: > Maybe we could push all the changes off to a Numpy release compatible with > Python 3.0. Folks will expect a certain amount of hassle when making that > switch. Guido, et al., have specifically asked that projects not do this if they can at all avoid it. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From Chris.Barker at noaa.gov Mon Nov 24 19:40:16 2008 From: Chris.Barker at noaa.gov (Chris Barker) Date: Mon, 24 Nov 2008 19:40:16 -0500 Subject: [Numpy-discussion] Proposal for changing the names of inverse trigonometrical/hyperbolic functions In-Reply-To: <1e2af89e0811241522y2ae4bb1ft8685c802fab08863@mail.gmail.com> References: <200811241945.56313.faltet@pytables.org> <1e2af89e0811241522y2ae4bb1ft8685c802fab08863@mail.gmail.com> Message-ID: <492B4970.1010209@noaa.gov> Matthew Brett wrote: > numpy.asum numpy.arange numpy.amax etc all have the intended meaning > of 'a-for-array-version-of-function'. This obviously isn't the case > for 'acos'. actually, it is, isn't it? a version of math.cos that works for arrays? But anyway, if we had it to do all over again, I'd never suggest that anyone use "import *", and I would have called all those numpy.range, numpy.max, etc. no, I'm not suggesting that we break the API now.... -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From robert.kern at gmail.com Mon Nov 24 20:05:56 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 24 Nov 2008 19:05:56 -0600 Subject: [Numpy-discussion] Proposal for changing the names of inverse trigonometrical/hyperbolic functions In-Reply-To: <492B4970.1010209@noaa.gov> References: <200811241945.56313.faltet@pytables.org> <1e2af89e0811241522y2ae4bb1ft8685c802fab08863@mail.gmail.com> <492B4970.1010209@noaa.gov> Message-ID: <3d375d730811241705q278951c7oa8dc83749729221c@mail.gmail.com> On Mon, Nov 24, 2008 at 18:40, Chris Barker wrote: > Matthew Brett wrote: >> numpy.asum numpy.arange numpy.amax etc all have the intended meaning >> of 'a-for-array-version-of-function'. This obviously isn't the case >> for 'acos'. > > actually, it is, isn't it? a version of math.cos that works for arrays? No. Not at all. acos() and arccos() are the inverse functions of cos(). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From cournape at gmail.com Mon Nov 24 22:18:17 2008 From: cournape at gmail.com (David Cournapeau) Date: Tue, 25 Nov 2008 12:18:17 +0900 Subject: [Numpy-discussion] Numpy on Mac OS X python 2.6 Message-ID: <5b8d13220811241918v62d26f1cte1adf163ef0a11d5@mail.gmail.com> Hi, Following the discussion on python 2.6 support for numpy, I tried last svn on mac os X, and I get a number of failures which I don't understand, which seem to be linked to dtype code, more exactly to endianness: ====================================================================== FAIL: test_basic (test_multiarray.TestClip) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/david/pylocal/lib/python2.6/site-packages/numpy/core/tests/test_multiarray.py", line 677, in test_basic self._clip_type('float',1024,-12.8,100.2, inplace=inplace) File "/Users/david/pylocal/lib/python2.6/site-packages/numpy/core/tests/test_multiarray.py", line 671, in _clip_type assert_equal(x.dtype.byteorder,byteorder) File "/Users/david/pylocal/lib/python2.6/site-packages/numpy/testing/utils.py", line 183, in assert_equal raise AssertionError(msg) AssertionError: Items are not equal: ACTUAL: '>' DESIRED: '=' ====================================================================== FAIL: test_binary (test_multiarray.TestFromstring) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/david/pylocal/lib/python2.6/site-packages/numpy/core/tests/test_multiarray.py", line 120, in test_binary assert_array_equal(a, array([1,2,3,4])) File "/Users/david/pylocal/lib/python2.6/site-packages/numpy/testing/utils.py", line 303, in assert_array_equal verbose=verbose, header='Arrays are not equal') File "/Users/david/pylocal/lib/python2.6/site-packages/numpy/testing/utils.py", line 295, in assert_array_compare raise AssertionError(msg) AssertionError: Arrays are not equal (mismatch 100.0%) x: array([ 4.60060299e-41, 8.96831017e-44, 2.30485571e-41, 4.60074312e-41], dtype=float32) y: array([1, 2, 3, 4]) ... http://scipy.org/scipy/numpy/ticket/958 Does anyone have a clue about where to look at ? 
I am wondering why it only appears on Mac OS X, David From charlesr.harris at gmail.com Mon Nov 24 22:41:31 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 24 Nov 2008 20:41:31 -0700 Subject: [Numpy-discussion] Numpy on Mac OS X python 2.6 In-Reply-To: <5b8d13220811241918v62d26f1cte1adf163ef0a11d5@mail.gmail.com> References: <5b8d13220811241918v62d26f1cte1adf163ef0a11d5@mail.gmail.com> Message-ID: On Mon, Nov 24, 2008 at 8:18 PM, David Cournapeau wrote: > Hi, > > Following the discussion on python 2.6 support for numpy, I tried last > svn on mac os X, and I get a number of failures which I don't > understand, which seem to be linked to dtype code, more exactly to > endianness: > > > ====================================================================== > FAIL: test_basic (test_multiarray.TestClip) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Users/david/pylocal/lib/python2.6/site-packages/numpy/core/tests/test_multiarray.py", > line 677, in test_basic > self._clip_type('float',1024,-12.8,100.2, inplace=inplace) > File > "/Users/david/pylocal/lib/python2.6/site-packages/numpy/core/tests/test_multiarray.py", > line 671, in _clip_type > assert_equal(x.dtype.byteorder,byteorder) > File > "/Users/david/pylocal/lib/python2.6/site-packages/numpy/testing/utils.py", > line 183, in assert_equal > raise AssertionError(msg) > AssertionError: > Items are not equal: > ACTUAL: '>' > DESIRED: '=' > > ====================================================================== > FAIL: test_binary (test_multiarray.TestFromstring) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Users/david/pylocal/lib/python2.6/site-packages/numpy/core/tests/test_multiarray.py", > line 120, in test_binary > assert_array_equal(a, array([1,2,3,4])) > File > "/Users/david/pylocal/lib/python2.6/site-packages/numpy/testing/utils.py", > line 303, in assert_array_equal > verbose=verbose, header='Arrays are not equal') > File > "/Users/david/pylocal/lib/python2.6/site-packages/numpy/testing/utils.py", > line 295, in assert_array_compare > raise AssertionError(msg) > AssertionError: > Arrays are not equal > > (mismatch 100.0%) > x: array([ 4.60060299e-41, 8.96831017e-44, 2.30485571e-41, > 4.60074312e-41], dtype=float32) > y: array([1, 2, 3, 4]) > Sure enough, it's byteswapped. Is it correct that: 1) This problem is specific to 2.6 and 2.5 works. 2) It's on Intel hardware? What about normal doubles, do they work? Does python work? Is it possible there are two versions of (python, lib,...), one for ppc and the other for Intel that are getting confused? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From cmueller_dev at yahoo.com Mon Nov 24 22:49:19 2008 From: cmueller_dev at yahoo.com (Chris Mueller) Date: Mon, 24 Nov 2008 19:49:19 -0800 (PST) Subject: [Numpy-discussion] CorePy 1.0 Release (x86, Cell BE, BSD!) Message-ID: <550819.41862.qm@web111214.mail.gq1.yahoo.com> Announcing CorePy 1.0 - http://www.corepy.org We are pleased to announce the latest release of CorePy. CorePy is a complete system for developing machine-level programs in Python. CorePy lets developers build and execute assembly-level programs interactively from the Python command prompt, embed them directly in Python applications, or export them to standard assembly languages. 
CorePy's straightforward APIs enable the creation of complex, high-performance applications that take advantage of processor features usually inaccessible from high-level scripting languages, such as multi-core execution and vector instruction sets (SSE, VMX, SPU). This version addresses the two most frequently asked questions about CorePy: 1) Does CorePy support x86 processors? Yes! CorePy now has extensive support for 32/64-bit x86 and SSE ISAs on Linux and OS X*. 2) Is CorePy Open Source? Yes! CorePy now uses the standard BSD license. Of course, CorePy still supports PowerPC and Cell BE SPU processors. In fact, for this release, the Cell run-time was redesigned from the ground up to remove the dependency on IBM's libspe and now uses the system-level interfaces to work directly with the SPUs (and, CorePy is still the most fun way to program the PS3). CorePy is written almost entirely in Python. Its run-time system does not rely on any external compilers or assemblers. If you have the need to write tight, fast code from Python, want to demystify machine-level code generation, or just miss the good-old days of assembly hacking, check out CorePy! And, if you don't believe us, here's our favorite user quote: "CorePy makes assembly fun again!" __credits__ = """ CorePy is developed by Chris Mueller, Andrew Friedley, and Ben Martin and is supported by the Open Systems Lab at Indiana University. Chris can be reached at cmueller[underscore]dev[at]yahoo[dot]com. """ __footnote__ = """ *Any volunteers for a Windows port? :) """ -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Mon Nov 24 23:02:13 2008 From: cournape at gmail.com (David Cournapeau) Date: Tue, 25 Nov 2008 13:02:13 +0900 Subject: [Numpy-discussion] Numpy on Mac OS X python 2.6 In-Reply-To: References: <5b8d13220811241918v62d26f1cte1adf163ef0a11d5@mail.gmail.com> Message-ID: <5b8d13220811242002p376a4cd6vda7bb029ca9354a0@mail.gmail.com> On Tue, Nov 25, 2008 at 12:41 PM, Charles R Harris wrote: > > 1) This problem is specific to 2.6 and 2.5 works. Yes > 2) It's on Intel hardware? Yes. Here is a minimal test which shows the problem: import numpy as np assert np.dtype(' References: <5b8d13220811241918v62d26f1cte1adf163ef0a11d5@mail.gmail.com> <5b8d13220811242002p376a4cd6vda7bb029ca9354a0@mail.gmail.com> Message-ID: On Mon, Nov 24, 2008 at 9:02 PM, David Cournapeau wrote: > On Tue, Nov 25, 2008 at 12:41 PM, Charles R Harris > wrote: > > > > > 1) This problem is specific to 2.6 and 2.5 works. > > Yes > > > 2) It's on Intel hardware? > > Yes. > > Here is a minimal test which shows the problem: > > import numpy as np > assert np.dtype(' So what does dtype(float32).descr and dtype(float32).byteorder show? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Mon Nov 24 23:58:12 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 24 Nov 2008 21:58:12 -0700 Subject: [Numpy-discussion] Numpy on Mac OS X python 2.6 In-Reply-To: References: <5b8d13220811241918v62d26f1cte1adf163ef0a11d5@mail.gmail.com> <5b8d13220811242002p376a4cd6vda7bb029ca9354a0@mail.gmail.com> Message-ID: On Mon, Nov 24, 2008 at 9:38 PM, Charles R Harris wrote: > > > On Mon, Nov 24, 2008 at 9:02 PM, David Cournapeau wrote: > >> On Tue, Nov 25, 2008 at 12:41 PM, Charles R Harris >> wrote: >> > >> >> > 1) This problem is specific to 2.6 and 2.5 works. >> >> Yes >> >> > 2) It's on Intel hardware? >> >> Yes. 
>> >> Here is a minimal test which shows the problem: >> >> import numpy as np >> assert np.dtype('> > > So what does dtype(float32).descr and dtype(float32).byteorder show? > Numpy gets it's byte order from the macro WORDS_BIGENDIAN defined by Python. Try $[charris at f9 numpy.git]$ grep -r -n WORDS_BIGENDIAN /usr/include/python2.5/* /usr/include/python2.5/pyconfig-32.h:902:#define WORDS_BIGENDIAN 1 /usr/include/python2.5/pyconfig-32.h:905:/* #undef WORDS_BIGENDIAN */ or the OS X equivalent. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Tue Nov 25 00:06:56 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 24 Nov 2008 22:06:56 -0700 Subject: [Numpy-discussion] Numpy on Mac OS X python 2.6 In-Reply-To: References: <5b8d13220811241918v62d26f1cte1adf163ef0a11d5@mail.gmail.com> <5b8d13220811242002p376a4cd6vda7bb029ca9354a0@mail.gmail.com> Message-ID: On Mon, Nov 24, 2008 at 9:58 PM, Charles R Harris wrote: > > > On Mon, Nov 24, 2008 at 9:38 PM, Charles R Harris < > charlesr.harris at gmail.com> wrote: > >> >> >> On Mon, Nov 24, 2008 at 9:02 PM, David Cournapeau wrote: >> >>> On Tue, Nov 25, 2008 at 12:41 PM, Charles R Harris >>> wrote: >>> > >>> >>> > 1) This problem is specific to 2.6 and 2.5 works. >>> >>> Yes >>> >>> > 2) It's on Intel hardware? >>> >>> Yes. >>> >>> Here is a minimal test which shows the problem: >>> >>> import numpy as np >>> assert np.dtype('>> >> >> So what does dtype(float32).descr and dtype(float32).byteorder show? >> > > Numpy gets it's byte order from the macro WORDS_BIGENDIAN defined by > Python. Try > > $[charris at f9 numpy.git]$ grep -r -n WORDS_BIGENDIAN > /usr/include/python2.5/* > /usr/include/python2.5/pyconfig-32.h:902:#define WORDS_BIGENDIAN 1 > /usr/include/python2.5/pyconfig-32.h:905:/* #undef WORDS_BIGENDIAN */ > > or the OS X equivalent. > Well, it may not be that easy to figure. The (generated) pyconfig-32.h has /* Define to 1 if your processor stores words with the most significant byte first (like Motorola and SPARC, unlike Intel and VAX). The block below does compile-time checking for endianness on platforms that use GCC and therefore allows compiling fat binaries on OSX by using '-arch ppc -arch i386' as the compile flags. The phrasing was choosen such that the configure-result is used on systems that don't use GCC. */ #ifdef __BIG_ENDIAN__ #define WORDS_BIGENDIAN 1 #else #ifndef __LITTLE_ENDIAN__ /* #undef WORDS_BIGENDIAN */ #endif #endif And I guess that __BIG_ENDIAN__ is a compiler flag, it isn't in any of the include files. In any case, this looks like a Python bug or the Python folks have switched their API on us. Chuck > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ellisonbg.net at gmail.com Tue Nov 25 00:36:56 2008 From: ellisonbg.net at gmail.com (Brian Granger) Date: Mon, 24 Nov 2008 21:36:56 -0800 Subject: [Numpy-discussion] CorePy 1.0 Release (x86, Cell BE, BSD!) In-Reply-To: <550819.41862.qm@web111214.mail.gq1.yahoo.com> References: <550819.41862.qm@web111214.mail.gq1.yahoo.com> Message-ID: <6ce0ac130811242136vb7ce3a0wf01b2f680ab1ae97@mail.gmail.com> Chris, Wow, this is fantastic...both the BSD license and the x86 support. I look forward to playing with this! Cheers, Brian On Mon, Nov 24, 2008 at 7:49 PM, Chris Mueller wrote: > Announcing CorePy 1.0 - http://www.corepy.org > > We are pleased to announce the latest release of CorePy. 
CorePy is a > complete system for developing machine-level programs in Python. > CorePy lets developers build and execute assembly-level programs > interactively from the Python command prompt, embed them directly in > Python applications, or export them to standard assembly languages. > > CorePy's straightforward APIs enable the creation of complex, > high-performance applications that take advantage of processor > features usually inaccessible from high-level scripting languages, > such as multi-core execution and vector instruction sets (SSE, VMX, > SPU). > > This version addresses the two most frequently asked questions about > CorePy: > > 1) Does CorePy support x86 processors? > Yes! CorePy now has extensive support for 32/64-bit x86 and SSE > ISAs on Linux and OS X*. > > 2) Is CorePy Open Source? > Yes! CorePy now uses the standard BSD license. > > Of course, CorePy still supports PowerPC and Cell BE SPU processors. > In fact, for this release, the Cell run-time was redesigned from the > ground up to remove the dependency on IBM's libspe and now uses the > system-level interfaces to work directly with the SPUs (and, CorePy is > still the most fun way to program the PS3). > > CorePy is written almost entirely in Python. Its run-time system > does not rely on any external compilers or assemblers. > > If you have the need to write tight, fast code from Python, want > to demystify machine-level code generation, or just miss the good-old > days of assembly hacking, check out CorePy! > > And, if you don't believe us, here's our favorite user quote: > > "CorePy makes assembly fun again!" > > > __credits__ = """ > CorePy is developed by Chris Mueller, Andrew Friedley, and Ben > Martin and is supported by the Open Systems Lab at Indiana > University. > > Chris can be reached at cmueller[underscore]dev[at]yahoo[dot]com. > """ > > __footnote__ = """ > *Any volunteers for a Windows port? :) > """ > > > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > > From cournapeau at cslab.kecl.ntt.co.jp Tue Nov 25 02:10:51 2008 From: cournapeau at cslab.kecl.ntt.co.jp (David Cournapeau) Date: Tue, 25 Nov 2008 16:10:51 +0900 Subject: [Numpy-discussion] Numpy on Mac OS X python 2.6 In-Reply-To: References: <5b8d13220811241918v62d26f1cte1adf163ef0a11d5@mail.gmail.com> <5b8d13220811242002p376a4cd6vda7bb029ca9354a0@mail.gmail.com> Message-ID: <1227597051.13237.4.camel@bbc8> On Mon, 2008-11-24 at 21:38 -0700, Charles R Harris wrote: > > > On Mon, Nov 24, 2008 at 9:02 PM, David Cournapeau > wrote: > On Tue, Nov 25, 2008 at 12:41 PM, Charles R Harris > wrote: > > > > > 1) This problem is specific to 2.6 and 2.5 works. > > > Yes > > > 2) It's on Intel hardware? > > > Yes. > > Here is a minimal test which shows the problem: > > import numpy as np > assert np.dtype(' > So what does dtype(float32).descr and dtype(float32).byteorder show? The expected: [('', ' References: <5b8d13220811241918v62d26f1cte1adf163ef0a11d5@mail.gmail.com> <5b8d13220811242002p376a4cd6vda7bb029ca9354a0@mail.gmail.com> Message-ID: <1227597580.13237.12.camel@bbc8> On Mon, 2008-11-24 at 22:06 -0700, Charles R Harris wrote: > > > Well, it may not be that easy to figure. The (generated) > pyconfig-32.h has > > /* Define to 1 if your processor stores words with the most > significant byte > first (like Motorola and SPARC, unlike Intel and VAX). 
> > The block below does compile-time checking for endianness on
> platforms
> that use GCC and therefore allows compiling fat binaries on OSX by
> using
> '-arch ppc -arch i386' as the compile flags. The phrasing was
> choosen
> such that the configure-result is used on systems that don't use
> GCC.
> */
> #ifdef __BIG_ENDIAN__
> #define WORDS_BIGENDIAN 1
> #else
> #ifndef __LITTLE_ENDIAN__
> /* #undef WORDS_BIGENDIAN */
> #endif
> #endif
>

Hm, interesting: just by grepping, I do have WORDS_BIGENDIAN defined to
1 on *both* python 2.5 and python 2.6 on Mac OS X (running Intel).
Looking closer, I do have the above code (conditional) in 2.5, but not
in 2.6: it is unconditionally defined to BIGENDIAN on 2.6 !! That's
actually part of something I have wondered about for quite some time with
fat binaries: how do you handle config headers, since they are generated
only once for every fat binary, but they should really be generated for
each arch.

> And I guess that __BIG_ENDIAN__ is a compiler flag, it isn't in any
> of the include files. In any case, this looks like a Python bug or the
> Python folks have switched their API on us.

Hm, actually, it is a bug in numpy as much as in python: python should
NOT include any config.h in their public namespace, and we should not
rely on it.

But with this info, it should be relatively easy to fix (by setting the
correct endianness ourselves with some detection code)

David

From faltet at pytables.org Tue Nov 25 04:15:32 2008
From: faltet at pytables.org (Francesc Alted)
Date: Tue, 25 Nov 2008 10:15:32 +0100
Subject: [Numpy-discussion] Proposal for changing the names of inverse trigonometrical/hyperbolic functions
In-Reply-To: 
References: <200811241945.56313.faltet@pytables.org> 
Message-ID: <200811251015.33341.faltet@pytables.org>

A Monday 24 November 2008, Jarrod Millman escrigué:
> On Mon, Nov 24, 2008 at 10:45 AM, Francesc Alted wrote:
> > So, IMHO, I think it would be better to rename the inverse
> > trigonometric functions from ``arc*`` to ``a*`` prefix. Of course,
> > in order to do that correctly, one should add the new names and add a
> > ``DeprecationWarning`` informing that people should start to use
> > the new names. After two or three NumPy versions, the old function
> > names can be removed safely.
> >
> > What do people think?
>
> +1
> It seems there is a fair amount of favor for adding the new names.
> There is some resistance to removing the old ones. I would be happy
> to deprecate the old ones, but leave them in until we release a new
> major release (i.e., NumPy 2.0.0). We could start creating a list of
> API/ABI clean-ups for whenever we find a compelling reason to release
> a new major version. In the meantime, we can leave the old names in
> and just add a deprecation note to the docs. Once we are ready to
> release 2.0, we can release a 1.x with deprecation warnings.

Sounds like a plan. +1 on this. If there are worries about portability
issues, I'd even leave the old names in 2.0 (with the deprecation
warning, of course), although if the 1.x series are going to live a long
time (say, at least, a year), I don't think this is going to be
necessary.
-- Francesc Alted From pgmdevlist at gmail.com Tue Nov 25 06:26:54 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 25 Nov 2008 06:26:54 -0500 Subject: [Numpy-discussion] Numpy on Mac OS X python 2.6 In-Reply-To: <1227597580.13237.12.camel@bbc8> References: <5b8d13220811241918v62d26f1cte1adf163ef0a11d5@mail.gmail.com> <5b8d13220811242002p376a4cd6vda7bb029ca9354a0@mail.gmail.com> <1227597580.13237.12.camel@bbc8> Message-ID: FYI, I can't reproduce David's failures on my machine (intel core2 duo w/ 10.5.5) * python 2.6 from macports * numpy svn 6098 * GCC 4.0.1 (Apple Inc. build 5488) I have only 1 failure: FAIL: test_umath.TestComplexFunctions.test_against_cmath ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/local/lib/python2.6/site-packages/nose-0.10.4-py2.6.egg/ nose/case.py", line 182, in runTest self.test(*self.arg) File "/Users/pierregm/Computing/.pythonenvs/default26/lib/python2.6/ site-packages/numpy/core/tests/test_umath.py", line 423, in test_against_cmath assert abs(a - b) < atol, "%s %s: %s; cmath: %s"%(fname,p,a,b) AssertionError: arcsin 2: (1.57079632679-1.31695789692j); cmath: (1.57079632679+1.31695789692j) ---------------------------------------------------------------------- (Well, there's another one in numpy.ma.min, but that's a different matter). On Nov 25, 2008, at 2:19 AM, David Cournapeau wrote: > On Mon, 2008-11-24 at 22:06 -0700, Charles R Harris wrote: >> >> >> Well, it may not be that easy to figure. The (generated) >> pyconfig-32.h has >> >> /* Define to 1 if your processor stores words with the most >> significant byte >> first (like Motorola and SPARC, unlike Intel and VAX). >> >> The block below does compile-time checking for endianness on >> platforms >> that use GCC and therefore allows compiling fat binaries on OSX by >> using >> '-arch ppc -arch i386' as the compile flags. The phrasing was >> choosen >> such that the configure-result is used on systems that don't use >> GCC. >> */ >> #ifdef __BIG_ENDIAN__ >> #define WORDS_BIGENDIAN 1 >> #else >> #ifndef __LITTLE_ENDIAN__ >> /* #undef WORDS_BIGENDIAN */ >> #endif >> #endif >> > > Hm, interesting: just by grepping, I do have WORDS_BIGENDIAN defined > to > 1 on *both* python 2.5 and python 2.6 on Mac OS X (running Intel). > Looking closer, I do have the above code (conditional) in 2.5, but not > in 2.6: it is inconditionally defined to BIGENDIAN on 2.6 !! That's > actually part of something I have wondered for quite some time about > fat > binaries: how do you handle config headers, since they are generated > only once for every fat binary, but they should really be generated > for > each arch. > >> And I guess that __BIG_ENDIAN__ is a compiler flag, it isn't in any >> of the include files. In any case, this looks like a Python bug or >> the >> Python folks have switched their API on us. > > Hm, actually, it is a bug in numpy as much as in python: python should > NOT include any config.h in their public namespace, and we should not > rely on it. 
> > But with this info, it should be relatively easy to fix (by setting > the > correct endianness by ourselves with some detection code) > > David > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion From david at ar.media.kyoto-u.ac.jp Tue Nov 25 07:25:21 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 25 Nov 2008 21:25:21 +0900 Subject: [Numpy-discussion] Numpy on Mac OS X python 2.6 In-Reply-To: References: <5b8d13220811241918v62d26f1cte1adf163ef0a11d5@mail.gmail.com> <5b8d13220811242002p376a4cd6vda7bb029ca9354a0@mail.gmail.com> <1227597580.13237.12.camel@bbc8> Message-ID: <492BEEB1.4020201@ar.media.kyoto-u.ac.jp> Pierre GM wrote: > FYI, > I can't reproduce David's failures on my machine (intel core2 duo w/ > 10.5.5) > * python 2.6 from macports > I think that's the main difference. I feel more and more that the problem is linked to fat binaries (more exactly multi arch build in one autoconf run: since only one pyconfig.h is generated for all archs, only one value is defined for CPU specific configurations). On my machine, pyconfig.h has WORDS_BIGENDIAN defined to one, which I can only explain by the binary being built on ppc (unfortunately, I can't find this information from python itself - maybe in the release notes). And that cannot work on Intel. The general solution would be to generate different arch specific config files, and import them conditionally in the main config file. But doing so in a platform-neutral manner is not trivial, David From stefan at sun.ac.za Tue Nov 25 08:17:06 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 25 Nov 2008 15:17:06 +0200 Subject: [Numpy-discussion] PIL.Image.fromarray bug in numpy interface In-Reply-To: <492AEAEF.9040405@noaa.gov> References: <492AC917.9020405@noaa.gov> <492AD6EF.6030100@enthought.com> <492AE3C4.5010301@noaa.gov> <3d375d730811240937i584c9d4dhfbe8b24f5bd22c5@mail.gmail.com> <492AEAEF.9040405@noaa.gov> Message-ID: <9457e7c80811250517y91d3be4s448f002b254e8ac1@mail.gmail.com> 2008/11/24 Chris Barker : > Robert Kern wrote: >>> Jim Vickroy wrote: >>> While using the PIL interface to numpy, I rediscovered a logic error >>> in the PIL.Image.fromarray() procedure. The problem (and a solution) >>> was mentioned earlier at: > >> Tell them that we approve of the change. We don't have commit access >> to PIL, so I believe that our approval is the only reason they could >> possibly send you over here. > > Just for the record, it was me that "sent him over here". I thought it > would be good for a numpy dev to check out the patch for correctnesses > -- it looked like a numpy API issue, and I figured Fredrik wouldn't want > to look too hard at it to determine if it was correct. > > so if you "approval" means you've looked at the fix and think it's > correct, great! I also submitted an issue in 2007: http://mail.python.org/pipermail/image-sig/2007-August/004570.html I recently reminded Frederik, who replied: "The NumPy support was somewhat broken and has been partially rewritten for PIL 1.2; I'll compare those fixes with your patch when I find the time." So, I guess we should try the latest PIL and see if the problems are still there? 
Cheers
Stéfan

From stefan at sun.ac.za Tue Nov 25 08:24:20 2008
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Tue, 25 Nov 2008 15:24:20 +0200
Subject: [Numpy-discussion] working on multiple matrices of the same shape
In-Reply-To: <78f7ab620811240827h69beb75cub17a1d38a1e6695@mail.gmail.com>
References: <78f7ab620811210157l15e14c76r85c7b23a6a010a89@mail.gmail.com> <9457e7c80811240252u48b0481dy194b611fe7865b42@mail.gmail.com> <78f7ab620811240827h69beb75cub17a1d38a1e6695@mail.gmail.com>
Message-ID: <9457e7c80811250524m6f2101dalfec7a58b597c8108@mail.gmail.com>

2008/11/24 Sébastien Barthélemy :
> Are you sure ? Here it reports
> ValueError: setting an array element with a sequence.
> probably because theta, sintheta and costheta are 1-d arrays of n>1 elements.

Sorry, I missed that detail.

Cheers
Stéfan

From matthieu.brucher at gmail.com Tue Nov 25 08:34:09 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Tue, 25 Nov 2008 14:34:09 +0100
Subject: [Numpy-discussion] CorePy 1.0 Release (x86, Cell BE, BSD!)
In-Reply-To: <6ce0ac130811242136vb7ce3a0wf01b2f680ab1ae97@mail.gmail.com>
References: <550819.41862.qm@web111214.mail.gq1.yahoo.com> <6ce0ac130811242136vb7ce3a0wf01b2f680ab1ae97@mail.gmail.com>
Message-ID: 

Exactly what I thought this morning ;) I'm reading your PhD thesis,
Chris, it's great !

Matthieu

2008/11/25 Brian Granger :
> Chris,
>
> Wow, this is fantastic...both the BSD license and the x86 support. I
> look forward to playing with this!
>
> Cheers,
>
> Brian
>
> On Mon, Nov 24, 2008 at 7:49 PM, Chris Mueller wrote:
>> Announcing CorePy 1.0 - http://www.corepy.org
>>
>> We are pleased to announce the latest release of CorePy. CorePy is a
>> complete system for developing machine-level programs in Python.
>> CorePy lets developers build and execute assembly-level programs
>> interactively from the Python command prompt, embed them directly in
>> Python applications, or export them to standard assembly languages.
>>
>> CorePy's straightforward APIs enable the creation of complex,
>> high-performance applications that take advantage of processor
>> features usually inaccessible from high-level scripting languages,
>> such as multi-core execution and vector instruction sets (SSE, VMX,
>> SPU).
>>
>> This version addresses the two most frequently asked questions about
>> CorePy:
>>
>> 1) Does CorePy support x86 processors?
>> Yes! CorePy now has extensive support for 32/64-bit x86 and SSE
>> ISAs on Linux and OS X*.
>>
>> 2) Is CorePy Open Source?
>> Yes! CorePy now uses the standard BSD license.
>>
>> Of course, CorePy still supports PowerPC and Cell BE SPU processors.
>> In fact, for this release, the Cell run-time was redesigned from the
>> ground up to remove the dependency on IBM's libspe and now uses the
>> system-level interfaces to work directly with the SPUs (and, CorePy is
>> still the most fun way to program the PS3).
>>
>> CorePy is written almost entirely in Python. Its run-time system
>> does not rely on any external compilers or assemblers.
>>
>> If you have the need to write tight, fast code from Python, want
>> to demystify machine-level code generation, or just miss the good-old
>> days of assembly hacking, check out CorePy!
>>
>> And, if you don't believe us, here's our favorite user quote:
>>
>> "CorePy makes assembly fun again!"
>>
>>
>> __credits__ = """
>> CorePy is developed by Chris Mueller, Andrew Friedley, and Ben
>> Martin and is supported by the Open Systems Lab at Indiana
>> University.
>> >> Chris can be reached at cmueller[underscore]dev[at]yahoo[dot]com. >> """ >> >> __footnote__ = """ >> *Any volunteers for a Windows port? :) >> """ >> >> >> >> >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at scipy.org >> http://projects.scipy.org/mailman/listinfo/numpy-discussion >> >> > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > -- Information System Engineer, Ph.D. Website: http://matthieu-brucher.developpez.com/ Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn: http://www.linkedin.com/in/matthieubrucher From david at ar.media.kyoto-u.ac.jp Tue Nov 25 08:55:25 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 25 Nov 2008 22:55:25 +0900 Subject: [Numpy-discussion] Numpy on Mac OS X python 2.6 In-Reply-To: <492BEEB1.4020201@ar.media.kyoto-u.ac.jp> References: <5b8d13220811241918v62d26f1cte1adf163ef0a11d5@mail.gmail.com> <5b8d13220811242002p376a4cd6vda7bb029ca9354a0@mail.gmail.com> <1227597580.13237.12.camel@bbc8> <492BEEB1.4020201@ar.media.kyoto-u.ac.jp> Message-ID: <492C03CD.6020301@ar.media.kyoto-u.ac.jp> David Cournapeau wrote: > Pierre GM wrote: > >> FYI, >> I can't reproduce David's failures on my machine (intel core2 duo w/ >> 10.5.5) >> * python 2.6 from macports >> >> > > I think that's the main difference. I feel more and more that the > problem is linked to fat binaries (more exactly multi arch build in one > autoconf run: since only one pyconfig.h is generated for all archs, only > one value is defined for CPU specific configurations). On my machine, > pyconfig.h has WORDS_BIGENDIAN defined to one, which I can only explain > by the binary being built on ppc (unfortunately, I can't find this > information from python itself - maybe in the release notes). And that > cannot work on Intel. Ok, I think I fixed the problem in the dynamic_cpu_configuration branch. I get only two test failures, which appear also on windows and linux (the same as yours). I think the code is OK, but if anyone has two minutes to review it, it would be better before merging it into the trunk. I used the path of least resistance: instead of using the WORDS_BIGENDIAN macro, I added a numpy header which gives the endianness every time it is included. IOW, instead of the endianness to be fixed at numpy build time (which would fail for universal builds), it is set everytime the numpy headers are included (which is the only way to make it work). A better solution IMO would be to avoid any endianness dependency at all in the headers, but that does not seem possible without breaking the API (because the endianness-related macro PyArray_NBO and co would need to be set as functions instead). cheers, David David From rmay31 at gmail.com Tue Nov 25 09:46:58 2008 From: rmay31 at gmail.com (Ryan May) Date: Tue, 25 Nov 2008 08:46:58 -0600 Subject: [Numpy-discussion] More loadtxt() changes Message-ID: Hi, I have a couple more changes to loadtxt() that I'd like to code up in time for 1.3, but I thought I should run them by the list before doing too much work. These are already implemented in some fashion in matplotlib.mlab.csv2rec(), but the code bases are different enough, that pretty much only the idea can be lifted. All of these changes would be done in a manner that is backwards compatible with the current API. 
1) Support for setting the names of fields in the returned structured
array without using dtype. This can be a passed-in list of names, or the
names can be read from the first line of the file. Many files have a
header line that gives a name for each column. Adding this would
obviously make loadtxt much more general and allow for more generic
code, IMO. My current thinking is to add a *names* keyword parameter
that defaults to None, meaning no support for reading names. Setting it
to True would tell loadtxt() to read the names from the first line
(after skiprows). The other option would be to set names to a list of
strings.

2) Support for automatic dtype inference. Instead of assuming all values
are floats, this would try a list of options until one worked. For
strings, this would keep track of the longest string within a given
field before setting the dtype. This would allow reading of files
containing a mixture of types much more easily, without having to go to
the trouble of constructing a full dtype by hand. This would work
alongside any custom converters one passes in. My current thinking on
the API would just be to add the option of passing the string 'auto' as
the dtype parameter.

3) Better support for missing values. The docstring mentions a way of
handling missing values by passing in a converter. The problem with this
is that you have to pass in a converter for *every column* that will
contain missing values. If you have a text file with 50 columns, writing
this dictionary of converters seems like ugly and needless boilerplate.
I'm unsure of how best to pass in both what values indicate missing
values and what values to fill in their place. I'd love suggestions.

Here's an example of my use case (without 50 columns):

ID,First Name,Last Name,Homework1,Homework2,Quiz1,Homework3,Final
1234,Joe,Smith,85,90,,76,
5678,Jane,Doe,65,99,,78,
9123,Joe,Plumber,45,90,,92,

Currently reading in this data requires a bit of boilerplate (declaring
dtypes, converters). While it's nothing I can't write, it still would be
easier to write it once within loadtxt and have it for everyone.

Any support for *any* of these ideas? Any suggestions on how the user
should pass in the information?

Thanks,

Ryan

--
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cournape at gmail.com Tue Nov 25 10:03:29 2008
From: cournape at gmail.com (David Cournapeau)
Date: Wed, 26 Nov 2008 00:03:29 +0900
Subject: [Numpy-discussion] Numpy on Mac OS X python 2.6
In-Reply-To: <492C03CD.6020301@ar.media.kyoto-u.ac.jp>
References: <5b8d13220811241918v62d26f1cte1adf163ef0a11d5@mail.gmail.com> <5b8d13220811242002p376a4cd6vda7bb029ca9354a0@mail.gmail.com> <1227597580.13237.12.camel@bbc8> <492BEEB1.4020201@ar.media.kyoto-u.ac.jp> <492C03CD.6020301@ar.media.kyoto-u.ac.jp>
Message-ID: <5b8d13220811250703r37659331i2596be863617813a@mail.gmail.com>

On Tue, Nov 25, 2008 at 10:55 PM, David Cournapeau wrote:
>
> I used the path of least resistance: instead of using the
> WORDS_BIGENDIAN macro, I added a numpy header which gives the endianness
> every time it is included. IOW, instead of the endianness to be fixed at
> numpy build time (which would fail for universal builds), it is set
> everytime the numpy headers are included (which is the only way to make
> it work).
A better solution IMO would be to avoid any endianness
> dependency at all in the headers, but that does not seem possible
> without breaking the API (because the endianness-related macro
> PyArray_NBO and co would need to be set as functions instead).

Hm, for reference, I came across this:

http://www.mail-archive.com/python-dev at python.org/msg14382.html

So some people thought about the same problem.

David

From charlesr.harris at gmail.com Tue Nov 25 10:59:32 2008
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Tue, 25 Nov 2008 08:59:32 -0700
Subject: [Numpy-discussion] Numpy on Mac OS X python 2.6
In-Reply-To: <5b8d13220811250703r37659331i2596be863617813a@mail.gmail.com>
References: <5b8d13220811241918v62d26f1cte1adf163ef0a11d5@mail.gmail.com> <5b8d13220811242002p376a4cd6vda7bb029ca9354a0@mail.gmail.com> <1227597580.13237.12.camel@bbc8> <492BEEB1.4020201@ar.media.kyoto-u.ac.jp> <492C03CD.6020301@ar.media.kyoto-u.ac.jp> <5b8d13220811250703r37659331i2596be863617813a@mail.gmail.com>
Message-ID: 

On Tue, Nov 25, 2008 at 8:03 AM, David Cournapeau wrote:
> On Tue, Nov 25, 2008 at 10:55 PM, David Cournapeau
> wrote:
> >
> > I used the path of least resistance: instead of using the
> > WORDS_BIGENDIAN macro, I added a numpy header which gives the endianness
> > every time it is included. IOW, instead of the endianness to be fixed at
> > numpy build time (which would fail for universal builds), it is set
> > everytime the numpy headers are included (which is the only way to make
> > it work). A better solution IMO would be to avoid any endianness
> > dependency at all in the headers, but that does not seem possible
> > without breaking the API (because the endianness-related macro
> > PyArray_NBO and co would need to be set as functions instead).
>
> Hm, for reference, I came across this:
>
> http://www.mail-archive.com/python-dev at python.org/msg14382.html
>
> So some people thought about the same problem.
>

Apart from the Mac, the ppc can be configured to run either bigendian or
littleendian, so the hardware encompasses more than just the cpu, it's
the whole darn board.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From stefan at sun.ac.za Tue Nov 25 11:13:28 2008
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Tue, 25 Nov 2008 18:13:28 +0200
Subject: [Numpy-discussion] Bilateral filter
In-Reply-To: <1218023951.26827.10.camel@nadav.envision.co.il>
References: <1217950721.21145.26.camel@nadav.envision.co.il> <4898C134.6060201@enthought.com> <1218023951.26827.10.camel@nadav.envision.co.il>
Message-ID: <9457e7c80811250813h70cb557g27c257b8b40d5161@mail.gmail.com>

Hi Nadav

2008/8/6 Nadav Horesh :
> I made the following modification to the source code, I hope it is ready to
> be included in scipy.
>
> Added a BSD licence declaration.
> Small optimisation.
> The code is split into a cython back-end and a python front-end.
>
> All remarks are welcome,

Thanks for working on a bilateral filter implementation. Some comments:

1. Needs a setup.py file to build the Cython module (simplest possible
is attached)
2. numpy.numarray.nd_image should be scipy.ndimage
3. For inclusion in SciPy, we'll need some tests and preferably some
examples.
4. Docstrings should be in SciPy format.
5. ndarray.h should be numpy/ndarray.h

Thanks for writing this filter; I found it useful!

Cheers
Stéfan
-------------- next part --------------
A non-text attachment was scrubbed...
Name: setup.py
Type: text/x-python
Size: 394 bytes
Desc: not available
URL: 

From perry at stsci.edu Tue Nov 25 11:41:49 2008
From: perry at stsci.edu (Perry Greenfield)
Date: Tue, 25 Nov 2008 11:41:49 -0500
Subject: [Numpy-discussion] Proposal for changing the names of inverse trigonometrical/hyperbolic functions
In-Reply-To: 
References: <200811241945.56313.faltet@pytables.org> 
Message-ID: <00CC2341-CBB8-4588-87F3-47A6CF249030@stsci.edu>

On Nov 24, 2008, at 5:55 PM, Jarrod Millman wrote:
> On Mon, Nov 24, 2008 at 10:45 AM, Francesc Alted wrote:
>> So, IMHO, I think it would be better to rename the inverse
>> trigonometric
>> functions from ``arc*`` to ``a*`` prefix. Of course, in order to do
>> that correctly, one should add the new names and add a
>> ``DeprecationWarning`` informing that people should start to use the
>> new names. After two or three NumPy versions, the old function names
>> can be removed safely.
>>
>> What do people think?
>
> +1
> It seems there is a fair amount of favor for adding the new names.
> There is some resistance to removing the old ones. I would be happy
> to deprecate the old ones, but leave them in until we release a new
> major release (i.e., NumPy 2.0.0). We could start creating a list of
> API/ABI clean-ups for whenever we find a compelling reason to release
> a new major version. In the meantime, we can leave the old names in
> and just add a deprecation note to the docs. Once we are ready to
> release 2.0, we can release a 1.x with deprecation warnings.
>
I tend to favor this approach.

Perry

From cournape at gmail.com Tue Nov 25 11:55:43 2008
From: cournape at gmail.com (David Cournapeau)
Date: Wed, 26 Nov 2008 01:55:43 +0900
Subject: [Numpy-discussion] Numpy on Mac OS X python 2.6
In-Reply-To: 
References: <5b8d13220811241918v62d26f1cte1adf163ef0a11d5@mail.gmail.com> <1227597580.13237.12.camel@bbc8> <492BEEB1.4020201@ar.media.kyoto-u.ac.jp> <492C03CD.6020301@ar.media.kyoto-u.ac.jp> <5b8d13220811250703r37659331i2596be863617813a@mail.gmail.com>
Message-ID: <5b8d13220811250855o58c92dbdr7fd87bcc8dc98f73@mail.gmail.com>

On Wed, Nov 26, 2008 at 12:59 AM, Charles R Harris wrote:
>
> Apart from the Mac, the ppc can be configured to run either bigendian or
> littleendian, so the hardware encompasses more than just the cpu, it's the
> whole darn board.

Yep, many CPU families have double endian support (MIPS, ARM, PA-RISC,
ALPHA). There is also "mixed" endian. Honestly, I think it is safe to
assume that we don't need to care so much about those configurations for
the time being. If it is a problem, we can then discuss our headers
being endian-free (which is really the best approach).

David

From Joris.DeRidder at ster.kuleuven.be Tue Nov 25 12:04:19 2008
From: Joris.DeRidder at ster.kuleuven.be (Joris De Ridder)
Date: Tue, 25 Nov 2008 18:04:19 +0100
Subject: [Numpy-discussion] Proposal for changing the names of inverse trigonometrical/hyperbolic functions
In-Reply-To: <200811241945.56313.faltet@pytables.org>
References: <200811241945.56313.faltet@pytables.org>
Message-ID: <50E6D43A-C863-4D2B-9440-9DD342A3D38C@ster.kuleuven.be>

On 24 Nov 2008, at 19:45, Francesc Alted wrote:

> standards in computer science. For example, where Python writes:
>
> asin, acos, atan, asinh, acosh, atanh
>
> NumPy chose:
>
> arcsin, arccos, arctan, arcsinh, arccosh, arctanh
>
> So, IMHO, I think it would be better to rename the inverse
> trigonometric
> functions from ``arc*`` to ``a*`` prefix.
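(For readers joining the thread, the divergence being debated, in a quick
interactive session; both spellings already exist today, one in the
stdlib and one in numpy, and they compute the same inverse sine:)

>>> import math, numpy
>>> math.asin(1.0)
1.5707963267948966
>>> numpy.arcsin(1.0)
1.5707963267948966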
-1 The current slightly deviating (and in fact more clear) naming convention of Numpy is IMO not even remotely enough reason to break the API. Adding honey by introducing a transition period with a deprecation warning postpones but doesn't avoid breaking the API. Joris Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From pgmdevlist at gmail.com Tue Nov 25 12:14:46 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 25 Nov 2008 12:14:46 -0500 Subject: [Numpy-discussion] More loadtxt() changes In-Reply-To: References: Message-ID: <059C95BB-80E4-4043-87CD-788AC22B9690@gmail.com> Ryan, FYI, I've been coding over the last couple of weeks an extension of loadtxt for a better support of masked data, with the option to read column names in a header. Please find an example below (I also have unittest). Most of the work is actually inspired from matplotlib's mlab.csv2rec. It might be worth not duplicating efforts. Cheers, P. -------------- next part -------------- A non-text attachment was scrubbed... Name: _preview.py Type: text/x-python-script Size: 16095 bytes Desc: not available URL: -------------- next part -------------- On Nov 25, 2008, at 9:46 AM, Ryan May wrote: > Hi, > > I have a couple more changes to loadtxt() that I'd like to code up > in time for 1.3, but I thought I should run them by the list before > doing too much work. These are already implemented in some fashion > in matplotlib.mlab.csv2rec(), but the code bases are different > enough, that pretty much only the idea can be lifted. All of these > changes would be done in a manner that is backwards compatible with > the current API. > > 1) Support for setting the names of fields in the returned > structured array without using dtype. This can be a passed in list > of names or reading the names of fields from the first line of the > file. Many files have a header line that gives a name for each > column. Adding this would obviously make loadtxt much more general > and allow for more generic code, IMO. My current thinking is to add > a *name* keyword parameter that defaults to None, for no support for > reading names. Setting it to True would tell loadtxt() to read the > names from the first line (after skiprows). The other option would > be to set names to a list of strings. > > 2) Support for automatic dtype inference. Instead of assuming all > values are floats, this would try a list of options until one > worked. For strings, this would keep track of the longest string > within a given field before setting the dtype. This would allow > reading of files containing a mixture of types much more easily, > without having to go to the trouble of constructing a full dtype by > hand. This would work alongside any custom converters one passes > in. My current thinking of API would just be to add the option of > passing the string 'auto' as the dtype parameter. > > 3) Better support for missing values. The docstring mentions a way > of handling missing values by passing in a converter. The problem > with this is that you have to pass in a converter for *every column* > that will contain missing values. If you have a text file with 50 > columns, writing this dictionary of converters seems like ugly and > needless boilerplate. I'm unsure of how best to pass in both what > values indicate missing values and what values to fill in their > place. 
I'd love suggestions.
>
> Here's an example of my use case (without 50 columns):
>
> ID,First Name,Last Name,Homework1,Homework2,Quiz1,Homework3,Final
> 1234,Joe,Smith,85,90,,76,
> 5678,Jane,Doe,65,99,,78,
> 9123,Joe,Plumber,45,90,,92,
>
> Currently reading in this data requires a bit of boilerplate
> (declaring dtypes, converters). While it's nothing I can't write,
> it still would be easier to write it once within loadtxt and have it
> for everyone.
>
> Any support for *any* of these ideas? Any suggestions on how the
> user should pass in the information?
>
> Thanks,
>
> Ryan
>
> --
> Ryan May
> Graduate Research Assistant
> School of Meteorology
> University of Oklahoma
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion

From pgmdevlist at gmail.com Tue Nov 25 12:23:33 2008
From: pgmdevlist at gmail.com (Pierre GM)
Date: Tue, 25 Nov 2008 12:23:33 -0500
Subject: [Numpy-discussion] in(np.nan) on python 2.6
Message-ID: <0866E45B-F0D1-4076-8D91-697F0A5D99D3@gmail.com>

All,
Sorry to bump my own post, and I was kinda threadjacking anyway:

Some functions of numpy.ma (e.g., ma.max, ma.min...) accept explicit
outputs that may not be MaskedArrays. When such an explicit output is not
a MaskedArray, a value that should have been masked is transformed into
np.nan.

That worked great in 2.5, with np.nan automatically transformed to 0 when
the explicit output had an int dtype. With Python 2.6, a ValueError is
raised instead, as np.nan can no longer be cast to int.

What should be the recommended behavior in this case? Raise a ValueError
or some other exception, to follow the new Python 2.6 convention, or
silently replace np.nan with some value acceptable for an int dtype (0,
or something else)?

Thanks for any suggestion,
P.

From Chris.Barker at noaa.gov Tue Nov 25 12:30:10 2008
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Tue, 25 Nov 2008 09:30:10 -0800
Subject: [Numpy-discussion] More loadtxt() changes
In-Reply-To: <059C95BB-80E4-4043-87CD-788AC22B9690@gmail.com>
References: <059C95BB-80E4-4043-87CD-788AC22B9690@gmail.com>
Message-ID: <492C3622.8020908@noaa.gov>

Pierre GM wrote:
> FYI, I've been coding over the last couple of weeks an extension of
> loadtxt for a better support of masked data, with the option to read
> column names in a header. Please find an example below

Great, thanks! This could be very useful to me. Two comments:

"""
missing : string, optional
A string representing a missing value, irrespective of the
column where it appears (e.g., ``'missing'`` or ``'unused'``).
"""

It might be nice if "missing" could be a sequence of strings, for when
there is more than one marker for missing values that is not clearly
mapped to a particular field.

"""
missing_values : {None, dictionary}, optional
A dictionary mapping a column number to a string indicating
whether the corresponding field should be masked.
"""

Would it be possible to specify a column header, rather than a number,
here?

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception

Chris.Barker at noaa.gov

From pgmdevlist at gmail.com Tue Nov 25 13:16:31 2008
From: pgmdevlist at gmail.com (Pierre GM)
Date: Tue, 25 Nov 2008 13:16:31 -0500
Subject: [Numpy-discussion] More loadtxt() changes
In-Reply-To: <492C3622.8020908@noaa.gov>
References: <059C95BB-80E4-4043-87CD-788AC22B9690@gmail.com> <492C3622.8020908@noaa.gov>
Message-ID: <0EB1554B-AF4C-4023-A2A9-BE0D7EEF874F@gmail.com>

On Nov 25, 2008, at 12:30 PM, Christopher Barker wrote:
>
> """
> missing : string, optional
> A string representing a missing value, irrespective of the
> column where it appears (e.g., ``'missing'`` or ``'unused'``).
> """
>
> It might be nice if "missing" could be a sequence of strings, for when
> there is more than one marker for missing values that is not clearly
> mapped to a particular field.

OK, easy enough.

> """
> missing_values : {None, dictionary}, optional
> A dictionary mapping a column number to a string indicating
> whether the corresponding field should be masked.
> """
>
> Would it be possible to specify a column header, rather than a number,
> here?

A la mlab.csv2rec ? It could work with a bit more tweaking, basically
following John Hunter's et al. path. What happens when the column
names are unknown (read from the header) or wrong ?

Actually, I'd like John to comment on that, hence the CC. More
generally, wouldn't it be useful to push the recarray-manipulating
functions from matplotlib.mlab to numpy ?

From Chris.Barker at noaa.gov Tue Nov 25 13:30:32 2008
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Tue, 25 Nov 2008 10:30:32 -0800
Subject: [Numpy-discussion] More loadtxt() changes
In-Reply-To: <0EB1554B-AF4C-4023-A2A9-BE0D7EEF874F@gmail.com>
References: <059C95BB-80E4-4043-87CD-788AC22B9690@gmail.com> <492C3622.8020908@noaa.gov> <0EB1554B-AF4C-4023-A2A9-BE0D7EEF874F@gmail.com>
Message-ID: <492C4448.2070907@noaa.gov>

Pierre GM wrote:
>> Would it be possible to specify a column header, rather than a number,
>> here?
>
> A la mlab.csv2rec ?

I'll have to take a look at that.

> following John Hunter's et al. path. What happens when the column
> names are unknown (read from the header) or wrong ?

Well, my use case is that I don't know column numbers, but I do know
column headers, and what "missing" value is associated with a given
header. You have to know something! If the header is wrong, you get an
error, though we may need to decide what "wrong" means.

In my case, I'm dealing with data that has pre-specified headers (and I
think missing values that go with them), but in any given file I don't
know which of those columns is there. I want to read it in, and be able
to query the result for what data it has.

> Actually, I'd like John to comment on that, hence the CC.

I don't see a CC, but yes, it would be nice to get his input.

> More
> generally, wouldn't it be useful to push the recarray-manipulating
> functions from matplotlib.mlab to numpy ?

I think so -- or scipy. I'd really like MPL to be about plotting, and
only plotting.

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception

Chris.Barker at noaa.gov

From doutriaux1 at llnl.gov Tue Nov 25 13:55:54 2008
From: doutriaux1 at llnl.gov (=?UTF-8?Q?Charles_=D8=B3=D9=85=D9=8A=D8=B1_Doutriaux?=)
Date: Tue, 25 Nov 2008 10:55:54 -0800
Subject: [Numpy-discussion] numpy.ma.sort failing with bus error
In-Reply-To: <4F1FAC15-2369-4E00-B6B0-430E9E763963@gmail.com>
References: <4F1FAC15-2369-4E00-B6B0-430E9E763963@gmail.com>
Message-ID: 

Thx Pierre, don't worry about it, it's not a show stopper at all

C.

On Nov 24, 2008, at 12:04 PM, Pierre GM wrote:

> Charles,
> Confirmed on my machine...
> I'm gonna have to clean ma.sort, as there are indeed some temporaries
> that probably don't need to be created. I must warn you however that I
> won't have a lot of time to spend on that in the next few days. In any
> case, of course, I'll keep you posted.
> Thx for reporting!
>
>
> On Nov 24, 2008, at 12:03 PM, Charles سمير Doutriaux wrote:
>
>> I mistyped the second line of the sample failing script;
>> it should obviously read:
>> a=numpy.ma.ones((16800,60,96),'f')
>> not numpy.ma.sort((16800,60,96),'f')
>>
>> C.
>>
>> On Nov 24, 2008, at 8:40 AM, Charles سمير Doutriaux wrote:
>>
>>> Hello,
>>>
>>> Using numpy 1.2.1 on a mac os 10.5
>>>
>>>
>>> I admit the user was sort of stretching the limits but (on his
>>> machine)
>>>
>>> import numpy
>>> a=numpy.ones((16800,60,96),'f')
>>> numpy.sort(a,axis=0)
>>>
>>> works
>>>
>>> import numpy.ma
>>> a=numpy.ma.sort((16800,60,96),'f')
>>> numpy.ma.sort(a,axis=0)
>>>
>>> failed with some malloc error:
>>> python(435) malloc: *** mmap(size=2097152) failed (error code=12)
>>> *** error: can't allocate region
>>> *** set a breakpoint in malloc_error_break to debug
>>> Bus error
>>>
>>> Since there's no mask I don't really see how much more memory it's
>>> using. Besides, changing 16800 to 15800 still fails (and now that
>>> should be using much less memory)
>>>
>>> Anyhow I would expect a nicer error than a bus error :)
>>>
>>> Thx,
>>>
>>> C>
>>>
>>>
>>>
>>> _______________________________________________
>>> Numpy-discussion mailing list
>>> Numpy-discussion at scipy.org
>>> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>>>
>>
>> _______________________________________________
>> Numpy-discussion mailing list
>> Numpy-discussion at scipy.org
>> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion

From rmay31 at gmail.com Tue Nov 25 14:06:30 2008
From: rmay31 at gmail.com (Ryan May)
Date: Tue, 25 Nov 2008 13:06:30 -0600
Subject: [Numpy-discussion] More loadtxt() changes
In-Reply-To: <059C95BB-80E4-4043-87CD-788AC22B9690@gmail.com>
References: <059C95BB-80E4-4043-87CD-788AC22B9690@gmail.com>
Message-ID: <492C4CB6.90604@gmail.com>

Pierre GM wrote:
> Ryan,
> FYI, I've been coding over the last couple of weeks an extension of
> loadtxt for a better support of masked data, with the option to read
> column names in a header. Please find an example below (I also have
> unittest). Most of the work is actually inspired from matplotlib's
> mlab.csv2rec. It might be worth not duplicating efforts.
> Cheers,
> P.

Absolutely! Definitely don't want to duplicate effort here. What I see
here meets a lot of what I was looking for.
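(A small aside that makes question 1 below concrete: the difference
between dictionary-style access on a plain structured array and
attribute access on a recarray view. This is standard numpy, nothing
specific to the code under discussion; the field names are made up.)

import numpy as np

a = np.array([(1234, 85.0)], dtype=[('id', int), ('hw1', float)])
print a['id']             # structured array: field access by name only
r = a.view(np.recarray)
print r.id                # a recarray view adds attribute-style access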
Here are some questions:

1) It looks like the function returns a structured array rather than a
rec array, so that fields are obtained by doing a dictionary access.
Since it's a dictionary access, is there any reason that the header
needs to be munged to replace characters and reserved names? IIUC,
csv2rec changes names b/c it returns a rec array, which uses attribute
lookup and hence all names need to be valid python identifiers. This is
not the case for a structured array.

2) Can we avoid the use of seek() in here? I just posted a patch to
change the check to readline, which was the only file function used
previously. This allowed the direct use of a file-like object returned
by urllib2.urlopen().

3) In order to avoid breaking backwards compatibility, can we change
the default for dtype to be float32, and instead use some kind of
special value ('auto' ?) to use the automatic dtype determination?

I'm currently cooking up some of these changes myself, but thought I
would see what you thought first.

Ryan

--
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma

From pgmdevlist at gmail.com Tue Nov 25 14:25:17 2008
From: pgmdevlist at gmail.com (Pierre GM)
Date: Tue, 25 Nov 2008 14:25:17 -0500
Subject: [Numpy-discussion] More loadtxt() changes
In-Reply-To: <492C4CB6.90604@gmail.com>
References: <059C95BB-80E4-4043-87CD-788AC22B9690@gmail.com> <492C4CB6.90604@gmail.com>
Message-ID: <8B81831A-483D-4AC0-9BA4-447CC4ED254C@gmail.com>

On Nov 25, 2008, at 2:06 PM, Ryan May wrote:
>
> 1) It looks like the function returns a structured array rather than a
> rec array, so that fields are obtained by doing a dictionary access.
> Since it's a dictionary access, is there any reason that the header
> needs to be munged to replace characters and reserved names? IIUC,
> csv2rec changes names b/c it returns a rec array, which uses attribute
> lookup and hence all names need to be valid python identifiers. This is
> not the case for a structured array.

Personally, I prefer flexible ndarrays to recarrays, hence the output.
However, I still think that names should be as clean as possible to
avoid bad surprises down the road.

>
> 2) Can we avoid the use of seek() in here? I just posted a patch to
> change the check to readline, which was the only file function used
> previously. This allowed the direct use of a file-like object returned
> by urllib2.urlopen().

I coded that a couple of weeks ago, before you posted your patch, and I
didn't have time to check it. Yes, we could try getting rid of seek.
However, we need to find a way to rewind to the beginning of the file
if the dtypes are not given in input (as we parsed the whole file to
find the best converter in that case).

> 3) In order to avoid breaking backwards compatibility, can we change
> the default for dtype to be float32, and instead use some kind of
> special value ('auto' ?) to use the automatic dtype determination?

I'm not especially concerned w/ backwards compatibility, because we're
supporting masked values (something that np.loadtxt shouldn't have to
worry about). Initially, I needed a replacement for the fromfile
function in the scikits.timeseries.trecords package. I figured it'd be
easier and more portable to get a function for generic masked arrays,
that could be adapted afterwards to timeseries. In any case, I was more
considering the functions I sent you to be part of some numpy.ma.io
module than a replacement to np.loadtxt.
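(To make the masked-value handling concrete, a sketch of what such a
reader would hand back for a column containing an empty field; the
values are made up, only the numpy.ma behaviour shown is real:)

import numpy.ma as ma

final = ma.array([76., 78., 0.], mask=[False, False, True])
print final.mean()        # 77.0 -- the masked (missing) entry is ignored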
I tried to get the syntax as close as possible to np.loadtxt and mlab.csv2rec, but there'll always be some differences. So, yes, we could try to use a default dtype=float and yes, we could have an extra parameter 'auto'. But is it really that useful ? I'm not sure (well, no, I'm sure it's not...)

> I'm currently cooking up some of these changes myself, but thought I
> would see what you thought first.

From jdh2358 at gmail.com Tue Nov 25 14:26:53 2008
From: jdh2358 at gmail.com (John Hunter)
Date: Tue, 25 Nov 2008 13:26:53 -0600
Subject: [Numpy-discussion] More loadtxt() changes
In-Reply-To: <0EB1554B-AF4C-4023-A2A9-BE0D7EEF874F@gmail.com>
References: <059C95BB-80E4-4043-87CD-788AC22B9690@gmail.com> <492C3622.8020908@noaa.gov> <0EB1554B-AF4C-4023-A2A9-BE0D7EEF874F@gmail.com>
Message-ID: <88e473830811251126v127ef295g94dd24089779180f@mail.gmail.com>

On Tue, Nov 25, 2008 at 12:16 PM, Pierre GM wrote:

> A la mlab.csv2rec ? It could work with a bit more tweaking, basically
> following John Hunter's et al. path. What happens when the column names are
> unknown (read from the header) or wrong ?
>
> Actually, I'd like John to comment on that, hence the CC. More generally,
> wouldn't it be useful to push the recarray-manipulating functions from
> matplotlib.mlab to numpy ?

Yes, I've said on a number of occasions I'd like to see these functions in numpy, since a number of them make more sense as numpy methods than as stand-alone functions.

> What happens when the column names are unknown (read from the header) or wrong ?

I'm not quite sure what you are looking for here. Either the user will have to know the correct column name or the column number, or you should raise an error. I think supporting column names everywhere they make sense is critical, since this is how most people think about these CSV-like files with column headers.

One other thing that is essential for me is that date support is included. Virtually every CSV file I work with has date data in it, in a variety of formats, and I depend on csv2rec (via dateutil.parser.parse, which mpl ships) to be able to handle it w/o any extra cognitive overhead, albeit at the expense of some performance overhead, but my files aren't too big. I'm not sure how numpy would handle the date parsing aspect, but this came up in the date datatype PEP discussion I think. For me, having to manually specify a date converter with the proper format string every time I load a CSV file is probably not viable.

Another feature that is critical to me is to be able to get an np.recarray back instead of a plain structured array. I use these all day long, and the convenience of r.date over r['date'] is too much for me to give up. Feel free to ignore these suggestions if they are too burdensome or not appropriate for numpy -- I'm just letting you know some of the things I need to see before I personally would stop using mlab.csv2rec and use numpy.loadtxt instead.

One last thing, I consider the masked array support in csv2rec somewhat broken, because when using a masked array you cannot get at the data (e.g. datetime methods or string methods) directly using the same interface that regular recarrays use. Pierre, last I brought this up you asked for some example code and indicated a willingness to work on it, but I fell behind and never posted it. The code illustrating the problem is below.
I'm really not sure what the right solution is, but the current implementation -- sometimes returning a plain-vanilla rec array, sometimes returning a masked record array -- with different interfaces is not good. Perhaps the best solution is to force the user to ask for masked support, and then always return a masked array whether any of the data is masked or not. csv2rec conditionally returns a masked array only if some of the data are masked, which makes it difficult to use.

JDH

Here is the problem I referred to above -- in f1 none of the rows are masked, and so I can access the object attributes from the rows directly. In the 2nd example, row 3 has some missing data, so I get an mrecords recarray back, which does not allow me to directly access the valid data methods.

from StringIO import StringIO
import matplotlib.mlab as mlab

f1 = StringIO("""\
date,name,age,weight
2008-10-12,'Bill',22,125.
2008-10-13,'Tom',23,135.
2008-10-14,'Sally',23,145.""")

r1 = mlab.csv2rec(f1)
row0 = r1[0]
print row0.date.year, row0.name.upper()

f2 = StringIO("""\
date,name,age,weight
2008-10-12,'Bill',22,125.
2008-10-13,'Tom',23,135.
2008-10-14,'',,145.""")

r2 = mlab.csv2rec(f2)
row0 = r2[0]
print row0.date.year, row0.name.upper()

From rmay31 at gmail.com Tue Nov 25 14:37:42 2008
From: rmay31 at gmail.com (Ryan May)
Date: Tue, 25 Nov 2008 13:37:42 -0600
Subject: [Numpy-discussion] More loadtxt() changes
In-Reply-To: <8B81831A-483D-4AC0-9BA4-447CC4ED254C@gmail.com>
References: <059C95BB-80E4-4043-87CD-788AC22B9690@gmail.com> <492C4CB6.90604@gmail.com> <8B81831A-483D-4AC0-9BA4-447CC4ED254C@gmail.com>
Message-ID: <492C5406.5040302@gmail.com>

Pierre GM wrote:
> On Nov 25, 2008, at 2:06 PM, Ryan May wrote:
>> 1) It looks like the function returns a structured array rather than a
>> rec array, so that fields are obtained by doing a dictionary access.
>> Since it's a dictionary access, is there any reason that the header
>> needs to be munged to replace characters and reserved names? IIUC,
>> csv2rec changes names b/c it returns a rec array, which uses attribute
>> lookup and hence all names need to be valid python identifiers. This is
>> not the case for a structured array.
>
> Personally, I prefer flexible ndarrays to recarrays, hence the output.
> However, I still think that names should be as clean as possible to
> avoid bad surprises down the road.

Ok, I'm not really partial to this, I just thought it would simplify. Your point is valid.

>> 2) Can we avoid the use of seek() in here? I just posted a patch to
>> change the check to readline, which was the only file function used
>> previously. This allowed the direct use of a file-like object returned
>> by urllib2.urlopen().
>
> I coded that a couple of weeks ago, before you posted your patch, and I
> didn't have time to check it. Yes, we could try getting rid of seek.
> However, we need to find a way to rewind to the beginning of the file
> if the dtypes are not given as input (as we parsed the whole file to
> find the best converter in that case).

What about doing the parsing and type inference in a loop and holding onto the already split lines? Then loop through the lines with the converters that were finally chosen? In addition to making my use case work, this has the benefit of not doing the I/O twice.

>> 3) In order to avoid breaking backwards compatibility, can we change
>> the default for dtype to be float32, and instead use some kind of special
>> value ('auto' ?) to trigger the automatic dtype determination?
> I'm not especially concerned w/ backwards compatibility, because we're
> supporting masked values (something that np.loadtxt shouldn't have to
> worry about). Initially, I needed a replacement for the fromfile
> function in the scikits.timeseries.trecords package. I figured it'd be
> easier and more portable to get a function for generic masked arrays,
> that could be adapted afterwards to timeseries. In any case, I was
> more considering the functions I sent you to be part of some
> numpy.ma.io module than a replacement to np.loadtxt. I tried to get
> the syntax as close as possible to np.loadtxt and mlab.csv2rec, but
> there'll always be some differences.
>
> So, yes, we could try to use a default dtype=float and yes, we could
> have an extra parameter 'auto'. But is it really that useful ? I'm not
> sure (well, no, I'm sure it's not...)

I understand you're not concerned with backwards compatibility, but with the exception of missing-value handling, which is probably specific to masked arrays, I was hoping to just add functionality to loadtxt(). Numpy doesn't need a separate text reader for most of this, and breaking API for any of this is likely a non-starter. So while, yes, having float be the default dtype is probably not the most useful, leaving it also doesn't break existing code.

--
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma

From pgmdevlist at gmail.com Tue Nov 25 15:01:17 2008
From: pgmdevlist at gmail.com (Pierre GM)
Date: Tue, 25 Nov 2008 15:01:17 -0500
Subject: [Numpy-discussion] More loadtxt() changes
In-Reply-To: <88e473830811251126v127ef295g94dd24089779180f@mail.gmail.com>
References: <059C95BB-80E4-4043-87CD-788AC22B9690@gmail.com> <492C3622.8020908@noaa.gov> <0EB1554B-AF4C-4023-A2A9-BE0D7EEF874F@gmail.com> <88e473830811251126v127ef295g94dd24089779180f@mail.gmail.com>
Message-ID:

On Nov 25, 2008, at 2:26 PM, John Hunter wrote:
>
> Yes, I've said on a number of occasions I'd like to see these
> functions in numpy, since a number of them make more sense as numpy
> methods than as stand-alone functions.

Great. Could we think about getting that in for 1.3.x, would you have time ? Or should we wait till early Jan.?

> One other thing that is essential for me is that date support is included.

As I mentioned in an earlier post, I needed to get a replacement for a function in scikits.timeseries, where we do need dates, but I also needed something not too specific for numpy.ma. So I thought about extracting the conversion methods from the bulk of the function and creating this new object, StringConverter, that takes care of the conversion. If you need to add date support, the simplest is to extend your StringConverter to take the date/datetime functions just after you import _preview (or numpy.ma.io if we go that path):

>>> dateparser = dateutil.parser.parse
>>> # Update the StringConverter mapper, so that date-like columns are
>>> # automatically converted
>>> _preview.StringConverter.mapper.insert(-1, (dateparser, datetime.date(2000, 1, 1)))

That way, if a date is found in one of the columns, it'll be converted appropriately. Seems to work pretty well for scikits.timeseries; I'll try to post that in the next couple of weeks (once I've ironed out some of the numpy.ma bugs...)

> Another feature that is critical to me is to be able to get an
> np.recarray back instead of a plain structured array. I use these all
> day long, and the convenience of r.date over r['date'] is too much for
> me to give up.

No problem: just take a view once you've got your output.
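A minimal sketch of that view idiom, with made-up field names:

import numpy as np

# a flexible (structured) ndarray, like the one Pierre's function returns
arr = np.array([(1, 2.5), (2, 3.5)], dtype=[('id', int), ('val', float)])

rec = arr.view(np.recarray)   # same data, no copy
print rec.id, rec.val         # attribute access, csv2rec-style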
I thought about adding yet another parameter that'd take care of that directly, but then we'd end up with far too many keywords...

> One last thing, I consider the masked array support in csv2rec
> somewhat broken, because when using a masked array you cannot get at
> the data (e.g. datetime methods or string methods) directly using the
> same interface that regular recarrays use.

Well, it's more mrecords which is broken. I committed some fix a little while back, but it might not be very robust. I need to check that w/ your example.

> Perhaps the best solution is to force the user to ask for masked
> support, and then always return a masked array whether any of the data
> is masked or not. csv2rec conditionally returns a masked array only
> if some of the data are masked, which makes it difficult to use.

Forcing to a flexible masked array would make good sense if we pushed that function into numpy.ma.io. I don't think we should overload np.loadtxt too much anyway...

On Nov 25, 2008, at 2:37 PM, Ryan May wrote:
>
> What about doing the parsing and type inference in a loop and holding
> onto the already split lines? Then loop through the lines with the
> converters that were finally chosen? In addition to making my use case
> work, this has the benefit of not doing the I/O twice.

You mean, filling a list and relooping on it if we need to ? Sounds like a plan, but doesn't it create some extra temporaries we may not want ?

> I understand you're not concerned with backwards compatibility, but
> with the exception of missing-value handling, which is probably specific
> to masked arrays, I was hoping to just add functionality to loadtxt().
> Numpy doesn't need a separate text reader for most of this, and breaking
> API for any of this is likely a non-starter. So while, yes, having float
> be the default dtype is probably not the most useful, leaving it also
> doesn't break existing code.

Depends on how we do it. We could have a modified np.loadtxt that takes some of the ideas of the file I sent you (the StringConverter, for example), then I could have a numpy.ma.io that would take care of the missing data. And something in scikits.timeseries for the dates...

The new np.loadtxt could use the default of the initial one, or we could create yet another function (np.loadfromtxt) that would match what I was suggesting, and np.loadtxt would be a special stripped-down case with dtype=float by default.

thoughts?

From rmay31 at gmail.com Tue Nov 25 15:13:56 2008
From: rmay31 at gmail.com (Ryan May)
Date: Tue, 25 Nov 2008 14:13:56 -0600
Subject: [Numpy-discussion] More loadtxt() changes
In-Reply-To:
References: <059C95BB-80E4-4043-87CD-788AC22B9690@gmail.com> <492C3622.8020908@noaa.gov> <0EB1554B-AF4C-4023-A2A9-BE0D7EEF874F@gmail.com> <88e473830811251126v127ef295g94dd24089779180f@mail.gmail.com>
Message-ID: <492C5C84.2050104@gmail.com>

> On Nov 25, 2008, at 2:37 PM, Ryan May wrote:
>> What about doing the parsing and type inference in a loop and holding
>> onto the already split lines? Then loop through the lines with the
>> converters that were finally chosen? In addition to making my use case
>> work, this has the benefit of not doing the I/O twice.
>
> You mean, filling a list and relooping on it if we need to ? Sounds
> like a plan, but doesn't it create some extra temporaries we may not
> want ?

It shouldn't create any *extra* temporaries, since we already make a list of lists before creating the final array. It just introduces an extra looping step. (I'd reuse the existing list of lists).
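A rough sketch of the two-pass scheme being discussed here, reusing names from Pierre's posted file; the import is hypothetical and depends on where StringConverter ends up living:

from StringIO import StringIO
from _preview import StringConverter  # Pierre's posted module (assumed)

fh = StringIO("1 2.5 abc\n2 3.5 def\n")
converters = [StringConverter() for _ in range(3)]

rows = []
for line in fh:
    vals = line.split()
    # First pass: cache the split values and let each converter
    # upgrade itself based on the raw strings it sees.
    for converter, val in zip(converters, vals):
        converter.upgrade(val)
    rows.append(vals)

# Second pass: reuse the cached rows, so the file is read only once
# and no seek()/rewind is ever needed.
data = [tuple(conv(val) for conv, val in zip(converters, vals))
        for vals in rows]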
> Depends on how we do it. We could have a modified np.loadtxt that
> takes some of the ideas of the file I sent you (the StringConverter,
> for example), then I could have a numpy.ma.io that would take care of
> the missing data. And something in scikits.timeseries for the dates...
>
> The new np.loadtxt could use the default of the initial one, or we
> could create yet another function (np.loadfromtxt) that would match
> what I was suggesting, and np.loadtxt would be a special stripped-down
> case with dtype=float by default.
>
> thoughts?

My personal opinion is that, if it doesn't make loadtxt too unwieldy, we should just add a few of the options to loadtxt() itself. I'm working on tweaking loadtxt() to add the auto dtype and the names, relying heavily on your StringConverter class (nice code btw.). If my understanding of StringConverter is correct, tweaking the new loadtxt for ma or timeseries would only require passing in modified versions of StringConverter.

I'll post that when I'm done and we can see if it looks like too much functionality stapled together or not.

Ryan

--
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma

From pgmdevlist at gmail.com Tue Nov 25 15:28:46 2008
From: pgmdevlist at gmail.com (Pierre GM)
Date: Tue, 25 Nov 2008 15:28:46 -0500
Subject: [Numpy-discussion] More loadtxt() changes
In-Reply-To: <492C5C84.2050104@gmail.com>
References: <059C95BB-80E4-4043-87CD-788AC22B9690@gmail.com> <492C3622.8020908@noaa.gov> <0EB1554B-AF4C-4023-A2A9-BE0D7EEF874F@gmail.com> <88e473830811251126v127ef295g94dd24089779180f@mail.gmail.com> <492C5C84.2050104@gmail.com>
Message-ID:

> It shouldn't create any *extra* temporaries, since we already make a
> list of lists before creating the final array. It just introduces an
> extra looping step. (I'd reuse the existing list of lists).

Cool then, go for it.

> If my understanding of
> StringConverter is correct, tweaking the new loadtxt for ma or
> timeseries would only require passing in modified versions of
> StringConverter.

Nope, we still need to double-check whether there's any missing data in any field of the line we process, independently of the conversion. So there must be some extra loop involved, and I'd need a special function in numpy.ma to take care of that.
> So our options are
> * create a new function in numpy.ma and leave np.loadtxt like that
> * write a new np.loadtxt incorporating most of the ideas of the code I
> sent, but I'd still need to adapt it to support masked values.

You couldn't run this loop on the array returned by np.loadtxt() (by masking on the appropriate fill value)?

>> I'll post that when I'm done and we can see if it looks like too much
>> functionality stapled together or not.
>
> Sounds like a plan. Wouldn't mind getting more feedback from fellow
> users before we get too deep, however...

Agreed. Anyone?

--
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma

From pgmdevlist at gmail.com Tue Nov 25 16:04:36 2008
From: pgmdevlist at gmail.com (Pierre GM)
Date: Tue, 25 Nov 2008 16:04:36 -0500
Subject: [Numpy-discussion] More loadtxt() changes
In-Reply-To: <492C6119.2030301@gmail.com>
References: <059C95BB-80E4-4043-87CD-788AC22B9690@gmail.com> <492C3622.8020908@noaa.gov> <0EB1554B-AF4C-4023-A2A9-BE0D7EEF874F@gmail.com> <88e473830811251126v127ef295g94dd24089779180f@mail.gmail.com> <492C5C84.2050104@gmail.com> <492C6119.2030301@gmail.com>
Message-ID:

On Nov 25, 2008, at 3:33 PM, Ryan May wrote:
>
> You couldn't run this loop on the array returned by np.loadtxt() (by
> masking on the appropriate fill value)?

Yet an extra loop... Doable, yes... But meh.

From jdh2358 at gmail.com Tue Nov 25 16:56:08 2008
From: jdh2358 at gmail.com (John Hunter)
Date: Tue, 25 Nov 2008 15:56:08 -0600
Subject: [Numpy-discussion] More loadtxt() changes
In-Reply-To:
References: <059C95BB-80E4-4043-87CD-788AC22B9690@gmail.com> <492C3622.8020908@noaa.gov> <0EB1554B-AF4C-4023-A2A9-BE0D7EEF874F@gmail.com> <88e473830811251126v127ef295g94dd24089779180f@mail.gmail.com>
Message-ID: <88e473830811251356t41218626t5dd5e62027418918@mail.gmail.com>

On Tue, Nov 25, 2008 at 2:01 PM, Pierre GM wrote:
>
> On Nov 25, 2008, at 2:26 PM, John Hunter wrote:
>>
>> Yes, I've said on a number of occasions I'd like to see these
>> functions in numpy, since a number of them make more sense as numpy
>> methods than as stand-alone functions.
>
> Great. Could we think about getting that in for 1.3.x, would you have
> time ? Or should we wait till early Jan.?

I wasn't volunteering to do it, just that I support the migration if someone else wants to do it.

I'm fully committed with mpl already...

JDH

From pgmdevlist at gmail.com Tue Nov 25 16:59:56 2008
From: pgmdevlist at gmail.com (Pierre GM)
Date: Tue, 25 Nov 2008 16:59:56 -0500
Subject: [Numpy-discussion] More loadtxt() changes
In-Reply-To: <88e473830811251356t41218626t5dd5e62027418918@mail.gmail.com>
References: <059C95BB-80E4-4043-87CD-788AC22B9690@gmail.com> <492C3622.8020908@noaa.gov> <0EB1554B-AF4C-4023-A2A9-BE0D7EEF874F@gmail.com> <88e473830811251126v127ef295g94dd24089779180f@mail.gmail.com> <88e473830811251356t41218626t5dd5e62027418918@mail.gmail.com>
Message-ID:

OK then, I'll take care of that over the next few weeks...

On Nov 25, 2008, at 4:56 PM, John Hunter wrote:
> On Tue, Nov 25, 2008 at 2:01 PM, Pierre GM wrote:
>>
>> On Nov 25, 2008, at 2:26 PM, John Hunter wrote:
>>>
>>> Yes, I've said on a number of occasions I'd like to see these
>>> functions in numpy, since a number of them make more sense as numpy
>>> methods than as stand-alone functions.
>>
>> Great. Could we think about getting that in for 1.3.x, would you have
>> time ? Or should we wait till early Jan.?
>
> I wasn't volunteering to do it, just that I support the migration if
> someone else wants to do it.
>
> I'm fully committed with mpl already...
>
> JDH
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion

From oliphant at enthought.com Tue Nov 25 17:23:42 2008
From: oliphant at enthought.com (Travis E. Oliphant)
Date: Tue, 25 Nov 2008 16:23:42 -0600
Subject: [Numpy-discussion] More loadtxt() changes
In-Reply-To: <88e473830811251126v127ef295g94dd24089779180f@mail.gmail.com>
References: <059C95BB-80E4-4043-87CD-788AC22B9690@gmail.com> <492C3622.8020908@noaa.gov> <0EB1554B-AF4C-4023-A2A9-BE0D7EEF874F@gmail.com> <88e473830811251126v127ef295g94dd24089779180f@mail.gmail.com>
Message-ID: <492C7AEE.2010700@enthought.com>

John Hunter wrote:
> On Tue, Nov 25, 2008 at 12:16 PM, Pierre GM wrote:
>
>> A la mlab.csv2rec ? It could work with a bit more tweaking, basically
>> following John Hunter's et al. path. What happens when the column names are
>> unknown (read from the header) or wrong ?
>>
>> Actually, I'd like John to comment on that, hence the CC. More generally,
>> wouldn't it be useful to push the recarray-manipulating functions from
>> matplotlib.mlab to numpy ?
>
> Yes, I've said on a number of occasions I'd like to see these
> functions in numpy, since a number of them make more sense as numpy
> methods than as stand-alone functions.

John and I are in agreement here. What remains is for somebody to step up and do the conversions (and field the questions and the resulting discussion) for the various routines that probably ought to go into NumPy. This would be a great place to get involved if there is a lurker looking for a project.

-Travis

From oliphant at enthought.com Tue Nov 25 17:24:21 2008
From: oliphant at enthought.com (Travis E. Oliphant)
Date: Tue, 25 Nov 2008 16:24:21 -0600
Subject: [Numpy-discussion] More loadtxt() changes
In-Reply-To:
References: <059C95BB-80E4-4043-87CD-788AC22B9690@gmail.com> <492C3622.8020908@noaa.gov> <0EB1554B-AF4C-4023-A2A9-BE0D7EEF874F@gmail.com> <88e473830811251126v127ef295g94dd24089779180f@mail.gmail.com> <88e473830811251356t41218626t5dd5e62027418918@mail.gmail.com>
Message-ID: <492C7B15.6040506@enthought.com>

Pierre GM wrote:
> OK then, I'll take care of that over the next few weeks...

Thanks Pierre.

-Travis

From pgmdevlist at gmail.com Tue Nov 25 17:31:47 2008
From: pgmdevlist at gmail.com (Pierre GM)
Date: Tue, 25 Nov 2008 17:31:47 -0500
Subject: [Numpy-discussion] More loadtxt() changes
In-Reply-To: <492C7B15.6040506@enthought.com>
References: <059C95BB-80E4-4043-87CD-788AC22B9690@gmail.com> <492C3622.8020908@noaa.gov> <0EB1554B-AF4C-4023-A2A9-BE0D7EEF874F@gmail.com> <88e473830811251126v127ef295g94dd24089779180f@mail.gmail.com> <88e473830811251356t41218626t5dd5e62027418918@mail.gmail.com> <492C7B15.6040506@enthought.com>
Message-ID:

Oh, don't mention it... However, I'd be quite grateful if you could take a look at the problem of mixing np.scalars and 0d subclasses of ndarray: looks like it's a C problem, quite out of my league...

http://scipy.org/scipy/numpy/ticket/826
http://article.gmane.org/gmane.comp.python.numeric.general/26354/match=priority+rules
http://article.gmane.org/gmane.comp.python.numeric.general/25670/match=priority+rules

On Nov 25, 2008, at 5:24 PM, Travis E. Oliphant wrote:
> Pierre GM wrote:
>> OK then, I'll take care of that over the next few weeks...
>
> Thanks Pierre.
>
> -Travis
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion

From rmay31 at gmail.com Tue Nov 25 18:08:38 2008
From: rmay31 at gmail.com (Ryan May)
Date: Tue, 25 Nov 2008 17:08:38 -0600
Subject: [Numpy-discussion] More loadtxt() changes
In-Reply-To:
References: <059C95BB-80E4-4043-87CD-788AC22B9690@gmail.com> <492C3622.8020908@noaa.gov> <0EB1554B-AF4C-4023-A2A9-BE0D7EEF874F@gmail.com> <88e473830811251126v127ef295g94dd24089779180f@mail.gmail.com> <492C5C84.2050104@gmail.com>
Message-ID: <492C8576.70803@gmail.com>

Pierre GM wrote:
> Sounds like a plan. Wouldn't mind getting more feedback from fellow
> users before we get too deep, however...

Ok, I've attached, as a first cut, a diff against SVN HEAD that does (I think) what I'm looking for. It passes all of the old tests and passes my own quick test. A more rigorous test suite will follow, but I want this out the door before I need to leave for the day.

What this changeset essentially does is just add support for automatic dtypes along with supplying/reading names for flexible dtypes. It leverages StringConverter heavily, using a few tweaks so that old behavior is kept. This is by no means a final version.

Probably the biggest change from what I mentioned earlier is that instead of dtype='auto', I've used dtype=None to signal the detection code, since dtype=='auto' causes problems.

I welcome any and all suggestions here, both on the code and on the original idea of adding these capabilities to loadtxt().

Ryan

--
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: loadtxt_changes.diff
URL:

From pgmdevlist at gmail.com Tue Nov 25 19:00:30 2008
From: pgmdevlist at gmail.com (Pierre GM)
Date: Tue, 25 Nov 2008 19:00:30 -0500
Subject: [Numpy-discussion] More loadtxt() changes
In-Reply-To: <492C8576.70803@gmail.com>
References: <059C95BB-80E4-4043-87CD-788AC22B9690@gmail.com> <492C3622.8020908@noaa.gov> <0EB1554B-AF4C-4023-A2A9-BE0D7EEF874F@gmail.com> <88e473830811251126v127ef295g94dd24089779180f@mail.gmail.com> <492C5C84.2050104@gmail.com> <492C8576.70803@gmail.com>
Message-ID: <8904DEFF-3F89-4A23-88EA-CD2217254842@gmail.com>

Ryan,
Quick comments:

* I already have some unittests for StringConverter, check the file I attach.
* Your str2bool will probably mess things up in upgrade compared to the one JDH had written (the one I sent you): you don't wanna use int(bool(value)), as it'll always give you 0 or 1 when you might need a ValueError.
* Your locked version of update probably won't work either, as you force the converter to output a string (you set the status to the largest possible, that's the one that outputs strings). Why don't you set the status to the current one (make a tmp one if needed).
* I'd probably get rid of StringConverter._get_from_dtype, as it is not needed outside the __init__. You may wanna stick to the original __init__.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: test_preview.py
Type: text/x-python-script
Size: 4871 bytes
Desc: not available
URL:
-------------- next part --------------
All, another question:
What's the best way to have some kind of sandbox for code like the one Ryan is writing ? So that we can try it, modify it, without committing anything to SVN yet ?
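For reference, a sketch of the stricter behavior Pierre's second bullet above is describing -- anything that isn't recognizably boolean should raise, so that StringConverter.upgrade() can fall through to the next converter instead of silently returning 0 or 1 (a hypothetical rewrite, not the posted code):

def str2bool(value):
    value = value.upper()
    if value == 'TRUE':
        return True
    elif value == 'FALSE':
        return False
    else:
        # Raising here is what lets upgrade() catch the ValueError and
        # try the next (function, default) pair in the mapper.
        raise ValueError("invalid boolean: %r" % value)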
On Nov 25, 2008, at 6:08 PM, Ryan May wrote: > Pierre GM wrote: >> Sounds like a plan. Wouldn't mind getting more feedback from >> fellow users before we get too deep, however... > > Ok, I've attached, as a first cut, a diff against SVN HEAD that does > (I think) what I'm looking for. It passes all of the old tests and > passes my own quick test. A more rigorous test suite will follow, > but I want this out the door before I need to leave for the day. > > What this changeset essentially does is just add support for > automatic dtypes along with supplying/reading names for flexible > dtypes. It leverages StringConverter heavily, using a few tweaks so > that old behavior is kept. This is by no means a final version. > > Probably the biggest change from what I mentioned earlier is that > instead of dtype='auto', I've used dtype=None to signal the > detection code, since dtype=='auto' causes problems. > > I welcome any and all suggestions here, both on the code and on the > original idea of adding these capabilities to loadtxt(). > > Ryan > > -- > Ryan May > Graduate Research Assistant > School of Meteorology > University of Oklahoma > Index: lib/io.py > =================================================================== > --- lib/io.py (revision 6099) > +++ lib/io.py (working copy) > @@ -233,29 +233,138 @@ > for name in todel: > os.remove(name) > > -# Adapted from matplotlib > +def _string_like(obj): > + try: obj + '' > + except (TypeError, ValueError): return False > + return True > > -def _getconv(dtype): > - typ = dtype.type > - if issubclass(typ, np.bool_): > - return lambda x: bool(int(x)) > - if issubclass(typ, np.integer): > - return lambda x: int(float(x)) > - elif issubclass(typ, np.floating): > - return float > - elif issubclass(typ, np.complex): > - return complex > +def str2bool(value): > + """ > + Tries to transform a string supposed to represent a boolean to > a boolean. > + > + Raises > + ------ > + ValueError > + If the string is not 'True' or 'False' (case independent) > + """ > + value = value.upper() > + if value == 'TRUE': > + return True > + elif value == 'FALSE': > + return False > else: > - return str > + return int(bool(value)) > > +class StringConverter(object): > + """ > + Factory class for function transforming a string into another > object (int, > + float). > > -def _string_like(obj): > - try: obj + '' > - except (TypeError, ValueError): return 0 > - return 1 > + After initialization, an instance can be called to transform a > string > + into another object. If the string is recognized as > representing a missing > + value, a default value is returned. > > + Parameters > + ---------- > + dtype : dtype, optional > + Input data type, used to define a basic function and a > default value > + for missing data. For example, when `dtype` is float, > the :attr:`func` > + attribute is set to ``float`` and the default value to > `np.nan`. > + missing_values : sequence, optional > + Sequence of strings indicating a missing value. > + > + Attributes > + ---------- > + func : function > + Function used for the conversion > + default : var > + Default value to return when the input corresponds to a > missing value. > + mapper : sequence of tuples > + Sequence of tuples (function, default value) to evaluate in > order. 
> + > + """ > + from numpy.core import nan # To avoid circular import > + mapper = [(str2bool, None), > + (lambda x: int(float(x)), -1), > + (float, nan), > + (complex, nan+0j), > + (str, '???')] > + > + def __init__(self, dtype=None, missing_values=None): > + if dtype is None: > + self.func = str2bool > + self.default = None > + self._status = 0 > + else: > + dtype = np.dtype(dtype).type > + self.func,self.default,self._status = > self._get_from_dtype(dtype) > + > + # Store the list of strings corresponding to missing values. > + if missing_values is None: > + self.missing_values = [] > + else: > + self.missing_values = set(list(missing_values) + ['']) > + > + def __call__(self, value): > + if value in self.missing_values: > + return self.default > + return self.func(value) > + > + def upgrade(self, value): > + """ > + Tries to find the best converter for `value`, by testing > different > + converters in order. > + The order in which the converters are tested is read from the > + :attr:`_status` attribute of the instance. > + """ > + try: > + self.__call__(value) > + except ValueError: > + _statusmax = len(self.mapper) > + if self._status == _statusmax: > + raise ValueError("Could not find a valid conversion > function") > + elif self._status < _statusmax - 1: > + self._status += 1 > + (self.func, self.default) = self.mapper[self._status] > + self.upgrade(value) > + > + def _get_from_dtype(self, dtype): > + """ > + Sets the :attr:`func` and :attr:`default` attributes for a > given dtype. > + """ > + dtype = np.dtype(dtype).type > + if issubclass(dtype, np.bool_): > + return (str2bool, 0, 0) > + elif issubclass(dtype, np.integer): > + return (lambda x: int(float(x)), -1, 1) > + elif issubclass(dtype, np.floating): > + return (float, np.nan, 2) > + elif issubclass(dtype, np.complex): > + return (complex, np.nan + 0j, 3) > + else: > + return (str, '???', -1) > + > + def update(self, func, default=None, locked=False): > + """ > + Sets the :attr:`func` and :attr:`default` attributes directly. > + > + Parameters > + ---------- > + func : function > + Conversion function. > + default : var, optional > + Default value to return when a missing value is encountered. > + locked : bool, optional > + Whether this should lock in the function so that no > upgrading is > + possible. > + """ > + self.func = func > + self.default = default > + if locked: > + self._status = len(self.mapper) > + > def loadtxt(fname, dtype=float, comments='#', delimiter=None, > converters=None, > - skiprows=0, usecols=None, unpack=False): > + skiprows=0, usecols=None, unpack=False, names=None): > """ > Load data from a text file. 
> @@ -333,11 +442,10 @@
>              fh = gzip.open(fname)
>          else:
>              fh = file(fname)
> -    elif hasattr(fname, 'seek'):
> +    elif hasattr(fname, 'readline'):
>          fh = fname
>      else:
>          raise ValueError('fname must be a string or file handle')
> -    X = []
>
>      def flatten_dtype(dt):
>          """Unpack a structured data-type."""
> @@ -359,10 +467,6 @@
>          else:
>              return []
>
> -    # Make sure we're dealing with a proper dtype
> -    dtype = np.dtype(dtype)
> -    defconv = _getconv(dtype)
> -
>      # Skip the first `skiprows` lines
>      for i in xrange(skiprows):
>          fh.readline()
> @@ -377,37 +481,76 @@
>      first_vals = split_line(first_line)
>      N = len(usecols or first_vals)
>
> -    dtype_types = flatten_dtype(dtype)
> -    if len(dtype_types) > 1:
> -        # We're dealing with a structured array, each field of
> -        # the dtype matches a column
> -        converters = [_getconv(dt) for dt in dtype_types]
> +    # If names is True, read the field names from the first line
> +    if names == True:
> +        names = first_vals
> +        first_line = ''
> +
> +    # Make sure we're dealing with a proper dtype
> +    if dtype is None:
> +        converters = [StringConverter() for i in xrange(N)]
>      else:
> -        # All fields have the same dtype
> -        converters = [defconv for i in xrange(N)]
> +        dtype = np.dtype(dtype)
> +        dtype_types = flatten_dtype(dtype)
> +        if len(dtype_types) > 1:
> +            # We're dealing with a structured array, each field of
> +            # the dtype matches a column
> +            converters = [StringConverter(dt) for dt in dtype_types]
> +            names = list(dtype.names)
> +        else:
> +            # All fields have the same dtype
> +            converters = [StringConverter(dtype) for i in xrange(N)]
>
> +    # If usecols contains a list of names, convert them to column indices
> +    if usecols and _string_like(usecols[0]):
> +        usecols = [names.index(_) for _ in usecols]
> +
>      # By preference, use the converters specified by the user
>      for i, conv in (user_converters or {}).iteritems():
> +        # If the converter is specified by column name, convert it to an index
> +        if _string_like(i):
> +            i = names.index(i)
>          if usecols:
>              try:
>                  i = usecols.index(i)
>              except ValueError:
>                  # Unused converter specified
>                  continue
> -        converters[i] = conv
> +        converters[i].update(conv, None)
>
>      # Parse each line, including the first
> +    rows = []
>      for i, line in enumerate(itertools.chain([first_line], fh)):
>          vals = split_line(line)
>          if len(vals) == 0:
>              continue
>
>          if usecols:
> -            vals = [vals[i] for i in usecols]
> +            vals = [vals[_] for _ in usecols]
>
> -        # Convert each value according to its column and store
> -        X.append(tuple([conv(val) for (conv, val) in zip(converters, vals)]))
> +        if dtype is None:
> +            for converter, item in zip(converters, vals):
> +                if len(item.strip()):
> +                    converter.upgrade(item)
>
> +        # Store the values
> +        rows.append(tuple(vals))
> +
> +    # Convert each value according to its column and store
> +    for i, vals in enumerate(rows):
> +        rows[i] = tuple([conv(val) for (conv, val) in zip(converters, vals)])
> +
> +    # Construct final dtype if necessary
> +    if dtype is None:
> +        dtype_types = [np.array(val).dtype for val in rows[0]]
> +        uniform_dtype = all([dtype_types[0] == dt for dt in dtype_types])
> +        if uniform_dtype and not names:
> +            dtype = dtype_types[0]
> +        else:
> +            if not names:
> +                names = ['column_%d' % i for i in xrange(N)]
> +            dtype = zip(names, dtype_types)
> +
>      if len(dtype_types) > 1:
>          # We're dealing with a structured array, with a dtype such as
>          # [('x', int), ('y', [('s', int), ('t', float)])]
> @@ -416,16 +559,16 @@
>          # [('x', int), ('s', int), ('t', float)]
>          #
>          # Then, view the array
using the specified dtype. > - X = np.array(X, dtype=np.dtype([('', t) for t in > dtype_types])) > - X = X.view(dtype) > + rows = np.array(rows, dtype=np.dtype([('', t) for t in > dtype_types])) > + rows = rows.view(dtype) > else: > - X = np.array(X, dtype) > + rows = np.array(rows, dtype) > > - X = np.squeeze(X) > + rows = np.squeeze(rows) > if unpack: > - return X.T > + return rows.T > else: > - return X > + return rows > > > def savetxt(fname, X, fmt='%.18e',delimiter=' '): > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion From charlesr.harris at gmail.com Tue Nov 25 21:14:55 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 25 Nov 2008 19:14:55 -0700 Subject: [Numpy-discussion] More loadtxt() changes In-Reply-To: <8904DEFF-3F89-4A23-88EA-CD2217254842@gmail.com> References: <059C95BB-80E4-4043-87CD-788AC22B9690@gmail.com> <492C3622.8020908@noaa.gov> <0EB1554B-AF4C-4023-A2A9-BE0D7EEF874F@gmail.com> <88e473830811251126v127ef295g94dd24089779180f@mail.gmail.com> <492C5C84.2050104@gmail.com> <492C8576.70803@gmail.com> <8904DEFF-3F89-4A23-88EA-CD2217254842@gmail.com> Message-ID: On Tue, Nov 25, 2008 at 5:00 PM, Pierre GM wrote: > All, another question: > What's the best way to have some kind of sandbox for code like the one Ryan > is writing ? So that we can try it, modify it, without commiting anything to > SVN yet ? > Probably make a branch and do commits there. If you don't want to hassle with a merge, just copy the file over to the trunk when you are done and commit it from there, then remove the branch. Instructions on making branches are at http://projects.scipy.org/scipy/numpy/wiki/MakingBranches . Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From spacey-numpy-discussion at lenin.net Tue Nov 25 18:54:08 2008 From: spacey-numpy-discussion at lenin.net (Peter Norton) Date: Tue, 25 Nov 2008 18:54:08 -0500 Subject: [Numpy-discussion] Problems building numpy on solaris 10 x86 Message-ID: Back in the beginning of the summer, I jumped through a lot of hoops to build numpy+scipy on solaris, 64-bit with gcc. I received a lot of help from David C., and ended up, by some very ugly hacking, building an acceptable numpy+scipy+matplotlib trio for use at my company. However, I'm back at it again trying to build the same tools in both a 32-bit abi and a 64-bit ABI. I'm starting with the 32-bit build, because I suspect it'd be simpler (less trouble adding things like -m64 and other such flags). However, I've run into a very basic problem right at the get-go. This time instead of bothering David at the beginning of my build, I was hoping that other people may have experience to contribute to resolving my issues. Here is my build environment: 1) gcc-4.3.1 2) Solaris 10 update 3 3) sunperf libraries (for blas+lapack support) I can provide more detail since that's not a very specific list. Anyway, when I try building numpy-1.2.1 after setting up my site.cfg and build-related environment this is what I get: Setting the site.cfg Running from numpy source directory. F2PY Version 2_5972 non-existing path in 'numpy/core': 'code_generators/array_api_order.txt' [continues...] scons: Reading SConscript files ... 
scons: warning: Ignoring missing SConscript 'build/scons/numpy/core/SConscript'
File "/usr/local/python-2.5.1/lib/python2.5/site-packages/numscons-0.9.4-py2.5.egg/numscons/core/numpyenv.py", line 108, in DistutilsSConscript
scons: done reading SConscript files.
scons: Building targets ...
scons: *** [Errno 2] No such file or directory: 'numpy/core/../../build/scons/numpy/core/sconsign.dblite'
scons: building terminated because of errors.
error: Error while executing scons command. See above for more information.
If you think it is a problem in numscons, you can also try executing the scons
command with --log-level option for more detailed output of what numscons is
doing, for example --log-level=0; the lowest the level is, the more detailed
the output it.
[etc.]

then similar errors repeat themselves over and over, including ignoring missing SConscript and no sconsign.dblite file, until the build bombs out.

I've got numscons installed from pypi:
>>> import numscons.version
>>> numscons.version.VERSION
'0.9.4'

Can anyone get me on the right track here?

Thanks,
-Peter
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From charlesr.harris at gmail.com Tue Nov 25 21:26:20 2008
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Tue, 25 Nov 2008 19:26:20 -0700
Subject: [Numpy-discussion] Problems building numpy on solaris 10 x86
In-Reply-To:
References:
Message-ID:

On Tue, Nov 25, 2008 at 4:54 PM, Peter Norton < spacey-numpy-discussion at lenin.net> wrote:

> Back in the beginning of the summer, I jumped through a lot of hoops to
> build numpy+scipy on solaris, 64-bit with gcc. I received a lot of help from
> David C., and ended up, by some very ugly hacking, building an acceptable
> numpy+scipy+matplotlib trio for use at my company.
>
> However, I'm back at it again trying to build the same tools in both a
> 32-bit abi and a 64-bit ABI. I'm starting with the 32-bit build, because I
> suspect it'd be simpler (less trouble adding things like -m64 and other such
> flags). However, I've run into a very basic problem right at the get-go.
> This time instead of bothering David at the beginning of my build, I was
> hoping that other people may have experience to contribute to resolving my
> issues.
>
> Here is my build environment:
>
> 1) gcc-4.3.1
> 2) Solaris 10 update 3
> 3) sunperf libraries (for blas+lapack support)
>
> I can provide more detail since that's not a very specific list.
>
> Anyway, when I try building numpy-1.2.1 after setting up my site.cfg and
> build-related environment this is what I get:
>
> Setting the site.cfg
> Running from numpy source directory.
> F2PY Version 2_5972
> non-existing path in 'numpy/core': 'code_generators/array_api_order.txt'
> [continues...]
> scons: Reading SConscript files ...
>
> scons: warning: Ignoring missing SConscript
> 'build/scons/numpy/core/SConscript'
> File
> "/usr/local/python-2.5.1/lib/python2.5/site-packages/numscons-0.9.4-py2.5.egg/numscons/core/numpyenv.py",
> line 108, in DistutilsSConscript
> scons: done reading SConscript files.
> scons: Building targets ...
> scons: *** [Errno 2] No such file or directory:
> 'numpy/core/../../build/scons/numpy/core/sconsign.dblite'
> scons: building terminated because of errors.
> error: Error while executing scons command. See above for more information.
> If you think it is a problem in numscons, you can also try executing the > scons > command with --log-level option for more detailed output of what numscons > is > doing, for example --log-level=0; the lowest the level is, the more > detailed > the output it. > [etc.] > > then similar errors repeat themselves over and over including ignoreing > missing SConscript, and no sconsign.dblite file, until the build bombs out. > > I've got numscons installed from pypi: > >>> import numscons.version > >>> numscons.version.VERSION > '0.9.4' > > Can anyone get me on the right track here? > What happens if you go the usual python setup.py {build,install} route? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From rmay31 at gmail.com Tue Nov 25 22:02:24 2008 From: rmay31 at gmail.com (Ryan May) Date: Tue, 25 Nov 2008 21:02:24 -0600 Subject: [Numpy-discussion] More loadtxt() changes In-Reply-To: <8904DEFF-3F89-4A23-88EA-CD2217254842@gmail.com> References: <059C95BB-80E4-4043-87CD-788AC22B9690@gmail.com> <492C3622.8020908@noaa.gov> <0EB1554B-AF4C-4023-A2A9-BE0D7EEF874F@gmail.com> <88e473830811251126v127ef295g94dd24089779180f@mail.gmail.com> <492C5C84.2050104@gmail.com> <492C8576.70803@gmail.com> <8904DEFF-3F89-4A23-88EA-CD2217254842@gmail.com> Message-ID: <492CBC40.3010501@gmail.com> Pierre GM wrote: > Ryan, > Quick comments: > > * I already have some unittests for StringConverter, check the file I > attach. Ok, great. > * Your str2bool will probably mess things up in upgrade compared to the > one JDH had written (the one I send you): you don't wanna use > int(bool(value)), as it'll always give you 0 or 1 when you might need a > ValueError Ok, I wasn't sure. I was trying to merge what the old code used with the new str2bool you supplied. That's probably not all that necessary. > * Your locked version of update won't probably work either, as you force > the converter to output a string (you set the status to largest > possible, that's the one that outputs strings). Why don't you set the > status to the current one (make a tmp one if needed). Looking at the code, it looks like mapper is only used in the upgrade() method. My goal by setting status to the largest possible is to lock the converter to the supplied function. That way for the user supplied converters, the StringConverter doesn't try to upgrade away from it. My thinking was that if the user supplied converter function fails, the user should know. (Though I got this wrong the first time.) > * I'd probably get rid of StringConverter._get_from_dtype, as it is not > needed outside the __init__. You may wanna stick to the original __init__. Done. 
Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma From pgmdevlist at gmail.com Tue Nov 25 22:17:13 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 25 Nov 2008 22:17:13 -0500 Subject: [Numpy-discussion] More loadtxt() changes In-Reply-To: <492CBC40.3010501@gmail.com> References: <059C95BB-80E4-4043-87CD-788AC22B9690@gmail.com> <492C3622.8020908@noaa.gov> <0EB1554B-AF4C-4023-A2A9-BE0D7EEF874F@gmail.com> <88e473830811251126v127ef295g94dd24089779180f@mail.gmail.com> <492C5C84.2050104@gmail.com> <492C8576.70803@gmail.com> <8904DEFF-3F89-4A23-88EA-CD2217254842@gmail.com> <492CBC40.3010501@gmail.com> Message-ID: On Nov 25, 2008, at 10:02 PM, Ryan May wrote: > Pierre GM wrote: >> >> * Your locked version of update won't probably work either, as you >> force >> the converter to output a string (you set the status to largest >> possible, that's the one that outputs strings). Why don't you set the >> status to the current one (make a tmp one if needed). > > Looking at the code, it looks like mapper is only used in the > upgrade() > method. My goal by setting status to the largest possible is to lock > the > converter to the supplied function. That way for the user supplied > converters, the StringConverter doesn't try to upgrade away from > it. My > thinking was that if the user supplied converter function fails, the > user should know. (Though I got this wrong the first time.) > Then, define a _locked attribute in StringConverter, and prevent upgrade to run if self._locked is True. From rmay31 at gmail.com Tue Nov 25 22:23:54 2008 From: rmay31 at gmail.com (Ryan May) Date: Tue, 25 Nov 2008 21:23:54 -0600 Subject: [Numpy-discussion] More loadtxt() changes In-Reply-To: References: <059C95BB-80E4-4043-87CD-788AC22B9690@gmail.com> <492C3622.8020908@noaa.gov> <0EB1554B-AF4C-4023-A2A9-BE0D7EEF874F@gmail.com> <88e473830811251126v127ef295g94dd24089779180f@mail.gmail.com> <492C5C84.2050104@gmail.com> <492C8576.70803@gmail.com> <8904DEFF-3F89-4A23-88EA-CD2217254842@gmail.com> <492CBC40.3010501@gmail.com> Message-ID: <492CC14A.8090208@gmail.com> Pierre GM wrote: > On Nov 25, 2008, at 10:02 PM, Ryan May wrote: >> Pierre GM wrote: >>> * Your locked version of update won't probably work either, as you >>> force >>> the converter to output a string (you set the status to largest >>> possible, that's the one that outputs strings). Why don't you set the >>> status to the current one (make a tmp one if needed). >> Looking at the code, it looks like mapper is only used in the >> upgrade() >> method. My goal by setting status to the largest possible is to lock >> the >> converter to the supplied function. That way for the user supplied >> converters, the StringConverter doesn't try to upgrade away from >> it. My >> thinking was that if the user supplied converter function fails, the >> user should know. (Though I got this wrong the first time.) >> > > Then, define a _locked attribute in StringConverter, and prevent > upgrade to run if self._locked is True. Sure if you're into logic and sound design. I was going more for hackish and obtuse. (No seriously, I don't know why I didn't think of that.) 
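A stripped-down sketch of the _locked guard Pierre suggests; only the relevant pieces are shown, and the rest of the class is assumed to be as in the posted patch:

class StringConverter(object):
    def __init__(self):
        self.func = str
        self.default = None
        self._locked = False

    def update(self, func, default=None, locked=False):
        self.func = func
        self.default = default
        self._locked = locked          # remember the caller's choice

    def upgrade(self, value):
        if self._locked:
            # A locked (user-supplied) converter must fail loudly rather
            # than silently fall back to another conversion function.
            raise ValueError("could not convert %r with locked converter"
                             % value)
        # (the normal mapper-based upgrade logic would follow here)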
Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma From rmay31 at gmail.com Tue Nov 25 22:57:08 2008 From: rmay31 at gmail.com (Ryan May) Date: Tue, 25 Nov 2008 21:57:08 -0600 Subject: [Numpy-discussion] Minimum dtype Message-ID: <492CC914.2040009@gmail.com> Hi, I'm running on a 64-bit machine, and see the following: >numpy.array(64.6).dtype dtype('float64') >numpy.array(64).dtype dtype('int64') Is there any function/setting to make these default to 32-bit types except where necessary? I don't mean by specifying dtype=numpy.float32 or dtype=numpy.int32. Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma From robert.kern at gmail.com Tue Nov 25 22:58:34 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 25 Nov 2008 21:58:34 -0600 Subject: [Numpy-discussion] Minimum dtype In-Reply-To: <492CC914.2040009@gmail.com> References: <492CC914.2040009@gmail.com> Message-ID: <3d375d730811251958q11ae261fx62684e73b23add6b@mail.gmail.com> On Tue, Nov 25, 2008 at 21:57, Ryan May wrote: > Hi, > > I'm running on a 64-bit machine, and see the following: > > >numpy.array(64.6).dtype > dtype('float64') > > >numpy.array(64).dtype > dtype('int64') > > Is there any function/setting to make these default to 32-bit types > except where necessary? Nope. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Tue Nov 25 23:28:35 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 26 Nov 2008 13:28:35 +0900 Subject: [Numpy-discussion] Problems building numpy on solaris 10 x86 In-Reply-To: References: Message-ID: <492CD073.30505@ar.media.kyoto-u.ac.jp> Charles R Harris wrote: > > > What happens if you go the usual python setup.py {build,install} route? Won't go far since it does not handle sunperf. David From rmay31 at gmail.com Wed Nov 26 00:23:22 2008 From: rmay31 at gmail.com (Ryan May) Date: Tue, 25 Nov 2008 23:23:22 -0600 Subject: [Numpy-discussion] More loadtxt() changes In-Reply-To: References: <059C95BB-80E4-4043-87CD-788AC22B9690@gmail.com> <492C3622.8020908@noaa.gov> <0EB1554B-AF4C-4023-A2A9-BE0D7EEF874F@gmail.com> <88e473830811251126v127ef295g94dd24089779180f@mail.gmail.com> <492C5C84.2050104@gmail.com> <492C8576.70803@gmail.com> <8904DEFF-3F89-4A23-88EA-CD2217254842@gmail.com> <492CBC40.3010501@gmail.com> Message-ID: <492CDD4A.20304@gmail.com> Pierre GM wrote: > On Nov 25, 2008, at 10:02 PM, Ryan May wrote: >> Pierre GM wrote: >>> * Your locked version of update won't probably work either, as you >>> force >>> the converter to output a string (you set the status to largest >>> possible, that's the one that outputs strings). Why don't you set the >>> status to the current one (make a tmp one if needed). >> Looking at the code, it looks like mapper is only used in the >> upgrade() >> method. My goal by setting status to the largest possible is to lock >> the >> converter to the supplied function. That way for the user supplied >> converters, the StringConverter doesn't try to upgrade away from >> it. My >> thinking was that if the user supplied converter function fails, the >> user should know. (Though I got this wrong the first time.) Updated patch attached. This includes: * Updated docstring * New tests * Fixes for previous issues * Fixes to make new tests actually work I appreciate any and all feedback. 
Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: loadtxt_changes.diff URL: From millman at berkeley.edu Wed Nov 26 01:04:49 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 25 Nov 2008 22:04:49 -0800 Subject: [Numpy-discussion] ANN: SciPy 0.7.0b1 (beta release) Message-ID: I'm pleased to announce the first beta release of SciPy 0.7.0. SciPy is a package of tools for science and engineering for Python. It includes modules for statistics, optimization, integration, linear algebra, Fourier transforms, signal and image processing, ODE solvers, and more. This beta release comes almost one year after the 0.6.0 release and contains many new features, numerous bug-fixes, improved test coverage, and better documentation. Please note that SciPy 0.7.0b1 requires Python 2.4 or greater and NumPy 1.2.0 or greater. For information, please see the release notes: http://sourceforge.net/project/shownotes.php?group_id=27747&release_id=642769 You can download the release from here: http://sourceforge.net/project/showfiles.php?group_id=27747&package_id=19531&release_id=642769 Thank you to everybody who contributed to this release. Enjoy, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From nadavh at visionsense.com Wed Nov 26 02:17:41 2008 From: nadavh at visionsense.com (Nadav Horesh) Date: Wed, 26 Nov 2008 09:17:41 +0200 Subject: [Numpy-discussion] 2D phase unwrapping Message-ID: <710F2847B0018641891D9A216027636029C346@ex3.envision.co.il> Is there a 2D phase unwrapping for python? I read a presentation by GERI (http://www.ljmu.ac.uk/GERI) that their code is implemented in scipy, but I could not find it. Nadav. From cournape at gmail.com Wed Nov 26 02:31:08 2008 From: cournape at gmail.com (David Cournapeau) Date: Wed, 26 Nov 2008 16:31:08 +0900 Subject: [Numpy-discussion] Problems building numpy on solaris 10 x86 In-Reply-To: References: Message-ID: <5b8d13220811252331r7b0c9cdat7887e769bb75a014@mail.gmail.com> On Wed, Nov 26, 2008 at 8:54 AM, Peter Norton wrote: > > scons: warning: Ignoring missing SConscript > 'build/scons/numpy/core/SConscript' > File > "/usr/local/python-2.5.1/lib/python2.5/site-packages/numscons-0.9.4-py2.5.egg/numscons/core/numpyenv.py", > line 108, in DistutilsSConscript > scons: done reading SConscript files. > scons: Building targets ... > scons: *** [Errno 2] No such file or directory: It could be considered a bug because the error message is bad: the problem really is the missing scons script (it is not so easy to handle because scons is in a different process than distutils, so it is difficult to get useful information back from the scons process). Which version of numpy are you using ? David From silva at lma.cnrs-mrs.fr Wed Nov 26 05:19:20 2008 From: silva at lma.cnrs-mrs.fr (Fabrice Silva) Date: Wed, 26 Nov 2008 11:19:20 +0100 Subject: [Numpy-discussion] 2D phase unwrapping In-Reply-To: <710F2847B0018641891D9A216027636029C346@ex3.envision.co.il> References: <710F2847B0018641891D9A216027636029C346@ex3.envision.co.il> Message-ID: <1227694760.2829.7.camel@localhost> Le mercredi 26 novembre 2008 ? 09:17 +0200, Nadav Horesh a ?crit : > Is there a 2D phase unwrapping for python? > I read a presentation by GERI (http://www.ljmu.ac.uk/GERI) that their code is implemented in scipy, but I could not find it. 
I had the same problem a couple of days ago! Playing with the unwrap function and the axis argument, I still did not managed to get rid of these *** lines! the kind of results I had are available at : http://fsilva.perso.ec-marseille.fr/visible/tmp/ - tmp00.png : no unwrapping at all - tmp10.png : unwrapping along the vertical axis - tmp11.png : unwrapping along the vertical axis and then unwrapping the first line and applying the 2pi gaps to all lines... - tmp20.png : unwrapping along the horizontal axis -- Fabrice Silva From mmetz at astro.uni-bonn.de Wed Nov 26 09:02:32 2008 From: mmetz at astro.uni-bonn.de (Manuel Metz) Date: Wed, 26 Nov 2008 15:02:32 +0100 Subject: [Numpy-discussion] More loadtxt() changes In-Reply-To: References: Message-ID: <492D56F8.5010807@astro.uni-bonn.de> Ryan May wrote: > Hi, > > I have a couple more changes to loadtxt() that I'd like to code up in time > for 1.3, but I thought I should run them by the list before doing too much > work. These are already implemented in some fashion in > matplotlib.mlab.csv2rec(), but the code bases are different enough, that > pretty much only the idea can be lifted. All of these changes would be done > in a manner that is backwards compatible with the current API. > > 1) Support for setting the names of fields in the returned structured array > without using dtype. This can be a passed in list of names or reading the > names of fields from the first line of the file. Many files have a header > line that gives a name for each column. Adding this would obviously make > loadtxt much more general and allow for more generic code, IMO. My current > thinking is to add a *name* keyword parameter that defaults to None, for no > support for reading names. Setting it to True would tell loadtxt() to read > the names from the first line (after skiprows). The other option would be > to set names to a list of strings. > > 2) Support for automatic dtype inference. Instead of assuming all values > are floats, this would try a list of options until one worked. For strings, > this would keep track of the longest string within a given field before > setting the dtype. This would allow reading of files containing a mixture > of types much more easily, without having to go to the trouble of > constructing a full dtype by hand. This would work alongside any custom > converters one passes in. My current thinking of API would just be to add > the option of passing the string 'auto' as the dtype parameter. > > 3) Better support for missing values. The docstring mentions a way of > handling missing values by passing in a converter. The problem with this is > that you have to pass in a converter for *every column* that will contain > missing values. If you have a text file with 50 columns, writing this > dictionary of converters seems like ugly and needless boilerplate. I'm > unsure of how best to pass in both what values indicate missing values and > what values to fill in their place. I'd love suggestions Hi Ryan, this would be a great feature to have !!! One question: I have a datafile in ASCII format that uses a fixed width for each column. If no data if present, the space is left empty (see second row). What is the default behavior of the StringConverter class in this case? Does it ignore the empty entry by default? If so, what is the value in the array in this case? Is it nan? 
Example file: 1| 123.4| -123.4| 00.0 2| | 234.7| 12.2 Manuel > Here's an example of my use case (without 50 columns): > > ID,First Name,Last Name,Homework1,Homework2,Quiz1,Homework3,Final > 1234,Joe,Smith,85,90,,76, > 5678,Jane,Doe,65,99,,78, > 9123,Joe,Plumber,45,90,,92, > > Currently reading in this code requires a bit of boilerplace (declaring > dtypes, converters). While it's nothing I can't write, it still would be > easier to write it once within loadtxt and have it for everyone. > > Any support for *any* of these ideas? Any suggestions on how the user > should pass in the information? > > Thanks, > > Ryan > > > > ------------------------------------------------------------------------ > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion From jdh2358 at gmail.com Wed Nov 26 10:52:59 2008 From: jdh2358 at gmail.com (John Hunter) Date: Wed, 26 Nov 2008 09:52:59 -0600 Subject: [Numpy-discussion] More loadtxt() changes In-Reply-To: <492CDD4A.20304@gmail.com> References: <88e473830811251126v127ef295g94dd24089779180f@mail.gmail.com> <492C5C84.2050104@gmail.com> <492C8576.70803@gmail.com> <8904DEFF-3F89-4A23-88EA-CD2217254842@gmail.com> <492CBC40.3010501@gmail.com> <492CDD4A.20304@gmail.com> Message-ID: <88e473830811260752o21ef08acud49548849b7a50ee@mail.gmail.com> On Tue, Nov 25, 2008 at 11:23 PM, Ryan May wrote: > Updated patch attached. This includes: > * Updated docstring > * New tests > * Fixes for previous issues > * Fixes to make new tests actually work > > I appreciate any and all feedback. I'm having trouble applying your patch, so I haven't tested yet, but do you (and do you want to) handle a case like this:: from StringIO import StringIO import matplotlib.mlab as mlab f1 = StringIO("""\ name age weight John 23 145. Harry 43 180.""") for line in f1: print line.split(' ') Ie, space delimited but using an irregular number of spaces? One place this comes up a lot is when the output files are actually fixed-width using spaces to line up the columns. One could count the columns to figure out the fixed widths and work with that, but it is much easier to simply assume space delimiting and handle the irregular number of spaces assuming one or more spaces is the delimiter. In csv2rec, we write a custom file object to handle this case. Apologies if you are already handling this and I missed it... JDH From spacey-numpy-discussion at lenin.net Wed Nov 26 11:16:39 2008 From: spacey-numpy-discussion at lenin.net (Peter Norton) Date: Wed, 26 Nov 2008 11:16:39 -0500 Subject: [Numpy-discussion] Problems building numpy on solaris 10 x86 In-Reply-To: <492CD073.30505@ar.media.kyoto-u.ac.jp> References: <492CD073.30505@ar.media.kyoto-u.ac.jp> Message-ID: On Tue, Nov 25, 2008 at 11:28 PM, David Cournapeau < david at ar.media.kyoto-u.ac.jp> wrote: > Charles R Harris wrote: > > > > > > What happens if you go the usual python setup.py {build,install} route? > > Won't go far since it does not handle sunperf. > > David Even though the regular build process appears to complete, it seems to be doing the wrong thing. 
It seems, for instance, that lapack_lite.so is being built as an executable:

nortonp at is6 11:14 ~ $ gnu file /usr/local/python-2.5.1/lib/python2.5/site-packages/numpy/linalg/lapack_lite.so
/usr/local/python-2.5.1/lib/python2.5/site-packages/numpy/linalg/lapack_lite.so: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), not stripped

???

-Peter

From oliphant at enthought.com  Wed Nov 26 13:05:07 2008
From: oliphant at enthought.com (Travis E. Oliphant)
Date: Wed, 26 Nov 2008 12:05:07 -0600
Subject: [Numpy-discussion] ANNOUNCE: EPD with Py2.5 version 4.0.30002 RC2 available for testing
Message-ID: <492D8FD3.8050601@enthought.com>

Hello,

We've recently posted the beta1 build of EPD (the Enthought Python
Distribution) with Python 2.5 version 4.1.30001 to the EPD website. You
may download the beta from here:

http://www.enthought.com/products/epdearlyaccess.php

You can check out the release notes here:

https://svn.enthought.com/epd/wiki/Python2.5.2/4.1.300/Beta1

Please help us test it out and provide feedback on the EPD Trac
instance: https://svn.enthought.com/epd or via e-mail to
epd-support at enthought.com. If everything goes well, we are planning a
final release for December.

About EPD
---------
The Enthought Python Distribution (EPD) is a "kitchen-sink-included"
distribution of the Python® Programming Language, including over 60
additional tools and libraries. The EPD bundle includes NumPy, SciPy,
IPython, 2D and 3D visualization, database adapters, GUI building
libraries, and a lot of other tools right out of the box.

http://www.enthought.com/products/epd.php

It is currently available as a single-click installer for Windows XP
(x86), Mac OS X (a universal binary for OS X 10.4 and above), and
RedHat 3 and 4 (x86 and amd64).

EPD is free for academic use. An annual subscription and installation
support are available for individual commercial use. Enterprise
subscriptions with support for particular deployment environments are
also available for commercial purchase.

Enthought Build Team

From gael.varoquaux at normalesup.org  Wed Nov 26 14:32:46 2008
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Wed, 26 Nov 2008 20:32:46 +0100
Subject: [Numpy-discussion] ANNOUNCE: EPD with Py2.5 version 4.0.30002 RC2 available for testing
In-Reply-To: <492D8FD3.8050601@enthought.com>
References: <492D8FD3.8050601@enthought.com>
Message-ID: <20081126193246.GA21739@phare.normalesup.org>

On Wed, Nov 26, 2008 at 12:05:07PM -0600, Travis E. Oliphant wrote:
> We've recently posted the beta1 build of EPD (the Enthought Python
> Distribution) with Python 2.5 version 4.1.30001 to the EPD website. You
> may download the beta from here:

> http://www.enthought.com/products/epdearlyaccess.php

Congratulations for a quicker pace of releases.

Gaël

From michael.abshoff at googlemail.com  Wed Nov 26 17:12:00 2008
From: michael.abshoff at googlemail.com (Michael Abshoff)
Date: Wed, 26 Nov 2008 14:12:00 -0800
Subject: [Numpy-discussion] ANNOUNCE: EPD with Py2.5 version 4.0.30002 RC2 available for testing
In-Reply-To: <492D8FD3.8050601@enthought.com>
References: <492D8FD3.8050601@enthought.com>
Message-ID: <492DC9B0.1030300@gmail.com>

Travis E. Oliphant wrote:
> Hello,

Hi Travis,

> It is currently available as a single-click installer for Windows XP
> (x86), Mac OS X (a universal binary for OS X 10.4 and above), and
> RedHat 3 and 4 (x86 and amd64).

I am sure you mean RHEL 3 and 4?
This "Redhat 3 and 4" always strikes me as vague :) > > Enthought Build Team Cheers, Michael > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > From robert.kern at gmail.com Wed Nov 26 17:22:10 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 26 Nov 2008 16:22:10 -0600 Subject: [Numpy-discussion] 2D phase unwrapping In-Reply-To: <1227694760.2829.7.camel@localhost> References: <710F2847B0018641891D9A216027636029C346@ex3.envision.co.il> <1227694760.2829.7.camel@localhost> Message-ID: <3d375d730811261422p7d990acj98fce537611af199@mail.gmail.com> On Wed, Nov 26, 2008 at 04:19, Fabrice Silva wrote: > Le mercredi 26 novembre 2008 ? 09:17 +0200, Nadav Horesh a ?crit : >> Is there a 2D phase unwrapping for python? >> I read a presentation by GERI (http://www.ljmu.ac.uk/GERI) that their code is implemented in scipy, but I could not find it. > > I had the same problem a couple of days ago! Playing with the unwrap > function and the axis argument, I still did not managed to get rid of > these *** lines! 2D phase unwrapping is a very tricky problem, particularly if you have noise. I don't expect that you will have much success just applying the 1D unwrap in various ways. The algorithms are fairly sophisticated. Links to the GERI C++ code appear to be here: http://www.ljmu.ac.uk/GERI/90207.htm You do have to click through a restrictive license, though. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From rmay31 at gmail.com Wed Nov 26 17:51:11 2008 From: rmay31 at gmail.com (Ryan May) Date: Wed, 26 Nov 2008 16:51:11 -0600 Subject: [Numpy-discussion] More loadtxt() changes In-Reply-To: <88e473830811260752o21ef08acud49548849b7a50ee@mail.gmail.com> References: <88e473830811251126v127ef295g94dd24089779180f@mail.gmail.com> <492C5C84.2050104@gmail.com> <492C8576.70803@gmail.com> <8904DEFF-3F89-4A23-88EA-CD2217254842@gmail.com> <492CBC40.3010501@gmail.com> <492CDD4A.20304@gmail.com> <88e473830811260752o21ef08acud49548849b7a50ee@mail.gmail.com> Message-ID: <492DD2DF.3020006@gmail.com> John Hunter wrote: > On Tue, Nov 25, 2008 at 11:23 PM, Ryan May wrote: > >> Updated patch attached. This includes: >> * Updated docstring >> * New tests >> * Fixes for previous issues >> * Fixes to make new tests actually work >> >> I appreciate any and all feedback. > > I'm having trouble applying your patch, so I haven't tested yet, but > do you (and do you want to) handle a case like this:: > > from StringIO import StringIO > import matplotlib.mlab as mlab > f1 = StringIO("""\ > name age weight > John 23 145. > Harry 43 180.""") > > for line in f1: > print line.split(' ') > > > Ie, space delimited but using an irregular number of spaces? One > place this comes up a lot is when the output files are actually > fixed-width using spaces to line up the columns. One could count the > columns to figure out the fixed widths and work with that, but it is > much easier to simply assume space delimiting and handle the irregular > number of spaces assuming one or more spaces is the delimiter. In > csv2rec, we write a custom file object to handle this case. > > Apologies if you are already handling this and I missed it... I think line.split(None) handles this case, so *in theory* passing delimiter=None would do it. 
I *am* interested in this case, so I'll have to give it a try when I get a chance. (I sense this is the same case as Manuel just asked about.) Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma From rmay31 at gmail.com Wed Nov 26 17:55:47 2008 From: rmay31 at gmail.com (Ryan May) Date: Wed, 26 Nov 2008 16:55:47 -0600 Subject: [Numpy-discussion] More loadtxt() changes In-Reply-To: <492D56F8.5010807@astro.uni-bonn.de> References: <492D56F8.5010807@astro.uni-bonn.de> Message-ID: <492DD3F3.2000808@gmail.com> Manuel Metz wrote: > Ryan May wrote: >> 3) Better support for missing values. The docstring mentions a way of >> handling missing values by passing in a converter. The problem with this is >> that you have to pass in a converter for *every column* that will contain >> missing values. If you have a text file with 50 columns, writing this >> dictionary of converters seems like ugly and needless boilerplate. I'm >> unsure of how best to pass in both what values indicate missing values and >> what values to fill in their place. I'd love suggestions > > Hi Ryan, > this would be a great feature to have !!! Thanks for the support! > One question: I have a datafile in ASCII format that uses a fixed width > for each column. If no data if present, the space is left empty (see > second row). What is the default behavior of the StringConverter class > in this case? Does it ignore the empty entry by default? If so, what is > the value in the array in this case? Is it nan? > > Example file: > > 1| 123.4| -123.4| 00.0 > 2| | 234.7| 12.2 > I don't think this is so much anything to do with StringConverter, but more to do with how to split lines. Maybe we should add an option that, instead of simply specifying characters that delimit the fields, allows one to pass a custom function to split lines? That could either be done by overriding `delimiter` or by adding a new option like `splitter` I'll have to give that some thought. Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma From pgmdevlist at gmail.com Wed Nov 26 18:16:04 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 26 Nov 2008 18:16:04 -0500 Subject: [Numpy-discussion] More loadtxt() changes In-Reply-To: <492DD3F3.2000808@gmail.com> References: <492D56F8.5010807@astro.uni-bonn.de> <492DD3F3.2000808@gmail.com> Message-ID: <05281EC6-7792-4891-845C-4927B5643E2B@gmail.com> On Nov 26, 2008, at 5:55 PM, Ryan May wrote: > Manuel Metz wrote: >> Ryan May wrote: >>> 3) Better support for missing values. The docstring mentions a >>> way of >>> handling missing values by passing in a converter. The problem >>> with this is >>> that you have to pass in a converter for *every column* that will >>> contain >>> missing values. If you have a text file with 50 columns, writing >>> this >>> dictionary of converters seems like ugly and needless >>> boilerplate. I'm >>> unsure of how best to pass in both what values indicate missing >>> values and >>> what values to fill in their place. I'd love suggestions >> >> Hi Ryan, >> this would be a great feature to have !!! About missing values: * I don't think missing values should be supported in np.loadtxt. That should go into a specific np.ma.io.loadtxt function, a preview of which I posted earlier. I'll modify it taking Ryan's new function into account, and Chrisopher's suggestion (defining a dictionary {column name : missing values}. * StringConverter already defines some default filling values for each dtype. 
In np.ma.io.loadtxt, these values can be overwritten. Note that you
should also be able to define a filling value by specifying a converter
(think float(x or 0) for example).

* Missing values on space-separated fields are very tricky to handle:
take a line like "a,,,d". With a comma as separator, it's clear that
the 2nd and 3rd fields are missing. Now, imagine that commas are
actually spaces ("a   d"): 'd' is now seen as the 2nd field of a
2-field record, not as the 4th field of a 4-field record with 2 missing
values. I thought about it, and kicked it into touch.

* That said, there should be a way to deal with fixed-length fields,
probably by taking consecutive slices of the initial string. That way,
we should be able to keep track of missing data...

From ighalp at gmail.com  Wed Nov 26 21:38:33 2008
From: ighalp at gmail.com (igor Halperin)
Date: Wed, 26 Nov 2008 21:38:33 -0500
Subject: [Numpy-discussion] numpy errors when importing in Picalo
Message-ID:

Hi,

I get numpy errors after I install Picalo (www.picalo.org) on Mac OS X
10.4.11 Tiger. I have tried to import numpy in Picalo using the
instructions in the PicaloCookBook, p.101. I get this error message,
which I don't understand. Per the Picalo author (see below for his
reply to my email to the Picalo discussion forum), I try it here.

I use numpy v. 1.0.4 distributed with the Scipy superpack
(http://macinscience.org/?page_id=6).

Could anyone please help?

Thanks, and cheers
Igor

sys.path.append('/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/')
import numpy
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Applications/sage/local/lib/python2.5/site-packages/numpy/__init__.py", line 93, in <module>
  File "/Applications/sage/local/lib/python2.5/site-packages/numpy/add_newdocs.py", line 9, in <module>
  File "/Applications/sage/local/lib/python2.5/site-packages/numpy/lib/__init__.py", line 4, in <module>
  File "/Applications/sage/local/lib/python2.5/site-packages/numpy/lib/type_check.py", line 8, in <module>
  File "/Applications/sage/local/lib/python2.5/site-packages/numpy/core/__init__.py", line 5, in <module>
ImportError: dlopen(/Applications/sage/local/lib/python2.5/site-packages/numpy/core/multiarray.so, 2): Symbol not found: _PyUnicodeUCS4_FromUnicode
  Referenced from: /Applications/sage/local/lib/python2.5/site-packages/numpy/core/multiarray.so
  Expected in: dynamic lookup

Reply from Conan C. Albrecht (Nov 23):

You're doing everything right from my perspective. It looks like a
problem with NumPy. The stack trace goes to multiarray.so in their core
toolkit. I think you should hit their forums and see if they can help.

One idea is that Picalo uses unicode for all data values. Perhaps numpy
can't handle unicode?

From michael.abshoff at googlemail.com  Wed Nov 26 21:45:43 2008
From: michael.abshoff at googlemail.com (Michael Abshoff)
Date: Wed, 26 Nov 2008 18:45:43 -0800
Subject: [Numpy-discussion] numpy errors when importing in Picalo
In-Reply-To:
References:
Message-ID: <492E09D7.70306@gmail.com>

igor Halperin wrote:
> Hi,

Hi

> I get numpy errors after I install Picalo (www.picalo.org) on Mac OS X
> 10.4.11 Tiger. I have tried to import numpy in Picalo using the
> instructions in the PicaloCookBook, p.101. I get this error message,
> which I don't understand. Per the Picalo author (see below for his
> reply to my email to the Picalo discussion forum), I try it here.
>
> I use numpy v. 1.0.4 distributed with the Scipy superpack
> (http://macinscience.org/?page_id=6).
>
> Could anyone please help?

The problem is that numpy was built using a python that was built with
ucs4 (it is a unicode thing) while the python you run (I assume the
Apple one) is ucs2. To fix this either build your own numpy or get a
binary one that is ucs2, but I have no clue where one would get such a
thing.

> Thanks, and cheers
> Igor

Cheers,

Michael
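For anyone debugging this kind of mismatch, here is a quick way to check which flavour a given interpreter was built with (a minimal sketch; run it in both the Python that built numpy and the one importing it):

import sys
# 1114111 (0x10FFFF) on a UCS4 ("wide") build, 65535 (0xFFFF) on a UCS2 ("narrow") build
print sys.maxunicode

If the two interpreters print different numbers, you have exactly the mismatch described above.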
From millman at berkeley.edu  Wed Nov 26 23:15:12 2008
From: millman at berkeley.edu (Jarrod Millman)
Date: Wed, 26 Nov 2008 20:15:12 -0800
Subject: [Numpy-discussion] 2D phase unwrapping
In-Reply-To: <710F2847B0018641891D9A216027636029C346@ex3.envision.co.il>
References: <710F2847B0018641891D9A216027636029C346@ex3.envision.co.il>
Message-ID:

On Tue, Nov 25, 2008 at 11:17 PM, Nadav Horesh wrote:
> I read a presentation by GERI (http://www.ljmu.ac.uk/GERI) that their code is implemented in scipy, but I could not find it.

One of my colleagues has been using 2D and 3D phase unwrapping code
from Munther Gdeisat from GERI:
https://cirl.berkeley.edu/trac/browser/bic/trunk/recon-tools/src
https://cirl.berkeley.edu/trac/browser/bic/trunk/recon-tools/root/recon/punwrap

This code is very high quality and replicating it from scratch would be
a fairly daunting task. I was hoping to get this code integrated into
SciPy, but no one in my group has had time to do this.

Munther Gdeisat and I spoke on the phone and had an email exchange
about relicensing his code and integrating it into SciPy. Munther was
very interested in having this happen and had some discussions with the
Institute Director to get permission for relicensing the code. I have
appended our email exchange below. If anyone is interested in picking
this up and going through the effort of incorporating this code in
scipy I would be happy to help resolve any remaining licensing issues.
I also may be able to devote some programming resources to helping out, if someone else volunteers to do the majority of the work. Thanks, ---------- Forwarded message ---------- From: Gdeisat, Munther Date: Fri, Sep 28, 2007 at 1:07 PM Subject: RE: 3D phase unwrap To: Jarrod Millman Cc: Daniel Sheltraw , "Travis E. Oliphant" Dear Jarrod, On behalf of the General Engineering Research Institute (GERI), Liverpool John Moores University, UK, I am very happy to license our 2D and 3D phase unwrappers to use in your NumPy and SciPy libraries. I spoke with this matter with the director of our institute (GERI), prof. Burton, and he is also happy to license the code for both libraries mentioned above. But myself and Prof. Burton would like to stress on the following issues 1- We disclaims all responsibility for the use which is made of the Software. We further disclaim any liability for the outcomes arising from using the Software. 2-We are not obliged to update the software or give any support to the users of the software. We generally help researchers around the world but we are not obliged to do that. Following our phone call, you mentioned to me that you already have these two points mentioned in the license of both libraries. So, I can confirm you that you can include our software in your library. Yours Truly, Dr. Munther Gdeisat The General Engineering Research Institute (GERI) Liverpool John Moores University, UK ________________________________ From: millman.ucb at gmail.com on behalf of Jarrod Millman Sent: Fri 9/28/2007 9:54 PM To: Gdeisat, Munther Cc: Daniel Sheltraw; Travis E. Oliphant Subject: Re: 3D phase unwrap Hello Munther, It was good to speak to you on the phone. I am happy that you will be able to relicense your code for us. Here is the license we use: http://projects.scipy.org/scipy/scipy/browser/trunk/LICENSE.txt It should address all your concerns. Feel free to let me know if you have any questions about it. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ ________________________________ From: millman.ucb at gmail.com on behalf of Jarrod Millman Sent: Fri 9/28/2007 3:02 AM To: Gdeisat, Munther Cc: Daniel Sheltraw; Travis E. Oliphant Subject: Re: 3D phase unwrap On 9/26/07, Gdeisat, Munther wrote: > Firstly, I would like to than Daniel to bring us together. I am happy to include the 2D and 3D phase unwrappers in the NumPy/SciPy project. If you need any help regarding this matter such as documentation, I am happy to do so. Kind regards. Hello Munther, I am very excited about the possibility of getting your 2D and 3D phase unwrappers incorporated into SciPy (http://www.scipy.org/ ). Travis Oliphant (the main author of NumPy and a major contributor to SciPy) spoke about where your phase unwrapping coding would best fit, and we both agreed that they belong in SciPy. NumPy and SciPy are both part of the same technology stack. We try to keep NumPy as lean as possible leaving SciPy to provide a more comprehensive set of tools. Here is an article about NumPy/SciPy written by Travis from a recent special issue of IEEE's Computing in Science and Engineering, which was devoted to Python for scientific programming: http://www.computer.org/portal/cms_docs_cise/cise/2007/n3/10-20.pdf Anyway, I am the current release manager of SciPy and am eager to get your phase unwrappers incorporated ASAP. 
Phase unwrapping is currently missing from SciPy and Daniel has spoken very highly of your algorithms and code. The only potential issue I see involves the licensing. Both SciPy and NumPy are released under a revised BSD license. Your code appears to be owned by the Liverpool John Moores University and the licensing terms impose restrictions that prevent it from being incorporated in SciPy. If you could get the code relicensed with a revised BSD (or MIT) license, that would allow us to use your code. You would still be the author and would retain the copyright of your code. I would be happy to talk with you in more detail about these licensing issues and am very hopeful that you will be able to have the code relicensed. Please let me know if you have any questions. If you want to try and resolve these issues over the phone, my cellphone number is 510-851-0682. If you would like to speak with both Travis and me, we could try setting up a conference call using Skype. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From cournape at gmail.com Wed Nov 26 23:31:40 2008 From: cournape at gmail.com (David Cournapeau) Date: Thu, 27 Nov 2008 13:31:40 +0900 Subject: [Numpy-discussion] Problems building numpy on solaris 10 x86 In-Reply-To: References: <492CD073.30505@ar.media.kyoto-u.ac.jp> Message-ID: <5b8d13220811262031n35e3e12ekfc970db41e1ae080@mail.gmail.com> On Thu, Nov 27, 2008 at 1:16 AM, Peter Norton wrote: > > > On Tue, Nov 25, 2008 at 11:28 PM, David Cournapeau > wrote: >> >> Charles R Harris wrote: >> > >> > >> > What happens if you go the usual python setup.py {build,install} route? >> >> Won't go far since it does not handle sunperf. >> >> David > > > Even though the regular build process appears to complete, it seems to be > doing the wrong thing. It seems, for instance, that lapack_lite.so is being > built as an executable: > > nortonp at is6 11:14 ~ $ gnu file > /usr/local/python-2.5.1/lib/python2.5/site-packages/numpy/linalg/lapack_lite.so > /usr/local/python-2.5.1/lib/python2.5/site-packages/numpy/linalg/lapack_lite.so: > ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked > (uses shared libs), not stripped > ??? I think this is "expected" if python was built with one compiler and numpy with another (python with Forte and numpy with gcc). Distutils knows the options from python itself, wether it is optional in numscons (in theory, you can set it up to use python options or known configurations). I don't think you will have much hope with distutils, unless you are ready to add code by yourself (sunperf will be very difficult to support, though). The numscons error has nothing to do with solaris, the scons scripts should be there. Could you give me the full output of python setupscons.py scons ? 
David From michael.abshoff at googlemail.com Wed Nov 26 23:38:59 2008 From: michael.abshoff at googlemail.com (Michael Abshoff) Date: Wed, 26 Nov 2008 20:38:59 -0800 Subject: [Numpy-discussion] Problems building numpy on solaris 10 x86 In-Reply-To: <5b8d13220811262031n35e3e12ekfc970db41e1ae080@mail.gmail.com> References: <492CD073.30505@ar.media.kyoto-u.ac.jp> <5b8d13220811262031n35e3e12ekfc970db41e1ae080@mail.gmail.com> Message-ID: <492E2463.90104@gmail.com> David Cournapeau wrote: > On Thu, Nov 27, 2008 at 1:16 AM, Peter Norton > wrote: >> >> On Tue, Nov 25, 2008 at 11:28 PM, David Cournapeau >> wrote: >>> Charles R Harris wrote: >>>> >>>> What happens if you go the usual python setup.py {build,install} route? >>> Won't go far since it does not handle sunperf. >>> >>> David >> >> Even though the regular build process appears to complete, it seems to be >> doing the wrong thing. It seems, for instance, that lapack_lite.so is being >> built as an executable: >> >> nortonp at is6 11:14 ~ $ gnu file >> /usr/local/python-2.5.1/lib/python2.5/site-packages/numpy/linalg/lapack_lite.so >> /usr/local/python-2.5.1/lib/python2.5/site-packages/numpy/linalg/lapack_lite.so: >> ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked >> (uses shared libs), not stripped >> ??? Hi, > I think this is "expected" if python was built with one compiler and > numpy with another (python with Forte and numpy with gcc). Distutils > knows the options from python itself, wether it is optional in > numscons (in theory, you can set it up to use python options or known > configurations). Hmm, I have recently build numpy 1.2.1 on FreeBSD 7 and had trouble with lapacK_lite.so. The fix was to add a "-shared" flag. I needed the same fix for Cygwin. > I don't think you will have much hope with distutils, unless you are > ready to add code by yourself (sunperf will be very difficult to > support, though). Why? What do you think makes sunperf problematic? [Not that I want to do the work, just curious :)] > The numscons error has nothing to do with solaris, > the scons scripts should be there. Could you give me the full output > of python setupscons.py scons ? > > David Cheers, Michael > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > From sjtu_yh at yahoo.com Wed Nov 26 23:38:36 2008 From: sjtu_yh at yahoo.com (yunzhi cheng) Date: Wed, 26 Nov 2008 20:38:36 -0800 (PST) Subject: [Numpy-discussion] unsubscirpt Message-ID: <120071.39316.qm@web36203.mail.mud.yahoo.com> -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Wed Nov 26 23:47:42 2008 From: cournape at gmail.com (David Cournapeau) Date: Thu, 27 Nov 2008 13:47:42 +0900 Subject: [Numpy-discussion] Problems building numpy on solaris 10 x86 In-Reply-To: <492E2463.90104@gmail.com> References: <492CD073.30505@ar.media.kyoto-u.ac.jp> <5b8d13220811262031n35e3e12ekfc970db41e1ae080@mail.gmail.com> <492E2463.90104@gmail.com> Message-ID: <5b8d13220811262047q4697cdb6v3b7ed9ce54831abb@mail.gmail.com> On Thu, Nov 27, 2008 at 1:38 PM, Michael Abshoff wrote: > > Why? What do you think makes sunperf problematic? [Not that I want to do > the work, just curious :)] I *know* it will be difficult :) The problem of sunperf is that you cannot just link a few libraries to make it work, you need to use compiler specific options like -xlic_lib=sunperf + some compiler options like align and co. 
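For illustration, a compile/link line of that style with the Sun compilers looks something like the following (a sketch only; the file name is made up, and the exact option set, including -dalign, varies with the compiler version and comes from the sunperf documentation of that era):

$ cc -xO3 -dalign -xlic_lib=sunperf -o test_dsyev test_dsyev.c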
Worse, at least of the versions I tried, the option does not work for shared libraries (when using the -G option). So using it with gcc is complicated. The only reason why it works in numscons is because there is a workaround ala autoconf which links sunperf to a dummy main, and I added a small linker parser which parse the output of verbose link step to get the options dynamically: http://bazaar.launchpad.net/%7Edavid-ar/numpy.scons.support/0.9/annotate/314?file_id=misc.py-20080116113453-hssst2gc3fs30vre-1 In theory, it could be added with distutils. Not that I will do it myself either, though. David From pgmdevlist at gmail.com Thu Nov 27 00:27:19 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Thu, 27 Nov 2008 00:27:19 -0500 Subject: [Numpy-discussion] What happened to numpy-docs ? Message-ID: All, I'd like to update routines.ma.rst on the numpy/numpy-docs/trunk SVN, but the whole trunk seems to be MIA... Where has it gone ? How can I (where should I) commit changes ? Thx in advance. P. From robert.kern at gmail.com Thu Nov 27 00:32:02 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 26 Nov 2008 23:32:02 -0600 Subject: [Numpy-discussion] What happened to numpy-docs ? In-Reply-To: References: Message-ID: <3d375d730811262132w1acb3512reb8b34a0a0bfdc6a@mail.gmail.com> On Wed, Nov 26, 2008 at 23:27, Pierre GM wrote: > All, > I'd like to update routines.ma.rst on the numpy/numpy-docs/trunk SVN, > but the whole trunk seems to be MIA... Where has it gone ? How can I > (where should I) commit changes ? It got moved into the numpy trunk under docs/. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cournape at gmail.com Thu Nov 27 00:51:16 2008 From: cournape at gmail.com (David Cournapeau) Date: Thu, 27 Nov 2008 14:51:16 +0900 Subject: [Numpy-discussion] What happened to numpy-docs ? In-Reply-To: <3d375d730811262132w1acb3512reb8b34a0a0bfdc6a@mail.gmail.com> References: <3d375d730811262132w1acb3512reb8b34a0a0bfdc6a@mail.gmail.com> Message-ID: <5b8d13220811262151k75658c81g47c7311f02ae9c0c@mail.gmail.com> On Thu, Nov 27, 2008 at 2:32 PM, Robert Kern wrote: > On Wed, Nov 26, 2008 at 23:27, Pierre GM wrote: >> All, >> I'd like to update routines.ma.rst on the numpy/numpy-docs/trunk SVN, >> but the whole trunk seems to be MIA... Where has it gone ? How can I >> (where should I) commit changes ? > > It got moved into the numpy trunk under docs/. While we are speaking about the moved docs: is it decided how we will distribute it ? For now, it is not included in the generated tarball, but I was wondering how we should distribute it (before, it went into .../site-packages/numpy/doc). Distutils does not have the notion of an installed doc outside the package itself, right ? cheers, David From pgmdevlist at gmail.com Thu Nov 27 01:13:19 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Thu, 27 Nov 2008 01:13:19 -0500 Subject: [Numpy-discussion] What happened to numpy-docs ? In-Reply-To: <3d375d730811262132w1acb3512reb8b34a0a0bfdc6a@mail.gmail.com> References: <3d375d730811262132w1acb3512reb8b34a0a0bfdc6a@mail.gmail.com> Message-ID: On Nov 27, 2008, at 12:32 AM, Robert Kern wrote: > On Wed, Nov 26, 2008 at 23:27, Pierre GM wrote: >> All, >> I'd like to update routines.ma.rst on the numpy/numpy-docs/trunk SVN, >> but the whole trunk seems to be MIA... Where has it gone ? How can I >> (where should I) commit changes ? 
> > It got moved into the numpy trunk under docs/.

Duh... Guess I fell right at the time of the change. Robert, thx a lot!
Pauli, do you think you could put your numpyext in the doc/ directory
as well ?
Cheers,
P.

From nadavh at visionsense.com  Thu Nov 27 01:36:34 2008
From: nadavh at visionsense.com (Nadav Horesh)
Date: Thu, 27 Nov 2008 08:36:34 +0200
Subject: [Numpy-discussion] 2D phase unwrapping
References: <710F2847B0018641891D9A216027636029C346@ex3.envision.co.il>
Message-ID: <710F2847B0018641891D9A216027636029C34C@ex3.envision.co.il>

My problem is how to calculate the power flow (free-space Poynting
vector) given an image of a complex scalar electric field. This
requires calculating the *derivative* of the phase, and I think I found
a way to do it directly, bypassing phase unwrapping.

1. I may return to unwrapping if I'll have to. I downloaded the code
from GERI; it looks like pure C code, so it might be an easy task to
bind it to python.
2. Fabrice's problem involves a smooth image. It may be not too hard to
make it work. I'll try to code it in the next week, and post it here if
I succeed.
3. Does anyone know an existing python code to solve the problem
presented above?

I'll post my code here (when it'll be ready) if some of you are
interested.

   Nadav.

-----Original Message-----
From: numpy-discussion-bounces at scipy.org on behalf of Jarrod Millman
Sent: Thursday, 27 November 2008 06:15
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] 2D phase unwrapping

[clip]
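As an aside for readers of the archive: one standard identity for getting the phase derivative of a complex field without unwrapping -- possibly, though not necessarily, the approach Nadav alludes to -- is dphi = Im(dE/E) = Im(conj(E)*dE)/|E|^2. A minimal numpy sketch along those lines, with a made-up midpoint discretization:

import numpy as np

def dphase(E):
    # phase derivative along the last axis: dphi = Im(conj(E)*dE) / |E|^2
    dE = np.diff(E, axis=-1)                  # difference of the complex field itself
    Emid = 0.5 * (E[..., 1:] + E[..., :-1])   # field evaluated at midpoints
    return (np.conj(Emid) * dE).imag / np.abs(Emid) ** 2

No phase ever gets wrapped here because arctan2 is never taken; the identity works directly on the complex samples.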
From scott.sinclair.za at gmail.com  Thu Nov 27 01:39:32 2008
From: scott.sinclair.za at gmail.com (Scott Sinclair)
Date: Thu, 27 Nov 2008 08:39:32 +0200
Subject: [Numpy-discussion] What happened to numpy-docs ?
In-Reply-To:
References:
Message-ID: <6a17e9ee0811262239w7db554c8s30f0dd0ef361d7b2@mail.gmail.com>

2008/11/27 Pierre GM :
> I'd like to update routines.ma.rst on the numpy/numpy-docs/trunk SVN,
> but the whole trunk seems to be MIA... Where has it gone ? How can I
> (where should I) commit changes ?

Hi Pierre,

I've done a little bit of that at
http://docs.scipy.org/numpy/docs/numpy-docs/reference/routines.ma.rst

Which brings up the question of duplicating effort..

I have been under the impression that the documentation on the doc wiki
http://docs.scipy.org/numpy/Front%20Page/ immediately (or at least very
quickly) reflected changes in SVN and that changes to the docs in the
wiki need to be manually checked in to SVN. Admittedly I have no good
reason to make this assumption.

Looking at some recent changes made to docstrings in SVN by Pierre
(r6110 & r6111), these are not yet reflected in the doc wiki. I guess my
question is aimed at Pauli - How frequently does the doc wiki's version
of SVN get updated and is this automatic or does it require manual
intervention?

Thanks,
Scott

From pgmdevlist at gmail.com  Thu Nov 27 01:52:35 2008
From: pgmdevlist at gmail.com (Pierre GM)
Date: Thu, 27 Nov 2008 01:52:35 -0500
Subject: [Numpy-discussion] What happened to numpy-docs ?
In-Reply-To: <6a17e9ee0811262239w7db554c8s30f0dd0ef361d7b2@mail.gmail.com> References: <6a17e9ee0811262239w7db554c8s30f0dd0ef361d7b2@mail.gmail.com> Message-ID: On Nov 27, 2008, at 1:39 AM, Scott Sinclair wrote: > Looking at some recent changes made to docstrings in SVN by Pierre > (r6110 & r6111), these are not yet reflected in the doc wiki. > Well, I haven't committed my version yet. I'm polishing a couple of issues with functions that are not recognized as such by inspect (because they're actually instances of a factory class). From mmetz at astro.uni-bonn.de Thu Nov 27 03:08:41 2008 From: mmetz at astro.uni-bonn.de (Manuel Metz) Date: Thu, 27 Nov 2008 09:08:41 +0100 Subject: [Numpy-discussion] More loadtxt() changes In-Reply-To: <05281EC6-7792-4891-845C-4927B5643E2B@gmail.com> References: <492D56F8.5010807@astro.uni-bonn.de> <492DD3F3.2000808@gmail.com> <05281EC6-7792-4891-845C-4927B5643E2B@gmail.com> Message-ID: <492E5589.1090300@astro.uni-bonn.de> Pierre GM wrote: > On Nov 26, 2008, at 5:55 PM, Ryan May wrote: > >> Manuel Metz wrote: >>> Ryan May wrote: >>>> 3) Better support for missing values. The docstring mentions a >>>> way of >>>> handling missing values by passing in a converter. The problem >>>> with this is >>>> that you have to pass in a converter for *every column* that will >>>> contain >>>> missing values. If you have a text file with 50 columns, writing >>>> this >>>> dictionary of converters seems like ugly and needless >>>> boilerplate. I'm >>>> unsure of how best to pass in both what values indicate missing >>>> values and >>>> what values to fill in their place. I'd love suggestions >>> Hi Ryan, >>> this would be a great feature to have !!! > > About missing values: > > * I don't think missing values should be supported in np.loadtxt. That > should go into a specific np.ma.io.loadtxt function, a preview of > which I posted earlier. I'll modify it taking Ryan's new function into > account, and Chrisopher's suggestion (defining a dictionary {column > name : missing values}. > > * StringConverter already defines some default filling values for each > dtype. In np.ma.io.loadtxt, these values can be overwritten. Note > that you should also be able to define a filling value by specifying a > converter (think float(x or 0) for example) > > * Missing values on space-separated fields are very tricky to handle: > take a line like "a,,,d". With a comma as separator, it's clear that > the 2nd and 3rd fields are missing. > Now, imagine that commas are actually spaces ( "a d"): 'd' is now > seen as the 2nd field of a 2-field record, not as the 4th field of a 4- > field record with 2 missing values. I thought about it, and kicked in > touch > > * That said, there should be a way to deal with fixed-length fields, > probably by taking consecutive slices of the initial string. That way, > we should be able to keep track of missing data... Certainly, yes! Dealing with fixed-length fields would be necessary. The case I had in mind had both -- a separator ("|") __and__ fixed-length fields -- and is probably very special in that sense. But such data-files exists out there... 
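For what it's worth, a minimal sketch of the consecutive-slices idea (the column boundaries below are invented for a line shaped like the example file posted earlier in the thread):

def split_fixed(line, slices):
    # slice the raw line at fixed positions, so empty fields survive as ''
    return [line[s].strip() for s in slices]

slices = [slice(0, 1), slice(2, 8), slice(9, 15), slice(16, 21)]
print split_fixed("2|      | 234.7| 12.2", slices)
# -> ['2', '', '234.7', '12.2']   (the empty string flags the missing value)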
mm From pgmdevlist at gmail.com Thu Nov 27 03:18:24 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Thu, 27 Nov 2008 03:18:24 -0500 Subject: [Numpy-discussion] More loadtxt() changes In-Reply-To: <492E5589.1090300@astro.uni-bonn.de> References: <492D56F8.5010807@astro.uni-bonn.de> <492DD3F3.2000808@gmail.com> <05281EC6-7792-4891-845C-4927B5643E2B@gmail.com> <492E5589.1090300@astro.uni-bonn.de> Message-ID: On Nov 27, 2008, at 3:08 AM, Manuel Metz wrote: >> > > Certainly, yes! Dealing with fixed-length fields would be necessary. > The > case I had in mind had both -- a separator ("|") __and__ fixed-length > fields -- and is probably very special in that sense. But such > data-files exists out there... Well, if you have a non-space delimiter, it doesn't matter if the fields have a fixed length or not, does it? Each field is stripped anyway. The real issue is when the delimiter is ' '... I should be able to take care of that over the week-end (which started earlier today over here :) From nwagner at iam.uni-stuttgart.de Thu Nov 27 03:20:56 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 27 Nov 2008 09:20:56 +0100 Subject: [Numpy-discussion] More loadtxt() changes In-Reply-To: <492E5589.1090300@astro.uni-bonn.de> References: <492D56F8.5010807@astro.uni-bonn.de> <492DD3F3.2000808@gmail.com> <05281EC6-7792-4891-845C-4927B5643E2B@gmail.com> <492E5589.1090300@astro.uni-bonn.de> Message-ID: On Thu, 27 Nov 2008 09:08:41 +0100 Manuel Metz wrote: > Pierre GM wrote: >> On Nov 26, 2008, at 5:55 PM, Ryan May wrote: >> >>> Manuel Metz wrote: >>>> Ryan May wrote: >>>>> 3) Better support for missing values. The docstring >>>>>mentions a >>>>> way of >>>>> handling missing values by passing in a converter. The >>>>>problem >>>>> with this is >>>>> that you have to pass in a converter for *every column* >>>>>that will >>>>> contain >>>>> missing values. If you have a text file with 50 >>>>>columns, writing >>>>> this >>>>> dictionary of converters seems like ugly and needless >>>>> boilerplate. I'm >>>>> unsure of how best to pass in both what values indicate >>>>>missing >>>>> values and >>>>> what values to fill in their place. I'd love >>>>>suggestions >>>> Hi Ryan, >>>> this would be a great feature to have !!! >> >> About missing values: >> >> * I don't think missing values should be supported in >>np.loadtxt. That >> should go into a specific np.ma.io.loadtxt function, a >>preview of >> which I posted earlier. I'll modify it taking Ryan's new >>function into >> account, and Chrisopher's suggestion (defining a >>dictionary {column >> name : missing values}. >> >> * StringConverter already defines some default filling >>values for each >> dtype. In np.ma.io.loadtxt, these values can be >>overwritten. Note >> that you should also be able to define a filling value >>by specifying a >> converter (think float(x or 0) for example) >> >> * Missing values on space-separated fields are very >>tricky to handle: >> take a line like "a,,,d". With a comma as separator, >>it's clear that >> the 2nd and 3rd fields are missing. >> Now, imagine that commas are actually spaces ( "a >> d"): 'd' is now >> seen as the 2nd field of a 2-field record, not as the >>4th field of a 4- >> field record with 2 missing values. I thought about it, >>and kicked in >> touch >> >> * That said, there should be a way to deal with >>fixed-length fields, >> probably by taking consecutive slices of the initial >>string. That way, >> we should be able to keep track of missing data... > > Certainly, yes! 
Dealing with fixed-length fields would >be necessary. The > case I had in mind had both -- a separator ("|") __and__ >fixed-length > fields -- and is probably very special in that sense. >But such > data-files exists out there... > See page 9, 10 (Bulk data input deck) http://www.zonatech.com/Documentation/zndalusersmanual2.0.pdf Nils From ferrell at diablotech.com Thu Nov 27 10:14:03 2008 From: ferrell at diablotech.com (Robert Ferrell) Date: Thu, 27 Nov 2008 08:14:03 -0700 Subject: [Numpy-discussion] Masked array usage Message-ID: <3E8FC326-4DEE-4E4D-A476-7213F3F141BE@diablotech.com> I have a question about assigning to masked arrays. a is a len ==3 masked array, with 2 unmasked elements. b is a len == 2 array. I want to put the elements of b into the unmasked elements of a. How do I do that? In [598]: a Out[598]: masked_array(data = [1 -- 3], mask = [False True False], fill_value=999999) In [599]: b Out[599]: array([7, 8]) I'd like an operation that gives me: masked_array(data = [7 -- 8], mask = [False True False], fill_value=999999) Seems like it shouldn't be that hard, but I can't figure it out. Any suggestions? thanks, -robert From amcmorl at gmail.com Thu Nov 27 10:41:22 2008 From: amcmorl at gmail.com (Angus McMorland) Date: Thu, 27 Nov 2008 10:41:22 -0500 Subject: [Numpy-discussion] Masked array usage In-Reply-To: <3E8FC326-4DEE-4E4D-A476-7213F3F141BE@diablotech.com> References: <3E8FC326-4DEE-4E4D-A476-7213F3F141BE@diablotech.com> Message-ID: 2008/11/27 Robert Ferrell : > I have a question about assigning to masked arrays. a is a len ==3 > masked array, with 2 unmasked elements. b is a len == 2 array. I > want to put the elements of b into the unmasked elements of a. How do > I do that? > > In [598]: a > Out[598]: > masked_array(data = [1 -- 3], > mask = [False True False], > fill_value=999999) > > > In [599]: b > Out[599]: array([7, 8]) > > I'd like an operation that gives me: > > masked_array(data = [7 -- 8], > mask = [False True False], > fill_value=999999) > > Seems like it shouldn't be that hard, but I can't figure it out. Any > suggestions? How about: c = a.copy() c[~a.mask] = b Angus. -- AJC McMorland Post-doctoral research fellow Neurobiology, University of Pittsburgh From ferrell at diablotech.com Thu Nov 27 10:54:09 2008 From: ferrell at diablotech.com (Robert Ferrell) Date: Thu, 27 Nov 2008 08:54:09 -0700 Subject: [Numpy-discussion] Masked array usage In-Reply-To: References: <3E8FC326-4DEE-4E4D-A476-7213F3F141BE@diablotech.com> Message-ID: <85D271B6-9A62-4FEE-9BDD-450167864799@diablotech.com> Sweet. So simple. That works great. thanks, -robert On Nov 27, 2008, at 8:41 AM, Angus McMorland wrote: > 2008/11/27 Robert Ferrell : >> I have a question about assigning to masked arrays. a is a len ==3 >> masked array, with 2 unmasked elements. b is a len == 2 array. I >> want to put the elements of b into the unmasked elements of a. How >> do >> I do that? >> >> In [598]: a >> Out[598]: >> masked_array(data = [1 -- 3], >> mask = [False True False], >> fill_value=999999) >> >> >> In [599]: b >> Out[599]: array([7, 8]) >> >> I'd like an operation that gives me: >> >> masked_array(data = [7 -- 8], >> mask = [False True False], >> fill_value=999999) >> >> Seems like it shouldn't be that hard, but I can't figure it out. Any >> suggestions? > > How about: > > c = a.copy() > c[~a.mask] = b > > Angus. 
From uschmitt at mineway.de  Thu Nov 27 11:03:51 2008
From: uschmitt at mineway.de (Uwe Schmitt)
Date: Thu, 27 Nov 2008 17:03:51 +0100
Subject: [Numpy-discussion] split matrix
Message-ID: <492EC4E7.2000001@mineway.de>

Hi,

is there an effective way to remove a row with a given index from a
matrix?

Greetings, Uwe

-- 
Dr. rer. nat. Uwe Schmitt
F&E Mathematik

mineway GmbH
Science Park 2
D-66123 Saarbrücken

Telefon: +49 (0)681 8390 5334
Telefax: +49 (0)681 830 4376

uschmitt at mineway.de
www.mineway.de

Geschäftsführung: Dr.-Ing. Mathias Bauer
Amtsgericht Saarbrücken HRB 12339

From william at resolversystems.com  Thu Nov 27 12:28:58 2008
From: william at resolversystems.com (William Reade)
Date: Thu, 27 Nov 2008 17:28:58 +0000
Subject: [Numpy-discussion] Ironclad v0.7 released (NumPy on IronPython)
Message-ID: <492ED8DA.3090701@resolversystems.com>

Hi all

Hopefully someone here will be interested in this, and it won't be
considered too spammy... please let me know if this isn't welcome, and
I'll desist in future.

I'm delighted to announce the release of Ironclad v0.7, which is now
available from http://code.google.com/p/ironclad/downloads/list .
This release is a major step forward:

* Runs transparently on vanilla IronPython 2.0RC2, without creating
extra PythonEngines or breaking .NET namespace imports
* Many numpy 1.2 tests (from the core, fft, lib, linalg and random
subpackages) now reliably pass (run "ipy numpytests.py" from the build
directory)
* Significant performance improvements (by several orders of magnitude
in some places :D)

So... if you want to use numpy (or other C extension modules) with
IronPython on Win32, please download it and try it out; I'm very keen
to hear your experiences, and to know which neglected features will be
most useful to you.

Cheers
William

From oliphant at enthought.com  Thu Nov 27 13:15:46 2008
From: oliphant at enthought.com (Travis E. Oliphant)
Date: Thu, 27 Nov 2008 12:15:46 -0600
Subject: [Numpy-discussion] Ironclad v0.7 released (NumPy on IronPython)
In-Reply-To: <492ED8DA.3090701@resolversystems.com>
References: <492ED8DA.3090701@resolversystems.com>
Message-ID: <492EE3D2.2030306@enthought.com>

William Reade wrote:
> Hi all
>
> Hopefully someone here will be interested in this, and it won't be
> considered too spammy... please let me know if this isn't welcome, and
> I'll desist in future.

I welcome these announcements, so my opinion is that you continue.
Thanks for the work. It's great to see a path for running C extensions
on IronPython.

-Travis

From pav at iki.fi  Thu Nov 27 13:42:31 2008
From: pav at iki.fi (Pauli Virtanen)
Date: Thu, 27 Nov 2008 18:42:31 +0000 (UTC)
Subject: [Numpy-discussion] What happened to numpy-docs ?
References: <6a17e9ee0811262239w7db554c8s30f0dd0ef361d7b2@mail.gmail.com>
Message-ID:

Thu, 27 Nov 2008 08:39:32 +0200, Scott Sinclair wrote:
[clip]
> I have been under the impression that the documentation on the doc wiki
> http://docs.scipy.org/numpy/Front%20Page/ immediately (or at least very
> quickly) reflected changes in SVN and that changes to the docs in the
> wiki need to be manually checked in to SVN. Admittedly I have no good
> reason to make this assumption.
>
> Looking at some recent changes made to docstrings in SVN by Pierre
> (r6110 & r6111), these are not yet reflected in the doc wiki. I guess my
> question is aimed at Pauli - how frequently does the doc wiki's version
> of SVN get updated, and is this automatic or does it require manual
> intervention?

It's manual: somebody with admin privileges must go and click a button
to update it.

But there's no reason why it couldn't be automatic. It should be trivial
to rig up a cron job that runs whenever there are new revisions in SVN,
so let's put this on the todo list.

-- 
Pauli Virtanen

From pav at iki.fi  Thu Nov 27 13:58:38 2008
From: pav at iki.fi (Pauli Virtanen)
Date: Thu, 27 Nov 2008 18:58:38 +0000 (UTC)
Subject: [Numpy-discussion] What happened to numpy-docs ?
References: <3d375d730811262132w1acb3512reb8b34a0a0bfdc6a@mail.gmail.com>
Message-ID:

Thu, 27 Nov 2008 01:13:19 -0500, Pierre GM wrote:
[clip]
> Pauli, do you think you could put your numpyext in the doc/ directory
> as well ?

Yes, Numpy SVN would probably be a more natural place for the stuff.

-- 
Pauli Virtanen

From nwagner at iam.uni-stuttgart.de  Thu Nov 27 15:49:07 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Thu, 27 Nov 2008 21:49:07 +0100
Subject: [Numpy-discussion] split matrix
In-Reply-To: <492EC4E7.2000001@mineway.de>
References: <492EC4E7.2000001@mineway.de>
Message-ID:

On Thu, 27 Nov 2008 17:03:51 +0100 Uwe Schmitt wrote:
> Hi,
>
> is there an effective way to remove a row with a given index from a
> matrix?

>>> A = rand(10,5)
>>> A
array([[ 0.15976517, 0.29574162, 0.21537014, 0.69341324, 0.68713389],
       [ 0.28992634, 0.89714962, 0.90299203, 0.22203182, 0.57831945],
       [ 0.23814492, 0.09436163, 0.67062125, 0.85923647, 0.64548996],
       [ 0.83215097, 0.85178335, 0.49873409, 0.59021905, 0.94631569],
       [ 0.5494401 , 0.08831399, 0.54776161, 0.10043204, 0.88260609],
       [ 0.90951225, 0.40967777, 0.78577964, 0.17414472, 0.59568316],
       [ 0.97491997, 0.76869065, 0.88901626, 0.69693058, 0.73576195],
       [ 0.25971704, 0.67759869, 0.42972164, 0.15069627, 0.13269489],
       [ 0.50012917, 0.5866074 , 0.32205757, 0.3347558 , 0.02555147],
       [ 0.66448744, 0.14755343, 0.09963282, 0.22277848, 0.35620143]])
>>> ind
array([0, 0, 0, 0, 1, 0, 0, 0, 0, 0])
>>> A = A[ind==0,:]
>>> A
array([[ 0.15976517, 0.29574162, 0.21537014, 0.69341324, 0.68713389],
       [ 0.28992634, 0.89714962, 0.90299203, 0.22203182, 0.57831945],
       [ 0.23814492, 0.09436163, 0.67062125, 0.85923647, 0.64548996],
       [ 0.83215097, 0.85178335, 0.49873409, 0.59021905, 0.94631569],
       [ 0.90951225, 0.40967777, 0.78577964, 0.17414472, 0.59568316],
       [ 0.97491997, 0.76869065, 0.88901626, 0.69693058, 0.73576195],
       [ 0.25971704, 0.67759869, 0.42972164, 0.15069627, 0.13269489],
       [ 0.50012917, 0.5866074 , 0.32205757, 0.3347558 , 0.02555147],
       [ 0.66448744, 0.14755343, 0.09963282, 0.22277848, 0.35620143]])
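Alternatively, if your numpy is recent enough to have delete() you can
drop the row in one step, without building the boolean ind vector
(untested here, so consider it a sketch):

>>> from numpy import delete
>>> A = delete(A, 4, axis=0)   # remove the row with index 4 from the original A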
Nils

From scott.sinclair.za at gmail.com  Fri Nov 28 00:45:22 2008
From: scott.sinclair.za at gmail.com (Scott Sinclair)
Date: Fri, 28 Nov 2008 07:45:22 +0200
Subject: [Numpy-discussion] What happened to numpy-docs ?
In-Reply-To:
References: <6a17e9ee0811262239w7db554c8s30f0dd0ef361d7b2@mail.gmail.com>
Message-ID: <6a17e9ee0811272145j50f6705akc37dcad338869b4b@mail.gmail.com>

2008/11/27 Pauli Virtanen :
> Thu, 27 Nov 2008 08:39:32 +0200, Scott Sinclair wrote:
> [clip]
>> I have been under the impression that the documentation on the doc wiki
>> http://docs.scipy.org/numpy/Front%20Page/ immediately (or at least very
>> quickly) reflected changes in SVN and that changes to the docs in the
>> wiki need to be manually checked in to SVN. Admittedly I have no good
>> reason to make this assumption.
>
> It's manual: somebody with admin privileges must go and click a button
> to update it.
>
> But there's no reason why it couldn't be automatic. It should be trivial
> to rig up a cron job that runs whenever there are new revisions in SVN,
> so let's put this on the todo list.

I think this is a sensible goal; people editing in the wiki may not be
aware of what's happening in SVN.

Nice to see that the Scipy docs are now available as well!

Cheers,
Scott

From charlesr.harris at gmail.com  Fri Nov 28 00:53:36 2008
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Thu, 27 Nov 2008 22:53:36 -0700
Subject: [Numpy-discussion] Name changes and suggested file name change for Pauli.
Message-ID:

Hi All,

I'm thinking of changing the names of fmax and fmin to fmaximum and
fminimum so that fmax and fmin can play the roles corresponding to max
and min.

Should I add the names atanh, asinh, and acosh as aliases for arctanh,
arcsinh, and arccosh? The vote looked pretty evenly split. If we add
them, I suggest we merely add a note to the documentation of the old
functions suggesting use of the new names to conform to general
practice. A while ago I added deg2rad and rad2deg as aliases for
radians and degrees respectively, so this can be seen as more of the
same.

Pauli, can you change the name of code_generators/docstrings to
something more descriptive? I think ufunc_docstrings would be a bit
clearer. I expect this requires various fixups here and there, so I'm
tossing the problem over to you.

Chuck

From tjhnson at gmail.com  Fri Nov 28 04:50:17 2008
From: tjhnson at gmail.com (T J)
Date: Fri, 28 Nov 2008 01:50:17 -0800
Subject: [Numpy-discussion] Shape (z,0)
Message-ID:

>>> import numpy as np
>>> x = np.ones((3,0))
>>> x
array([], shape=(3, 0), dtype=float64)

To preempt, I'm not really concerned with the answer to: Why would
anyone want to do this?

I just want to know what is happening. Especially, with

>>> x[0,:] = 5

(which works). It seems that nothing is really happening here... given
that, why is it allowed? I.e., are there reasons for not requiring the
shape dimensions to be greater than 0?

From aarchiba at physics.mcgill.ca  Fri Nov 28 05:25:32 2008
From: aarchiba at physics.mcgill.ca (Anne Archibald)
Date: Fri, 28 Nov 2008 05:25:32 -0500
Subject: [Numpy-discussion] Shape (z,0)
In-Reply-To:
References:
Message-ID:

2008/11/28 T J :
>>>> import numpy as np
>>>> x = np.ones((3,0))
>>>> x
> array([], shape=(3, 0), dtype=float64)
>
> To preempt, I'm not really concerned with the answer to: Why would
> anyone want to do this?
>
> I just want to know what is happening. Especially, with
>
>>>> x[0,:] = 5
>
> (which works). It seems that nothing is really happening here... given
> that, why is it allowed? I.e., are there reasons for not requiring the
> shape dimensions to be greater than 0?
So that scripts can work transparently with arrays of all sizes:

In [1]: import numpy as np

In [3]: a = np.random.randn(5); b = a[a>1]; print b.shape
(1,)

In [4]: a = np.random.randn(5); b = a[a>1]; print b.shape
(1,)

In [5]: a = np.random.randn(5); b = a[a>1]; print b.shape
(0,)

In [10]: b[:] = b[:]-1

The ":" just means "all", so it's fine to use it if there aren't any
("all of none").

Basically, if this were not permitted there would be a zillion little
corner cases that code trying to be generic would have to deal with.

There is a similar issue with zero-dimensional arrays for code that is
trying to be generic for number of dimensions. That is, you want to be
able to do something like:

In [12]: a = a - np.mean(a, axis=-1)[..., np.newaxis]

and have it work whether a is an n-dimensional array, in which case
np.mean(a, axis=-1) is (n-1)-dimensional, or a is a one-dimensional
array, in which case np.mean(a, axis=-1) is a zero-dimensional array,
or maybe a scalar:

In [13]: np.mean(a)
Out[13]: 0.0

In [14]: 0.0[..., np.newaxis]
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
/homes/janeway/aarchiba/<ipython console> in <module>()
TypeError: 'float' object is unsubscriptable

In [15]: np.mean(a)[..., np.newaxis]
Out[15]: array([ 0.])

Normalizing this stuff is still ongoing; it's tricky, because you often
want something like np.mean(a) to be just a number, but generic code
wants it to behave like a zero-dimensional array. Currently numpy
supports both "array scalars", that is, numbers of array dtypes, and
zero-dimensional arrays; they behave mostly alike, but there are a few
inconsistencies (and it's arguably redundant to have both).

That said, it is often far easier to write generic code by flattening
the input array, dealing with it as a guaranteed-one-dimensional array,
then reconstructing the correct shape at the end, but such code is kind
of a pain to write.

Anne

From mmetz at astro.uni-bonn.de  Fri Nov 28 05:42:24 2008
From: mmetz at astro.uni-bonn.de (Manuel Metz)
Date: Fri, 28 Nov 2008 11:42:24 +0100
Subject: [Numpy-discussion] More loadtxt() changes
In-Reply-To:
References: <492D56F8.5010807@astro.uni-bonn.de> <492DD3F3.2000808@gmail.com> <05281EC6-7792-4891-845C-4927B5643E2B@gmail.com> <492E5589.1090300@astro.uni-bonn.de>
Message-ID: <492FCB10.8050404@astro.uni-bonn.de>

Pierre GM wrote:
> On Nov 27, 2008, at 3:08 AM, Manuel Metz wrote:
>> Certainly, yes! Dealing with fixed-length fields would be necessary.
>> The case I had in mind had both -- a separator ("|") __and__
>> fixed-length fields -- and is probably very special in that sense.
>> But such data files exist out there...
>
> Well, if you have a non-space delimiter, it doesn't matter if the
> fields have a fixed length or not, does it? Each field is stripped
> anyway.

Yes. It would already be _very_ helpful (without changing loadtxt too
much) if the current implementation used a converter like this

def fval(val):
    try:
        return float(val)
    except ValueError:
        return numpy.nan

instead of float(val) by default.
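In the meantime such a converter can of course be passed in explicitly
through the converters argument -- an untested sketch, with a made-up
file name and column indices:

import numpy as np

def fval(val):
    try:
        return float(val)
    except ValueError:
        return np.nan

# columns 0 and 1 may contain missing entries
data = np.loadtxt('data.txt', delimiter=',', converters={0: fval, 1: fval})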
mm

> The real issue is when the delimiter is ' '... I should be able to
> take care of that over the week-end (which started earlier today over
> here :)

From pgmdevlist at gmail.com  Fri Nov 28 14:07:40 2008
From: pgmdevlist at gmail.com (Pierre GM)
Date: Fri, 28 Nov 2008 14:07:40 -0500
Subject: [Numpy-discussion] More loadtxt() changes
In-Reply-To: <492FCB10.8050404@astro.uni-bonn.de>
References: <492D56F8.5010807@astro.uni-bonn.de> <492DD3F3.2000808@gmail.com> <05281EC6-7792-4891-845C-4927B5643E2B@gmail.com> <492E5589.1090300@astro.uni-bonn.de> <492FCB10.8050404@astro.uni-bonn.de>
Message-ID:

Manuel,
Give me the week-end to come up with something. What you want is
already doable with the current implementation of np.loadtxt, through
the converters keyword. Support for missing data will be covered in a
separate function, most likely to be put in numpy.ma.io eventually.

On Nov 28, 2008, at 5:42 AM, Manuel Metz wrote:
> Pierre GM wrote:
>> On Nov 27, 2008, at 3:08 AM, Manuel Metz wrote:
>>> Certainly, yes! Dealing with fixed-length fields would be necessary.
>>> The case I had in mind had both -- a separator ("|") __and__
>>> fixed-length fields -- and is probably very special in that sense.
>>> But such data files exist out there...
>>
>> Well, if you have a non-space delimiter, it doesn't matter if the
>> fields have a fixed length or not, does it? Each field is stripped
>> anyway.
>
> Yes. It would already be _very_ helpful (without changing loadtxt too
> much) if the current implementation used a converter like this
>
> def fval(val):
>     try:
>         return float(val)
>     except ValueError:
>         return numpy.nan
>
> instead of float(val) by default.
>
> mm
>
>> The real issue is when the delimiter is ' '... I should be able to
>> take care of that over the week-end (which started earlier today over
>> here :)

From zachary.pincus at yale.edu  Fri Nov 28 15:35:23 2008
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Fri, 28 Nov 2008 15:35:23 -0500
Subject: [Numpy-discussion] Compiler options for mingw?
Message-ID: <96BABBCF-EF9D-4AF7-8BE4-03685EB080B2@yale.edu>

Hi all,

I'm curious about how to control compiler options for mingw builds of
numpy on windows... Specifically, I want to build binaries without SSE
support, so that they can run on older hardware.

Setting a CFLAGS variable on the command line doesn't appear to do
anything, but perhaps appearances are deceiving.

Thanks for any suggestions -- I've googled fruitlessly for a while.

Zach

From cournape at gmail.com  Fri Nov 28 16:02:01 2008
From: cournape at gmail.com (David Cournapeau)
Date: Sat, 29 Nov 2008 06:02:01 +0900
Subject: [Numpy-discussion] Compiler options for mingw?
In-Reply-To: <96BABBCF-EF9D-4AF7-8BE4-03685EB080B2@yale.edu>
References: <96BABBCF-EF9D-4AF7-8BE4-03685EB080B2@yale.edu>
Message-ID: <5b8d13220811281302n756a3b95ka1c6e7287cb23ae0@mail.gmail.com>

On Sat, Nov 29, 2008 at 5:35 AM, Zachary Pincus wrote:
> Hi all,
>
> I'm curious about how to control compiler options for mingw builds of
> numpy on windows...
> Specifically, I want to build binaries without SSE
> support, so that they can run on older hardware.

The windows binaries of numpy can run on machines without SSE support.
If for some reason you want to build it by yourself, you just need a
BLAS/LAPACK without SSE - assuming you want BLAS/LAPACK at all. Note
that, depending on your intent, all the scripts to generate the full
binary (which installs the most optimized binary depending on the
detected arch) are in svn too, if that's something you want to do.

I have recently removed the arch-specific optimizations in
numpy.distutils, so this should not be a problem either.

David

From zachary.pincus at yale.edu  Fri Nov 28 17:24:19 2008
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Fri, 28 Nov 2008 17:24:19 -0500
Subject: [Numpy-discussion] Compiler options for mingw?
In-Reply-To: <5b8d13220811281302n756a3b95ka1c6e7287cb23ae0@mail.gmail.com>
References: <96BABBCF-EF9D-4AF7-8BE4-03685EB080B2@yale.edu> <5b8d13220811281302n756a3b95ka1c6e7287cb23ae0@mail.gmail.com>
Message-ID: <3AE60785-BEC5-4AC8-A914-E63A940225A9@yale.edu>

>> I'm curious about how to control compiler options for mingw builds of
>> numpy on windows... Specifically, I want to build binaries without SSE
>> support, so that they can run on older hardware.
>
> The windows binaries of numpy can run on machines without SSE support.
> If for some reason you want to build it by yourself, you just need a
> BLAS/LAPACK without SSE - assuming you want BLAS/LAPACK. Note that
> depending on your intent, all the scripts to generate the full binary
> (which installs the most optimized binary depending on the detected
> arch) are in svn too, if that's something you want to do.
>
> I have recently removed the arch-specific optimizations in
> numpy.distutils, so this should not be a problem either.

Thanks for the information, David!

Regarding the windows binary installer, it appears that it selects the
optimized/unoptimized binary at *install time*, which causes
complications for further bundling of numpy with e.g. py2exe for use
on other machines. (E.g. my dev machine has SSE3, and it appears that
SSE3-optimized binaries get installed on it, which then causes
crashing when I bundle up a numpy script with py2exe and run it on an
older box.) Is this in fact the case? If so, is there any easy way to
force the installer to just use the basic unoptimized configuration?
That would be the best...

On the other hand, if I'm using an SSE-free BLAS/LAPACK or none at all,
there'll be no SSE optimization done? I understand that gcc4, and thus
mingw derived from that version, will automatically try to use SSE
instructions where possible if not specifically disabled, which is
what induced my original question. So, to be certain that gcc isn't
introducing SSE instructions under the covers, I would still like to
know if there's a way to pass compiler flags to the build stage of
numpy. On UNIX, CFLAGS seems to do the trick, but on windows with
mingw the flags don't seem to be recognized... (E.g. setting CFLAGS to
'-mfoo' causes an invalid option error on OS X, but not with windows.)
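For concreteness, this is the sort of invocation I have been attempting
-- whether numpy.distutils actually honors these variables on windows
is exactly what I am unsure about:

REM Windows cmd shell, mingw build; the gcc flags disable SSE codegen
set CFLAGS=-mno-sse -mno-sse2 -march=i586
python setup.py build --compiler=mingw32

and the UNIX analogue that does seem to work:

CFLAGS="-mno-sse -mno-sse2" python setup.py build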
Thanks again,
Zach

From simpson at math.toronto.edu  Fri Nov 28 17:38:01 2008
From: simpson at math.toronto.edu (Gideon Simpson)
Date: Fri, 28 Nov 2008 17:38:01 -0500
Subject: [Numpy-discussion] os x, intel compilers & mkl, and fink python
Message-ID: <10D66598-1DD4-46D9-BC84-5998E06C01F5@math.toronto.edu>

Has anyone gotten the combination of OS X with a fink python
distribution to successfully build numpy/scipy with the intel
compilers and the mkl? If so, how'd you do it?

-gideon

From stefan at sun.ac.za  Sat Nov 29 09:52:49 2008
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Sat, 29 Nov 2008 16:52:49 +0200
Subject: [Numpy-discussion] numpy.loadtxt requires seek()?
In-Reply-To: <4925C169.4090702@gmail.com>
References: <4925B8FE.7080708@gmail.com> <9457e7c80811201141h26d9fd45j23e0b93f077111e3@mail.gmail.com> <4925C169.4090702@gmail.com>
Message-ID: <9457e7c80811290652r3f21386bjb6d23bb77235cd76@mail.gmail.com>

2008/11/20 Ryan May :
> I've attached a simple patch that changes the check for seek() to a
> check for readline(). I'll punt on my idea of just using iterators,
> since that seems like slightly greater complexity for no gain. (I'm not
> sure how many people end up with data in a list of strings and wish
> they could pass that to loadtxt.)
>
> While you're at it, would you commit my patch to add support for
> bzipped files as well (attached)?

Thanks, applied.

Stéfan

From silva at lma.cnrs-mrs.fr  Sat Nov 29 10:19:46 2008
From: silva at lma.cnrs-mrs.fr (Fabrice Silva)
Date: Sat, 29 Nov 2008 16:19:46 +0100
Subject: [Numpy-discussion] 2D phase unwrapping
In-Reply-To:
References: <710F2847B0018641891D9A216027636029C346@ex3.envision.co.il>
Message-ID: <1227971987.3167.5.camel@Portable-s2m.cnrs-mrs.fr>

On Wednesday 26 November 2008 at 20:15 -0800, Jarrod Millman wrote:
> One of my colleagues has been using 2D and 3D phase unwrapping code
> from Munther Gdeisat from GERI:
> https://cirl.berkeley.edu/trac/browser/bic/trunk/recon-tools/src
> https://cirl.berkeley.edu/trac/browser/bic/trunk/recon-tools/root/recon/punwrap
> This code is very high quality and replicating it from scratch would
> be a fairly daunting task. [...] If anyone is interested in picking
> this up and going through the effort of incorporating this code in
> scipy I would be happy to help resolve any remaining licensing issues.
> I also may be able to devote some programming resources to helping
> out, if someone else volunteers to do the majority of the work.

So what is expected now? What has to be done in order to include it in
scipy?

-- 
Fabrice Silva

From silva at lma.cnrs-mrs.fr  Sat Nov 29 10:44:28 2008
From: silva at lma.cnrs-mrs.fr (Fabrice Silva)
Date: Sat, 29 Nov 2008 16:44:28 +0100
Subject: [Numpy-discussion] error importing a f2py compiled module.
In-Reply-To: <1214206686.3133.14.camel@Portable-s2m.cnrs-mrs.fr>
References: <1214206686.3133.14.camel@Portable-s2m.cnrs-mrs.fr>
Message-ID: <1227973468.3167.11.camel@Portable-s2m.cnrs-mrs.fr>

Hi all,
I am facing this old problem again:

Fabrice Silva wrote:
> Dear all
> I've tried to run f2py on a fortran file which used to be usable from
> python some months ago.
> The following command line was applied with success (no errors raised):
> f2py -m modulename -c --f90exec=gnu95 tmpo.f
>
> When importing in Python with "import modulename", I have an
> ImportError:
>     Traceback (most recent call last):
>       File "Solveur.py", line 44, in
>         import modulename as Modele
>     ImportError: modulename.so: failed to map segment from shared
>     object: Operation not permitted

A way of solving this issue was to move the shared object file to
another directory. But I want to figure out what is happening exactly.
Googling a lot indicates that selinux would be the cause of this
issue... Has anyone a suggestion?

-- 
Fabrice Silva
LMA UPR CNRS 7051 - Équipe S2M

From nadavh at visionsense.com  Sat Nov 29 14:12:51 2008
From: nadavh at visionsense.com (Nadav Horesh)
Date: Sat, 29 Nov 2008 21:12:51 +0200
Subject: [Numpy-discussion] 2D phase unwrapping
References: <710F2847B0018641891D9A216027636029C346@ex3.envision.co.il> <1227971987.3167.5.camel@Portable-s2m.cnrs-mrs.fr>
Message-ID: <710F2847B0018641891D9A216027636029C355@ex3.envision.co.il>

From my point of view: if I'll need it, I'll produce a python binding
to GERI's code. My schedule is too tight to start this project if not
strictly necessary. I have an idea of a simple code that might work for
smooth images, that I might try in a few days. I am working on a code
to calculate the derivative of the phase avoiding the unwrap. I'll
release the code on request under a BSD licence.

Nadav.
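P.S. The no-unwrap idea in its simplest 1-D form, as an untested
sketch (for an image, the same difference is applied along each axis,
and the wrapped increments can be integrated with cumsum if the phase
itself is wanted):

import numpy as np

def wrapped_phase_increments(z):
    # z: complex-valued samples. The angle of z[n+1] * conj(z[n]) is
    # the phase step already reduced to (-pi, pi], so no unwrapping
    # pass over the whole signal is needed.
    return np.angle(z[1:] * np.conj(z[:-1]))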
-----Original Message-----
From: numpy-discussion-bounces at scipy.org on behalf of Fabrice Silva
Sent: Sat 29-Nov-08 17:19
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] 2D phase unwrapping

On Wednesday 26 November 2008 at 20:15 -0800, Jarrod Millman wrote:
> One of my colleagues has been using 2D and 3D phase unwrapping code
> from Munther Gdeisat from GERI:
> https://cirl.berkeley.edu/trac/browser/bic/trunk/recon-tools/src
> https://cirl.berkeley.edu/trac/browser/bic/trunk/recon-tools/root/recon/punwrap
> This code is very high quality and replicating it from scratch would
> be a fairly daunting task. [...] If anyone is interested in picking
> this up and going through the effort of incorporating this code in
> scipy I would be happy to help resolve any remaining licensing issues.
> I also may be able to devote some programming resources to helping
> out, if someone else volunteers to do the majority of the work.

So what is expected now? What has to be done in order to include it in
scipy?

-- 
Fabrice Silva

From silva at lma.cnrs-mrs.fr  Sat Nov 29 15:57:27 2008
From: silva at lma.cnrs-mrs.fr (Fabrice Silva)
Date: Sat, 29 Nov 2008 21:57:27 +0100
Subject: [Numpy-discussion] 2D phase unwrapping
In-Reply-To: <710F2847B0018641891D9A216027636029C355@ex3.envision.co.il>
References: <710F2847B0018641891D9A216027636029C346@ex3.envision.co.il> <1227971987.3167.5.camel@Portable-s2m.cnrs-mrs.fr> <710F2847B0018641891D9A216027636029C355@ex3.envision.co.il>
Message-ID: <1227992248.3167.16.camel@Portable-s2m.cnrs-mrs.fr>

On Saturday 29 November 2008 at 21:12 +0200, Nadav Horesh wrote:
> From my point of view: if I'll need it, I'll produce a python binding
> to GERI's code. My schedule is too tight to start this project if not
> strictly necessary. I have an idea of a simple code that might work
> for smooth images, that I might try in a few days. I am working on a
> code to calculate the derivative of the phase avoiding the unwrap.
> I'll release the code on request under a BSD licence.

Did you look at the punwrap code (links provided by Jarrod)?
> > https://cirl.berkeley.edu/trac/browser/bic/trunk/recon-tools/src
> > https://cirl.berkeley.edu/trac/browser/bic/trunk/recon-tools/root/recon/punwrap
There are already python bindings to 2d and 3d unwrap code. I was
asking what has to be done to include that in scipy or a scikit...
Having a tight schedule too, it seems to me that this task would take
less time than writing a binding to GERI's code.

-- 
Fabrice Silva
LMA UPR CNRS 7051 - Équipe S2M

From david at ar.media.kyoto-u.ac.jp  Sun Nov 30 00:47:39 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Sun, 30 Nov 2008 14:47:39 +0900
Subject: [Numpy-discussion] error importing a f2py compiled module.
In-Reply-To: <1227973468.3167.11.camel@Portable-s2m.cnrs-mrs.fr>
References: <1214206686.3133.14.camel@Portable-s2m.cnrs-mrs.fr> <1227973468.3167.11.camel@Portable-s2m.cnrs-mrs.fr>
Message-ID: <493228FB.7090801@ar.media.kyoto-u.ac.jp>

Fabrice Silva wrote:
> A way of solving this issue was to move the shared object file to
> another directory. But I want to figure out what is happening exactly.
> Googling a lot indicates that selinux would be the cause of this
> issue... Has anyone a suggestion?

Disabling selinux?

David

From david at ar.media.kyoto-u.ac.jp  Sun Nov 30 01:02:20 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Sun, 30 Nov 2008 15:02:20 +0900
Subject: [Numpy-discussion] Compiler options for mingw?
In-Reply-To: <3AE60785-BEC5-4AC8-A914-E63A940225A9@yale.edu>
References: <96BABBCF-EF9D-4AF7-8BE4-03685EB080B2@yale.edu> <5b8d13220811281302n756a3b95ka1c6e7287cb23ae0@mail.gmail.com> <3AE60785-BEC5-4AC8-A914-E63A940225A9@yale.edu>
Message-ID: <49322C6C.6070400@ar.media.kyoto-u.ac.jp>

Zachary Pincus wrote:
> Regarding the windows binary installer, it appears that it selects the
> optimized/unoptimized binary at *install time*, which causes
> complications for further bundling of numpy with e.g. py2exe for use
> on other machines. (E.g. my dev machine has SSE3, and it appears that
> SSE3-optimized binaries get installed on it, which then causes
> crashing when I bundle up a numpy script with py2exe and run it on an
> older box.) Is this in fact the case?

Yes

> If so, is there any easy way to
> force the installer to just use the basic unoptimized configuration?

Not at the moment, but you can easily decompress the .exe content to
get the internal .exe (which are straight installers built by python
setup.py bdist_wininst). It should be possible to force an architecture
at install time using a command line option, but I don't have the time
ATM to support this. If you are willing to add the option, the place to
start is the template .nsi file:
http://projects.scipy.org/scipy/numpy/browser/trunk/tools/win32build/nsis_scripts/numpy-superinstaller.nsi.in
The documentation is on the NSIS webpage:
http://nsis.sourceforge.net/Main_Page

> On the other hand, if I'm using an SSE-free BLAS/LAPACK or none at all,
> there'll be no SSE optimization done?
> I understand that gcc4, and thus
> mingw derived from that version, will automatically try to use SSE
> instructions where possible if not specifically disabled, which is
> what induced my original question.

First, mingw uses gcc3, unless you use a custom build or an alpha
release. Also, I am pretty sure that by default, gcc does NOT use any
SSE optimization; you have to request it.

> So, to be certain that gcc isn't
> introducing SSE instructions under the covers, I would still like to
> know if there's a way to pass compiler flags to the build stage of
> numpy. On UNIX, CFLAGS seems to do the trick, but on windows with
> mingw the flags don't seem to be recognized... (E.g. setting CFLAGS to
> '-mfoo' causes an invalid option error on OS X, but not with windows.)

This may be due to the 'shell' in windows, I don't know. Generally,
using CFLAGS is not reliable with numpy.distutils. But again, in numpy
trunk no arch-specific flag is added by numpy.distutils anymore, so you
should not need it.

David

From silva at lma.cnrs-mrs.fr  Sun Nov 30 06:48:40 2008
From: silva at lma.cnrs-mrs.fr (Fabrice Silva)
Date: Sun, 30 Nov 2008 12:48:40 +0100
Subject: [Numpy-discussion] error importing a f2py compiled module.
In-Reply-To: <493228FB.7090801@ar.media.kyoto-u.ac.jp>
References: <1214206686.3133.14.camel@Portable-s2m.cnrs-mrs.fr> <1227973468.3167.11.camel@Portable-s2m.cnrs-mrs.fr> <493228FB.7090801@ar.media.kyoto-u.ac.jp>
Message-ID: <1228045720.12356.7.camel@Portable-s2m.cnrs-mrs.fr>

On Sunday 30 November 2008 at 14:47 +0900, David Cournapeau wrote:
> Fabrice Silva wrote:
>> A way of solving this issue was to move the shared object file to
>> another directory. But I want to figure out what is happening exactly.
>> Googling a lot indicates that selinux would be the cause of this
>> issue... Has anyone a suggestion?
> Disabling selinux?

It might be an acceptable answer, but I found another: using
f2py-compiled modules seems not to work on ext3 file-systems mounted
without the exec option. I only needed to add the 'exec' option in
/etc/fstab (debian).
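For reference, the relevant line now looks something like this --
device, mount point and the remaining options are of course
machine-specific:

/dev/sda3  /home  ext3  defaults,exec  0  2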
-- 
Fabrice Silva
LMA UPR CNRS 7051 - Équipe S2M

From dsdale24 at gmail.com  Sun Nov 30 17:00:10 2008
From: dsdale24 at gmail.com (Darren Dale)
Date: Sun, 30 Nov 2008 17:00:10 -0500
Subject: [Numpy-discussion] ANNOUNCE: EPD with Py2.5 version 4.0.30002 RC2 available for testing
In-Reply-To: <492DC9B0.1030300@gmail.com>
References: <492D8FD3.8050601@enthought.com> <492DC9B0.1030300@gmail.com>
Message-ID:

I tried installing 4.0.300x on a machine running 64-bit windows vista
home edition and ran into problems with PyQt and some related packages.
So I uninstalled all the python-related software, EPD took over 30
minutes to uninstall, and tried to install EPD 4.1 beta.

Near the end of the installation of 4.1, I got an error:

"There is a problem with this Windows Installer package. A program run
as part of the setup did not finish as expected. Contact your support
personnel or package vendor."

I exit the error dialog and then the installer reports:

"EPD Py25 v4.1.30001_beta1 setup ended prematurely because of an error.
Your system has not been modified. To install this program at a later
time, please run the installation again."

If I exit the installer and run it again, I get an option to change,
repair or remove the current installation. Repair fails with the same
error. Removal seemed to complete without incident.

Next I tried installing the 64-bit 2.6 installer from python.org, and I
got a message saying I needed to log in as an administrator to install
for all users. My account is listed as an administrator account, so I
don't understand what the problem is. I was able to install 2.6 just
for my account, instead of for all users, but I was not able to install
EPD 4.1 just for my account; it failed with the same error as it did
when I installed for all users.

I also tried installing EPD-4.1 with user account controls disabled,
but I saw the same error.

I'm not a very sophisticated windows user, and I have only had vista
for about a week, but I have been able to install software unrelated to
python without trouble, and I have used python on windows XP in the
past to create windows installers for a couple of python packages. If
this seems like a legitimate bug and there is something more I can do
in the way of testing, please let me know.

Darren

From dsdale24 at gmail.com  Sun Nov 30 18:14:06 2008
From: dsdale24 at gmail.com (Darren Dale)
Date: Sun, 30 Nov 2008 18:14:06 -0500
Subject: [Numpy-discussion] ANNOUNCE: EPD with Py2.5 version 4.0.30002 RC2 available for testing
In-Reply-To:
References: <492D8FD3.8050601@enthought.com> <492DC9B0.1030300@gmail.com>
Message-ID:

One more datapoint: I get the same error when I run the EPD installer
using msiexec from an administrator command prompt.

On Sun, Nov 30, 2008 at 5:00 PM, Darren Dale wrote:
> I tried installing 4.0.300x on a machine running 64-bit windows vista
> home edition and ran into problems with PyQt and some related packages.
> So I uninstalled all the python-related software, EPD took over 30
> minutes to uninstall, and tried to install EPD 4.1 beta.
>
> Near the end of the installation of 4.1, I got an error:
>
> "There is a problem with this Windows Installer package. A program run
> as part of the setup did not finish as expected. Contact your support
> personnel or package vendor."
>
> I exit the error dialog and then the installer reports:
>
> "EPD Py25 v4.1.30001_beta1 setup ended prematurely because of an error.
> Your system has not been modified. To install this program at a later
> time, please run the installation again."
>
> If I exit the installer and run it again, I get an option to change,
> repair or remove the current installation. Repair fails with the same
> error. Removal seemed to complete without incident.
>
> Next I tried installing the 64-bit 2.6 installer from python.org, and I
> got a message saying I needed to log in as an administrator to install
> for all users. My account is listed as an administrator account, so I
> don't understand what the problem is. I was able to install 2.6 just
> for my account, instead of for all users, but I was not able to install
> EPD 4.1 just for my account; it failed with the same error as it did
> when I installed for all users.
>
> I also tried installing EPD-4.1 with user account controls disabled,
> but I saw the same error.
>
> I'm not a very sophisticated windows user, and I have only had vista
> for about a week, but I have been able to install software unrelated to
> python without trouble, and I have used python on windows XP in the
> past to create windows installers for a couple of python packages. If
> this seems like a legitimate bug and there is something more I can do
> in the way of testing, please let me know.
>
> Darren
From cournape at gmail.com  Sun Nov 30 22:44:10 2008
From: cournape at gmail.com (David Cournapeau)
Date: Mon, 1 Dec 2008 12:44:10 +0900
Subject: [Numpy-discussion] ANNOUNCE: EPD with Py2.5 version 4.0.30002 RC2 available for testing
In-Reply-To:
References: <492D8FD3.8050601@enthought.com> <492DC9B0.1030300@gmail.com>
Message-ID: <5b8d13220811301944k7807d3a2w4fcc821255269053@mail.gmail.com>

On Mon, Dec 1, 2008 at 7:00 AM, Darren Dale wrote:
> I tried installing 4.0.300x on a machine running 64-bit windows vista
> home edition and ran into problems with PyQt and some related packages.
> So I uninstalled all the python-related software, EPD took over 30
> minutes to uninstall, and tried to install EPD 4.1 beta.

My guess is that EPD is only a 32-bit installer, so you are running it
under WOW (Windows on Windows) on windows 64, which is kind of slow
(but usable for most tasks).

> Next I tried installing the 64-bit 2.6 installer from python.org, and I
> got a message saying I needed to log in as an administrator to install
> for all users. My account is listed as an administrator account, so I
> don't understand what the problem is. I was able to install 2.6 just
> for my account, instead of for all users, but I was not able to install
> EPD 4.1 just for my account; it failed with the same error as it did
> when I installed for all users.

You should report those issues on the python bug tracker, I think.
More generally, correct windows installers are a tricky business,
especially when you start using dlls. In particular, many things are
under-documented (dll manifests and co); I know from the python-dev ML
that the python developers had a hard time solving those issues. Maybe
not everything was sorted out, although I am a bit surprised about the
straight python 2.6 installer.

David

From dwf at cs.toronto.edu  Fri Nov 28 19:07:46 2008
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Fri, 28 Nov 2008 19:07:46 -0500
Subject: [Numpy-discussion] [SciPy-user] os x, intel compilers & mkl, and fink python
In-Reply-To: <10D66598-1DD4-46D9-BC84-5998E06C01F5@math.toronto.edu>
References: <10D66598-1DD4-46D9-BC84-5998E06C01F5@math.toronto.edu>
Message-ID:

On 28-Nov-08, at 5:38 PM, Gideon Simpson wrote:

> Has anyone gotten the combination of OS X with a fink python
> distribution to successfully build numpy/scipy with the intel
> compilers and the mkl? If so, how'd you do it?

IIRC David Cournapeau has had some success building numpy with MKL on
OS X, but I doubt it was the fink distribution. Is there a reason you
prefer fink's python rather than the Python.org universal framework
build?

Also, which particular python version (2.4, 2.5, 2.6)? I know fink
typically has a couple.

David