From matthieu.brucher at gmail.com  Fri Jun 1 02:22:16 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Fri, 1 Jun 2007 08:22:16 +0200
Subject: [Numpy-discussion] ATLAS,LAPACK compilation - help!
In-Reply-To:
References: <48876.129.199.118.84.1180626918.squirrel@mailgate.phys.ens.fr>
	<465F02BF.7000206@gmail.com> <465F6DBE.7000802@ar.media.kyoto-u.ac.jp>
Message-ID:

> Maybe, maybe not. On 64bit Intel machines running 64bit linux the fedora
> package raises an illegal instruction error. Since the fedora package is
> based on the debian package this might be a problem on Ubuntu also. For
> recent hardware you are probably better off compiling your own from the
> latest ATLAS version out there.

Red Hat uses Debian packages? That sounds odd... FC uses RPM, Debian uses
deb packages. The problem with RPM is, as stated by David some time ago,
that a lot of info is missing in RPM that is present in deb. What is more,
it is known that using a lot of package repositories leads to
incompatibilities and instabilities.

Matthieu
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From david at ar.media.kyoto-u.ac.jp  Fri Jun 1 02:26:40 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Fri, 01 Jun 2007 15:26:40 +0900
Subject: [Numpy-discussion] ATLAS,LAPACK compilation - help!
In-Reply-To:
References: <48876.129.199.118.84.1180626918.squirrel@mailgate.phys.ens.fr>
	<465F02BF.7000206@gmail.com> <465F6DBE.7000802@ar.media.kyoto-u.ac.jp>
Message-ID: <465FBC20.5040000@ar.media.kyoto-u.ac.jp>

Matthieu Brucher wrote:
>
>     Maybe, maybe not. On 64bit Intel machines running 64bit linux the
>     fedora package raises an illegal instruction error. Since the
>     fedora package is based on the debian package this might be a
>     problem on Ubuntu also. For recent hardware you are probably
>     better off compiling your own from the latest ATLAS version out
>     there.
>
> Red Hat uses Debian packages? That sounds odd... FC uses RPM, Debian
> uses deb packages. The problem with RPM is, as stated by David some
> time ago, that a lot of info is missing in RPM that is present in deb.
I don't think I stated that :)
> What is more, it is known that using a lot of package repositories
> leads to incompatibilities and instabilities.
I think what Harris meant is that the rpm package is based on the work
done by the debian packagers. There are official rpms for atlas, I think
(not sure, as I do not use Fedora on a regular basis).

David

From matthieu.brucher at gmail.com  Fri Jun 1 02:44:23 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Fri, 1 Jun 2007 08:44:23 +0200
Subject: [Numpy-discussion] ATLAS,LAPACK compilation - help!
In-Reply-To: <465FBC20.5040000@ar.media.kyoto-u.ac.jp>
References: <48876.129.199.118.84.1180626918.squirrel@mailgate.phys.ens.fr>
	<465F02BF.7000206@gmail.com> <465F6DBE.7000802@ar.media.kyoto-u.ac.jp>
	<465FBC20.5040000@ar.media.kyoto-u.ac.jp>
Message-ID:

> > Red Hat uses Debian packages? That sounds odd... FC uses RPM, Debian
> > uses deb packages. The problem with RPM is, as stated by David some
> > time ago, that a lot of info is missing in RPM that is present in deb.
> I don't think I stated that :)

Well, you said, IIRC, that you had trouble making rpms, no? If that's not
the case, my apologies, I must be confusing it with another discussion
somewhere else :|

> > What is more, it is known that using a lot of package repositories
> > leads to incompatibilities and instabilities.
> I think what Harris meant is that the rpm package is based on the work
> done by the debian packagers. There are official rpms for atlas, I think
> (not sure, as I do not use Fedora on a regular basis).

Oh... OK

Matthieu
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From openopt at ukr.net  Fri Jun 1 07:24:20 2007
From: openopt at ukr.net (dmitrey)
Date: Fri, 01 Jun 2007 14:24:20 +0300
Subject: [Numpy-discussion] are there any numpy equivalents to MATLAB
	nnz(), sparse(), sparsity()?
Message-ID: <466001E4.6070105@ukr.net>

are there any numpy equivalents to MATLAB nnz(), sparse(), sparsity()?
I didn't see them on the numpy for MATLAB users page.
Thx, D.

From openopt at ukr.net  Fri Jun 1 07:35:00 2007
From: openopt at ukr.net (dmitrey)
Date: Fri, 01 Jun 2007 14:35:00 +0300
Subject: [Numpy-discussion] flatten() without copy - is this possible?
Message-ID: <46600464.7010405@ukr.net>

hi all.
in the numpy for matlab users I read

y = x.flatten(1)

turn array into vector (note that this forces a copy)

Is there any way to do the trick without copying?
What are the problems here? Just other way of array elements indexing...
Thx, D.

From stefan at sun.ac.za  Fri Jun 1 11:29:54 2007
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Fri, 1 Jun 2007 17:29:54 +0200
Subject: [Numpy-discussion] flatindex
In-Reply-To: <1073590507.17459.7.camel@ubuntu-laptop>
References: <1073590507.17459.7.camel@ubuntu-laptop>
Message-ID: <20070601152954.GP12679@mentat.za.net>

Hi Tobias

On Thu, Jan 08, 2004 at 08:35:07PM +0100, Tobias Knopp wrote:
           ^^^^^^^^^^^^
Sorry I'm only answering now, but your mail took 3.5 years to arrive ;)

> I was looking for a method to find the indices of the smallest element
> of a 3-dimensional array a. Therefore I used
>
> a.argmax()
>
> The problem was that argmax gives me a flat index. My question is, if
> there is a built-in function to convert the flat index back to a
> multidimensional one. I know how to write such a procedure but was
> curious if one exists in numpy.

numpy.unravel_index:

"""
Convert a flat index into an index tuple for an array of given shape.
e.g. for a 2x2 array, unravel_index(2,(2,2)) returns (1,0).

Example usage:
  p = x.argmax()
  idx = unravel_index(p,x.shape)
  x[idx] == x.max()

Note: x.flat[p] == x.max()

Thus, it may be easier to use flattened indexing than to re-map
the index to a tuple.
"""

Cheers
Stéfan

From bobl at tricity.wsu.edu  Fri Jun 1 13:08:46 2007
From: bobl at tricity.wsu.edu (Bob Lewis)
Date: Fri, 01 Jun 2007 10:08:46 -0700
Subject: [Numpy-discussion] GPU implementation?
In-Reply-To:
References:
Message-ID: <4660529E.4010109@tricity.wsu.edu>

James Turner wrote:

> Hi Martin,
>
> > I was wondering if anyone has thought about accelerating NumPy with a
> > GPU. For example nVidia's CUDA SDK provides a feasible way to offload
> > vector math onto the very fast SIMD processors available on the GPU.
> > Currently GPUs primarily support single precision floats and are not
> > IEEE compliant, but still could be useful for some applications.
>
> I wasn't actually there, but I noticed that last year's SciPy
> conference page includes a talk entitled "GpuPy: Using GPUs to
> Accelerate NumPy", by Benjamin Eitzen (I think I also found his Web
> page via Google):
>
> http://www.scipy.org/SciPy2006/Schedule
>
> I also wondered whether Benjamin or anyone else who is interested had
> come across the Open Graphics Project (hadn't got around to asking)?

Thanks for your interest.
Ben and I (mostly Ben, it's his MS thesis) are working on "gpupy" and
expect to have a version ready for testing by people other than
ourselves some time this summer.

(Very) preliminary results are promising.

- Bob Lewis
  School of EECS
  Washington State University

From gary.pajer at gmail.com  Fri Jun 1 18:40:04 2007
From: gary.pajer at gmail.com (Gary Pajer)
Date: Fri, 1 Jun 2007 18:40:04 -0400
Subject: [Numpy-discussion] Installing from egg ?
Message-ID: <88fe22a0706011540s2cebc312ve7ed72391dcb186f@mail.gmail.com>

Because I'm a glutton for punishment, I'm starting to install things
using easy_install. I'm very new at this, and it's not going well.
python 2.5 / WinXP (and Kubuntu, but we'll start with Windows)

if I type
easy_install numpy
stuff downloads, but stops with a prompt to select a compiler by
passing -c xxxxx to setup.py. I have Mingw installed, and atlas
binaries, and have no trouble building numpy from svn the old
fashioned way. Is the behavior up to now expected? How do I pass -c
to setup.py via easy_install (misnamed, imho. maybe mho will change
with time)

regards,
gary

From chanley at stsci.edu  Sat Jun 2 20:14:54 2007
From: chanley at stsci.edu (Christopher Hanley)
Date: Sat, 02 Jun 2007 20:14:54 -0400
Subject: [Numpy-discussion] numpy r3857 build problem
Message-ID: <466207FE.4040309@stsci.edu>

Hi,

I cannot build the latest version of numpy in svn (r3857) on my Intel
MacBook running OSX 10.4.9. I'm guessing that the problem is that a
fortran compiler isn't found. Since NUMPY doesn't require FORTRAN I
found this surprising. Has there been a change in policy? I'm
attaching the build log to this message.

Cheers,
Chris

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: build.log
URL:

From chanley at stsci.edu  Sat Jun 2 20:47:49 2007
From: chanley at stsci.edu (Christopher Hanley)
Date: Sat, 02 Jun 2007 20:47:49 -0400
Subject: [Numpy-discussion] numpy r3857 build problem
In-Reply-To: <466207FE.4040309@stsci.edu>
References: <466207FE.4040309@stsci.edu>
Message-ID: <46620FB5.2030007@stsci.edu>

Some additional information. I have no problems building numpy on my
Redhat Enterprise 3 or Solaris 10 boxes at work. I was able to build
numpy there with and without the F77 system variable defined.

Interesting.

Cheers,
Chris

Christopher Hanley wrote:
> Hi,
>
> I cannot build the latest version of numpy in svn (r3857) on my Intel
> MacBook running OSX 10.4.9. I'm guessing that the problem is that a
> fortran compiler isn't found. Since NUMPY doesn't require FORTRAN I
> found this surprising. Has there been a change in policy? I'm
> attaching the build log to this message.
>
> Cheers,
> Chris

From peridot.faceted at gmail.com  Sun Jun 3 14:43:27 2007
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Sun, 3 Jun 2007 14:43:27 -0400
Subject: [Numpy-discussion] flatten() without copy - is this possible?
In-Reply-To: <46600464.7010405@ukr.net>
References: <46600464.7010405@ukr.net>
Message-ID:

On 01/06/07, dmitrey wrote:

> y = x.flatten(1)
>
> turn array into vector (note that this forces a copy)
>
> Is there any way to do the trick without copying?
> What are the problems here? Just other way of array elements indexing...

It is sometimes possible to flatten an array without copying and sometimes
not.

For numpy, a vector is a single block of memory in which there are
elements of uniform type spaced at a uniform distance.
This last is
the key; it's called the "stride", and it need not be the same size as
an element (so arange(10)[::3] can be created without a copy).

A multidimensional array simply has many strides, one for each
dimension. Thus ones((10,10,10)) simply keeps track of the stride for
a row, the stride for a column, and the stride for a layer. If you
want to transpose two axes, the data is not copied, instead the
strides are simply exchanged. Under normal circumstances one need not
care what the strides are or how the cells are laid out in memory as
numpy hides that from normal users.

What about flattening an array? It should turn an array into a vector,
that is, take an array with n different strides and lengths and create
a single array with a single stride and length. The order of the
resulting elements needs to be specified; numpy normally defaults to
"C order", which means that A[3,4,5] and A[3,4,6] are adjacent in the
resulting array but A[3,4,5] and A[4,4,5] are not. (Note that this is
a logical operation; the organization of the underlying array is
irrelevant for the result.)

If you want to ensure that no copy is made, you need to ensure that
the stride between elements of the array you're flattening is always
the same. Taking a 10-by-10-by-10 array A, the spacing between
A[3,4,5] and A[3,4,6] needs to be the same as the spacing between
A[3,4,6] and A[3,4,7]. This is automatic. But the spacing also needs
to be the same as the spacing between A[3,4,9] and A[3,5,0]. This is
not automatic, and often does not occur. In such cases numpy must make
a copy to ensure that the resulting array is uniformly strided.

What cases *don't* require a copy? Well, let's look at some examples:

A = ones((10,10,10))
reshape(A,(-1,)) # No copy needed
reshape(A[:,:,:5],(-1,)) # Copy needed
reshape(A[:,:,::2],(-1,)) # No copy needed
reshape(A[:,::2,:],(-1,)) # Copy needed
reshape(A[:5,:,:],(-1,)) # No copy needed
reshape(A.transpose(),(-1,)) # Copy needed

Note that none of the reindexing operations require a copy, but some
of the reshapes do.

It turns out to be nontrivial to detect all the cases where a copy can
be avoided while reshaping, and IIRC numpy misses some (old versions
of numpy almost always copied). But a freshly-created array is
normally guaranteed to be reshapable without a copy.

If you want to try reshaping an array without a copy, you can try
assigning to .shape:

In [3]: A = ones((10,10,10))[:,:5,:]

In [4]: A.shape = (-1,)
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)

/home/peridot/physics-projects/pulsed-flux/writings/ in ()

AttributeError: incompatible shape for a non-contiguous array

and

In [7]: A = ones((10,10,10))[:5,:,:]

In [8]: A.shape = (-1,)

Anne

From cookedm at physics.mcmaster.ca  Sun Jun 3 16:04:32 2007
From: cookedm at physics.mcmaster.ca (David M. Cooke)
Date: Sun, 3 Jun 2007 16:04:32 -0400
Subject: [Numpy-discussion] numpy r3857 build problem
In-Reply-To: <466207FE.4040309@stsci.edu>
References: <466207FE.4040309@stsci.edu>
Message-ID: <20070603200432.GA11273@arbutus.physics.mcmaster.ca>

On Sat, Jun 02, 2007 at 08:14:54PM -0400, Christopher Hanley wrote:
> Hi,
>
> I cannot build the latest version of numpy in svn (r3857) on my Intel
> MacBook running OSX 10.4.9. I'm guessing that the problem is that a
> fortran compiler isn't found. Since NUMPY doesn't require FORTRAN I
> found this surprising. Has there been a change in policy? I'm
> attaching the build log to this message.
Fixed in r3858

-- 
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke              http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca

From wbaxter at gmail.com  Sun Jun 3 16:05:17 2007
From: wbaxter at gmail.com (Bill Baxter)
Date: Mon, 4 Jun 2007 05:05:17 +0900
Subject: [Numpy-discussion] are there any numpy equivalents to MATLAB
	nnz(), sparse(), sparsity()?
In-Reply-To: <466001E4.6070105@ukr.net>
References: <466001E4.6070105@ukr.net>
Message-ID:

There is a scipy.sparse package but it seems to be fairly limited
currently. Anyway there's definitely nothing like MATLAB's ability to
change a matrix to sparse and still use most of the algorithms on it.
Good sparse support vs. not so much sparse support should probably be
added to the big feature comparison chart on the scipy for MATLAB
users page.

--bb

On 6/1/07, dmitrey wrote:
> are there any numpy equivalents to MATLAB nnz(), sparse(), sparsity()?
> I didn't see them on the numpy for MATLAB users page.
> Thx, D.
>

From robert.kern at gmail.com  Sun Jun 3 17:45:03 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Sun, 03 Jun 2007 16:45:03 -0500
Subject: [Numpy-discussion] Installing from egg ?
In-Reply-To: <88fe22a0706011540s2cebc312ve7ed72391dcb186f@mail.gmail.com>
References: <88fe22a0706011540s2cebc312ve7ed72391dcb186f@mail.gmail.com>
Message-ID: <4663365F.2040902@gmail.com>

Gary Pajer wrote:
> Because I'm a glutton for punishment, I'm starting to install things
> using easy_install. I'm very new at this, and it's not going well.
> python 2.5 / WinXP (and Kubuntu, but we'll start with Windows)
>
> if I type
> easy_install numpy
>
> stuff downloads, but stops with a prompt to select a compiler by
> passing -c xxxxx to setup.py. I have Mingw installed, and atlas
> binaries, and have no trouble building numpy from svn the old
> fashioned way. Is the behavior up to now expected?

Yes. You always have to specify mingw32 somewhere if you want to use that
compiler. If you ever got numpy to compile with mingw32, you were
specifying it some way.

> How do I pass -c
> to setup.py via easy_install (misnamed, imho. maybe mho will change
> with time)

Do you have a pydistutils.cfg file?

http://docs.python.org/inst/config-syntax.html

Adding this section to it will tell it to use mingw always without
prompting at the command line.

[build_ext]
compiler = mingw32

As for configuring ATLAS, you can make a file .numpy-site.cfg in your home
directory. I haven't tested this on Windows, though, but if your HOME
environment variable is defined, then %HOME%\.numpy-site.cfg should work.
It's just a regular site.cfg file as usual.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
 enigma that is made terrible by our own mad attempt to interpret it as
 though it had an underlying truth."
  -- Umberto Eco

From davidnovakovic at gmail.com  Sun Jun 3 20:08:15 2007
From: davidnovakovic at gmail.com (Dave P. Novakovic)
Date: Mon, 4 Jun 2007 10:08:15 +1000
Subject: [Numpy-discussion] GPU implementation?
In-Reply-To: <4660529E.4010109@tricity.wsu.edu>
References: <4660529E.4010109@tricity.wsu.edu>
Message-ID: <59d13e7d0706031708m6777f19cmc882fa053c04bcf3@mail.gmail.com>

This may be of interest, LLVM support in Mesa, and I believe there is
work going on with LLVM and python in the pypy camp.
http://zrusin.blogspot.com/2007/05/mesa-and-llvm.html

I just stumbled on this page, while this conversation was happening :)

Dave

On 6/2/07, Bob Lewis wrote:
> James Turner wrote:
>
> > Hi Martin,
> >
> > > I was wondering if anyone has thought about accelerating NumPy with a
> > > GPU. For example nVidia's CUDA SDK provides a feasible way to offload
> > > vector math onto the very fast SIMD processors available on the GPU.
> > > Currently GPUs primarily support single precision floats and are not
> > > IEEE compliant, but still could be useful for some applications.
> >
> > I wasn't actually there, but I noticed that last year's SciPy
> > conference page includes a talk entitled "GpuPy: Using GPUs to
> > Accelerate NumPy", by Benjamin Eitzen (I think I also found his Web
> > page via Google):
> >
> > http://www.scipy.org/SciPy2006/Schedule
> >
> > I also wondered whether Benjamin or anyone else who is interested had
> > come across the Open Graphics Project (hadn't got around to asking)?
>
> Thanks for your interest. Ben and I (mostly Ben, it's his MS thesis)
> are working on "gpupy" and expect to have a version ready for testing
> by people other than ourselves some time this summer.
>
> (Very) preliminary results are promising.
>
> - Bob Lewis
>   School of EECS
>   Washington State University
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>

From jl at dmi.dk  Mon Jun 4 03:22:47 2007
From: jl at dmi.dk (Jesper Larsen)
Date: Mon, 4 Jun 2007 09:22:47 +0200
Subject: [Numpy-discussion] corrcoef of masked array
In-Reply-To: <465DB8E6.2080903@gmail.com>
References: <200705251037.45071.jl@dmi.dk> <200705301202.14652.jl@dmi.dk>
	<465DB8E6.2080903@gmail.com>
Message-ID: <200706040922.47743.jl@dmi.dk>

On Wednesday 30 May 2007 19:48, Robert Kern wrote:
> I'm afraid this doesn't work, either. Correlation matrices are constrained
> to be positive semidefinite; that is, all of their eigenvalues must be >=
> 0. Calculating each of the correlation coefficients in a pairwise fashion
> doesn't incorporate this constraint.
>
> But you're on the right track. My preferred approach to this problem is to
> find the pairwise correlation matrix as you did and then find the closest
> positive semidefinite matrix to it using the method of alternating
> projections. I can't give you the code I wrote for this since it belongs to
> a customer, but here is the reference I used:
>
> http://eprints.ma.man.ac.uk/232/

Robert,

Thanks for your comments and for the reference. I will try to implement
the algorithm sometime this month.

Cheers,
Jesper

From giorgio.luciano at chimica.unige.it  Mon Jun 4 04:59:05 2007
From: giorgio.luciano at chimica.unige.it (Giorgio Luciano)
Date: Mon, 04 Jun 2007 10:59:05 +0200
Subject: [Numpy-discussion] signal processing chapter for book
Message-ID: <4663D459.8040200@chimica.unige.it>

first of all sorry for cross posting

As I wrote some time ago, we are trying to write a book proposal about
the use of python/scipy/numpy in chemometrics and analytical chemistry.
So far I've received positive answers from eight authors, and the only
"missing" chapter is one about the use of python in digital signal
processing (I've contacted some possible authors, but so far they are
busy). The schedule will not be too tight and the chapter doesn't need
to be too long.
Hope to hear from you soon

Giorgio

From a.h.jaffe at bakerjaffe.plus.com  Mon Jun 4 10:17:00 2007
From: a.h.jaffe at bakerjaffe.plus.com (Andrew Jaffe)
Date: Mon, 04 Jun 2007 15:17:00 +0100
Subject: [Numpy-discussion] flatten() without copy - is this possible?
In-Reply-To: <46600464.7010405@ukr.net>
References: <46600464.7010405@ukr.net>
Message-ID:

dmitrey wrote:
> hi all.
> in the numpy for matlab users I read
>
> y = x.flatten(1)
>
> turn array into vector (note that this forces a copy)
>
> Is there any way to do the trick without copying?
> What are the problems here? Just other way of array elements indexing...

One important question is whether you actually need the new vector, or
whether you just want a flat index into the array; if the latter, you
can always [I think] use x.flat[one_d_index]. (But note that y=x.flat
gives an iterator, not a new array.)

Andrew

From lxander.m at gmail.com  Mon Jun 4 16:17:16 2007
From: lxander.m at gmail.com (Alexander Michael)
Date: Mon, 4 Jun 2007 16:17:16 -0400
Subject: [Numpy-discussion] SciPy Journal
In-Reply-To: <465E5D58.9030107@ieee.org>
References: <465E5D58.9030107@ieee.org>
Message-ID: <525f23e80706041317y2d6ba31dqc03c54ebacab4b6a@mail.gmail.com>

On 5/31/07, Travis Oliphant wrote:
> Hi everybody,
>
> I'm sorry for the cross posting, but I wanted to reach a wide audience
> and I know not everybody subscribes to all the lists.
>
> I've been thinking more about the "SciPy Journal" that we discussed
> before and I have some thoughts.
>
> 1) I'd like to get it going so that we can push out an electronic issue
> after the SciPy conference (in September)
>
> 2) I think its scope should be limited to papers that describe
> algorithms and code that are in NumPy / SciPy / SciKits. Perhaps we
> could also accept papers that describe code that depends on NumPy /
> SciPy that is also easily available.
>
> 3) I'd like to make a requirement for inclusion of new code in SciPy
> that it have an associated journal article describing the algorithms,
> design approach, etc. I don't see this journal article as being
> user-interface documentation for the code. I see this as a place to
> describe why the code is organized as it is and to detail any algorithms
> that are used.
>
> 4) The purpose of the journal as I see it is to
>
> a) provide someplace to document what is actually done in SciPy and
> related software.
> b) provide a teaching tool of numerical methods with actual "people
> use-it" code that would be useful to researchers, students, and
> professionals.
> c) hopefully clever new algorithms will be developed for SciPy by
> people using Python that could be show-cased here
> d) provide a peer-review publication opportunity for people who
> contribute to open-source software
>
> 5) We obviously need associate editors and people willing to review
> submitted articles as well as people willing to submit articles. I
> have two articles that can be submitted within the next two months.
> What do other people have?
>
>
> As an example of the kind of thing a SciPy Journal would be useful for,
> I have recently over-hauled the interpolation.py file for SciPy by
> incorporating the B-spline stuff that is partly in fitpack. In the
> process I noticed two things:
>
> 1) I have (what seems to me) a different recursive algorithm for
> calculating derivatives of B-splines than I could find in fitpack.
> 2) I have developed a different way to determine the K-1 extra degrees
> of freedom for Kth-order spline fitting than I have seen before.
> The SciPy Journal would be a great place to document both of these
> things while describing the spline interpolation design of
> scipy.interpolate
>
> It is true that I could submit this stuff to other journals, but it
> seems like doing that makes the information harder to find in the
> future and not easier. I'm also dissatisfied with how
> information-exclusionary academic journals seem to be. They are catching
> up, but they are still not as accessible as other things available on
> the internet.
>
> Given the open nature of most scientific research, it is remarkable that
> getting access to the information is not as easy as it should be with
> modern search engines (if your internet domain does not subscribe to the
> e-journal).
>
> Comments and feedback is welcome.

An implementation-oriented journal/newsletter in the vein of R News would
be great. [Note: I remember seeing some mentions of the R project in
various comments, but I am not sure anyone brought up R News as a model.
Please excuse me if it was already brought up.]

About R News

R News is the newsletter of the R project for statistical computing and
features short to medium length articles covering topics that might be of
interest to users or developers of R, including

* Changes in R: new features of the latest release
* Changes on CRAN: new add-on packages, manuals, binary distributions,
  mirrors,...
* Add-on packages: short introductions to or reviews of R extension
  packages
* Programmer's Niche: nifty hints for programming in R (or S)
* Hints for newcomers: Explaining sides of R that might not be so obvious
  from reading the manuals and FAQs.
* Applications: Examples of analyzing data with R

Of course, any write-up of library code should also be distributed with/in
the code (doc strings) as well. Such a publication would provide a great
outlet for people to write about how they implemented their research and
would make a great companion to the publication of the analysis and
results. Additionally, the development of a good document template and
commendable examples from other contributors would likely encourage better
communication, as with leading journals.

A lot of the material could be culled from the mailing lists and should be
written up in a way (and in a format) that would allow it to be dropped
into the wiki (e.g. the cookbook page) as well as included in the
publication.

From bhendrix at enthought.com  Mon Jun 4 18:21:28 2007
From: bhendrix at enthought.com (Bryce Hendrix)
Date: Mon, 04 Jun 2007 17:21:28 -0500
Subject: [Numpy-discussion] Vista installer?
In-Reply-To:
References: <465BA6DA.6030700@ar.media.kyoto-u.ac.jp>
	<465BAC5B.9030001@ar.media.kyoto-u.ac.jp>
	<465BE8A6.7080909@ar.media.kyoto-u.ac.jp> <465C733C.4060607@astraw.com>
Message-ID: <46649068.4000401@enthought.com>

I can confirm that our python 2.4 numpy and scipy eggs (and probably
matplotlib and others) work with Vista. I was using them for development
last week. I haven't tested the Python 2.5 eggs yet, but will do so this
week. You can find all of our python 2.4 eggs at
http://code.enthought.com/enstaller/eggs

bryce

Ryan Krauss wrote:
> Sorry, I should have mentioned that the student who got Numpy running
> in Vista said the key was to right click on the exe and choose "Run as
> Administrator". He said that was all there was to it (he also said
> that the Python-2.5 msi just installed with no problems).
>
> If anyone can confirm that this works, please let me know.
> Ryan
>

From openopt at ukr.net  Tue Jun 5 13:06:18 2007
From: openopt at ukr.net (dmitrey)
Date: Tue, 05 Jun 2007 20:06:18 +0300
Subject: [Numpy-discussion] flatten() without copy - is this possible?
In-Reply-To:
References: <46600464.7010405@ukr.net>
Message-ID: <4665980A.3050005@ukr.net>

Thank you, but all your examples deal with 3-dimensional arrays, and I
still don't understand: is it possible somehow for 2-dimensional arrays
or not?
D.

Anne Archibald wrote:
> On 01/06/07, dmitrey wrote:
>
>> y = x.flatten(1)
>>
>> turn array into vector (note that this forces a copy)
>>
>> Is there any way to do the trick without copying?
>> What are the problems here? Just other way of array elements indexing...
>>
>
> It is sometimes possible to flatten an array without copying and
> sometimes not.
>
> For numpy, a vector is a single block of memory in which there are
> elements of uniform type spaced at a uniform distance. This last is
> the key; it's called the "stride", and it need not be the same size as
> an element (so arange(10)[::3] can be created without a copy).
>
> A multidimensional array simply has many strides, one for each
> dimension. Thus ones((10,10,10)) simply keeps track of the stride for
> a row, the stride for a column, and the stride for a layer. If you
> want to transpose two axes, the data is not copied, instead the
> strides are simply exchanged. Under normal circumstances one need not
> care what the strides are or how the cells are laid out in memory as
> numpy hides that from normal users.
>
> What about flattening an array? It should turn an array into a vector,
> that is, take an array with n different strides and lengths and create
> a single array with a single stride and length. The order of the
> resulting elements needs to be specified; numpy normally defaults to
> "C order", which means that A[3,4,5] and A[3,4,6] are adjacent in the
> resulting array but A[3,4,5] and A[4,4,5] are not. (Note that this is
> a logical operation; the organization of the underlying array is
> irrelevant for the result.)
>
> If you want to ensure that no copy is made, you need to ensure that
> the stride between elements of the array you're flattening is always
> the same. Taking a 10-by-10-by-10 array A, the spacing between
> A[3,4,5] and A[3,4,6] needs to be the same as the spacing between
> A[3,4,6] and A[3,4,7]. This is automatic. But the spacing also needs
> to be the same as the spacing between A[3,4,9] and A[3,5,0]. This is
> not automatic, and often does not occur. In such cases numpy must make
> a copy to ensure that the resulting array is uniformly strided.
>
> What cases *don't* require a copy? Well, let's look at some examples:
>
> A = ones((10,10,10))
> reshape(A,(-1,)) # No copy needed
> reshape(A[:,:,:5],(-1,)) # Copy needed
> reshape(A[:,:,::2],(-1,)) # No copy needed
> reshape(A[:,::2,:],(-1,)) # Copy needed
> reshape(A[:5,:,:],(-1,)) # No copy needed
> reshape(A.transpose(),(-1,)) # Copy needed
>
> Note that none of the reindexing operations require a copy, but some
> of the reshapes do.
>
> It turns out to be nontrivial to detect all the cases where a copy can
> be avoided while reshaping, and IIRC numpy misses some (old versions
> of numpy almost always copied). But a freshly-created array is
> normally guaranteed to be reshapable without a copy.
> If you want to try reshaping an array without a copy, you can try
> assigning to .shape:
>
> In [3]: A = ones((10,10,10))[:,:5,:]
>
> In [4]: A.shape = (-1,)
> ---------------------------------------------------------------------------
> AttributeError                            Traceback (most recent call last)
>
> /home/peridot/physics-projects/pulsed-flux/writings/ in ()
>
> AttributeError: incompatible shape for a non-contiguous array
>
> and
>
> In [7]: A = ones((10,10,10))[:5,:,:]
>
> In [8]: A.shape = (-1,)
>
> Anne
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>

From charlesr.harris at gmail.com  Tue Jun 5 14:51:30 2007
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Tue, 5 Jun 2007 12:51:30 -0600
Subject: [Numpy-discussion] flatten() without copy - is this possible?
In-Reply-To: <4665980A.3050005@ukr.net>
References: <46600464.7010405@ukr.net> <4665980A.3050005@ukr.net>
Message-ID:

On 6/5/07, dmitrey wrote:
>
> Thank you, but all your examples deal with 3-dimensional arrays, and I
> still don't understand: is it possible somehow for 2-dimensional arrays
> or not?
> D.

There is nothing special about the number of dimensions, all arrays have
the same methods.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From hazelnusse at gmail.com  Wed Jun 6 21:55:42 2007
From: hazelnusse at gmail.com (Luke)
Date: Wed, 6 Jun 2007 18:55:42 -0700
Subject: [Numpy-discussion] Weird numpy.arange behavior
Message-ID: <99214b470706061855m7ef4fd2ahb8607ba3a5785ebf@mail.gmail.com>

I am integrating some equations and need to generate rank 1 time
arrays to pass to my integrator. I need them to have the same
interval between each entry and have the same number of elements. In
matlab this is trivial, and it is in numpy as well, except I'm getting
some weird behavior:

import numpy as N
T = 0.1
h = 0.01

k=0
t = N.arange(k*T,(k+1)*T+h,h)

Output:
array([ 0.  ,  0.01,  0.02,  0.03,  0.04,  0.05,  0.06,  0.07,  0.08,
        0.09,  0.1 ])

So far so good.

Here is where the problem arises:

k=1
t = N.arange(k*T,(k+1)*T+h,h)
array([ 0.1 ,  0.11,  0.12,  0.13,  0.14,  0.15,  0.16,  0.17,  0.18,
        0.19,  0.2 ,  0.21])

Note that this time array has more entries, and in fact, the last
entry is greater than (k+1)*T = (1+1)*0.1 = 2*0.1 = 0.2

Now if it was consistent for all k>0, then it would be fine. However,
this is not the case:

k=3
t = N.arange(k*T,(k+1)*T+h,h)

Output:
array([ 0.3 ,  0.31,  0.32,  0.33,  0.34,  0.35,  0.36,  0.37,  0.38,
        0.39,  0.4 ])

Now, this one has the same number of entries as the case where k=0.

Can anybody:
1) Offer a solution to this?
2) Explain why this behavior would occur and would ever be desirable?

I read the numpy.arange docstring and it says that this may occur, but
I don't understand why you would ever want this to occur. Apparently,
the length of the returned array is:

ceil((stop-start)/step)

The weird thing is that in this simple example, (stop-start)/step is
always exactly 11, since ((k+1)*T + h - k*T)/(h) = (T+h)/h =
(0.1+0.01)/(0.01) = 11.0. In this case, there shouldn't be any roundoff
error. So in this simple example that was harmlessly constructed
(i.e., my period time was an exact integer multiple of my step time),
arange behaves undesirably (at least I think it does).

After a few tests, I found that if instead of ceil, round was used,
then it eliminated my problem, but I don't know if this would have
other undesirable effects in other situations.
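For reference, here is a minimal sketch (same T and h as above) that
prints the quotient arange actually sees for each k; the exact digits
printed are platform-dependent, so treat them as illustrative:

import numpy as N

T = 0.1
h = 0.01
for k in range(4):
    start = k*T
    stop = (k+1)*T + h
    # In exact arithmetic (stop - start)/h == 11, but since T and h are
    # not exactly representable in binary the quotient can land slightly
    # above or below 11 depending on k; ceil() then yields 11 or 12.
    print k, repr((stop - start)/h), len(N.arange(start, stop, h))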
I guess I could use range, but it is just a bit more tedious to code.

Thanks,
~Luke

From robert.kern at gmail.com  Wed Jun 6 23:09:01 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 06 Jun 2007 22:09:01 -0500
Subject: [Numpy-discussion] Weird numpy.arange behavior
In-Reply-To: <99214b470706061855m7ef4fd2ahb8607ba3a5785ebf@mail.gmail.com>
References: <99214b470706061855m7ef4fd2ahb8607ba3a5785ebf@mail.gmail.com>
Message-ID: <466776CD.2010903@gmail.com>

Luke wrote:
> I am integrating some equations and need to generate rank 1 time
> arrays to pass to my integrator. I need them to have the same
> interval between each entry and have the same number of elements. In
> matlab this is trivial, and it is in numpy as well, except I'm getting
> some weird behavior:
>
> import numpy as N
> T = 0.1
> h = 0.01
>
> k=0
> t = N.arange(k*T,(k+1)*T+h,h)
>
> Output:
> array([ 0.  ,  0.01,  0.02,  0.03,  0.04,  0.05,  0.06,  0.07,  0.08,
>         0.09,  0.1 ])
>
> So far so good.
>
> Here is where the problem arises:
>
> k=1
> t = N.arange(k*T,(k+1)*T+h,h)
> array([ 0.1 ,  0.11,  0.12,  0.13,  0.14,  0.15,  0.16,  0.17,  0.18,
>         0.19,  0.2 ,  0.21])
>
> Note that this time array has more entries, and in fact, the last
> entry is greater than (k+1)*T = (1+1)*0.1 = 2*0.1 = 0.2
>
> Now if it was consistent for all k>0, then it would be fine. However,
> this is not the case:
>
> k=3
> t = N.arange(k*T,(k+1)*T+h,h)
>
> Output:
> array([ 0.3 ,  0.31,  0.32,  0.33,  0.34,  0.35,  0.36,  0.37,  0.38,
>         0.39,  0.4 ])
>
> Now, this one has the same number of entries as the case where k=0.
>
> Can anybody:
> 1) Offer a solution to this?

Use linspace() instead.

> 2) Explain why this behavior would occur and would ever be desirable?

It's not that it's desirable; it's just unavoidable.

> I read the numpy.arange docstring and it says that this may occur, but
> I don't understand why you would ever want this to occur. Apparently,
> the length of the returned array is:
>
> ceil((stop-start)/step)
>
> The weird thing is that in this simple example, (stop-start)/step is
> always exactly 11, since ((k+1)*T + h - k*T)/(h) = (T+h)/h =
> (0.1+0.01)/(0.01) = 11.0. In this case, there shouldn't be any roundoff
> error.

Yes, there is. Neither 0.1 nor 0.01 are exactly representable in binary
floating point. There is roundoff error before you ever get to the actual
operations.

> So in this simple example that was harmlessly constructed
> (i.e., my period time was an exact integer multiple of my step time),
> arange behaves undesirably (at least I think it does).
>
> After a few tests, I found that if instead of ceil, round was used,
> then it eliminated my problem, but I don't know if this would have
> other undesirable effects in other situations.

That just moves the problem elsewhere and is inconsistent with the integer
behavior.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
 enigma that is made terrible by our own mad attempt to interpret it as
 though it had an underlying truth."
  -- Umberto Eco

From jek-cygwin2 at kleckner.net  Fri Jun 1 17:51:36 2007
From: jek-cygwin2 at kleckner.net (Jim Kleckner)
Date: Fri, 01 Jun 2007 14:51:36 -0700
Subject: [Numpy-discussion] numpy requires tuples for indexing where
	Numeric allowed them?
Message-ID: <466094E8.2070105@kleckner.net>

I'm fighting conversion from Numeric to numpy.

One change that doesn't seem documented is that I used to be able to
select items using lists and now it seems that I have to convert them to
tuples.
Is that correct and is there a function buried in there that
will accept a list for indices?

Any reason that item() can't take a list?

The weird thing is that it doesn't blow up right away when a list is
passed in an array ref but rather returns something I don't expect.

I work with lists rather than the implicit tuples of a function call
because then I can work with arbitrary dimensions.

In the meantime, I guess I can just convert the list to an otherwise
unnecessary tuple.

Jim

From lou_boog2000 at yahoo.com  Thu Jun 7 14:40:51 2007
From: lou_boog2000 at yahoo.com (Lou Pecora)
Date: Thu, 7 Jun 2007 11:40:51 -0700 (PDT)
Subject: [Numpy-discussion] numpy requires tuples for indexing where
	Numeric allowed them?
In-Reply-To: <466094E8.2070105@kleckner.net>
Message-ID: <271639.19822.qm@web34402.mail.mud.yahoo.com>

Hi, Jim,

Just wondering why you would use item() rather than index in brackets,
i.e. a[i]? The latter works well in numpy. But maybe I'm missing
something.

-- Lou Pecora

--- Jim Kleckner wrote:

> I'm fighting conversion from Numeric to numpy.
>
> One change that doesn't seem documented is that I
> used to be able to
> select items using lists and now it seems that I
> have to convert them to
> tuples. Is that correct and is there a function
> buried in there that
> will accept a list for indices?
>
> Any reason that item() can't take a list?
>
> The weird thing is that it doesn't blow up right
> away when a list is
> passed in an array ref but rather returns something
> I don't expect.
>
> I work with lists rather than the implicit tuples of
> a function call
> because then I can work with arbitrary dimensions.
>
> In the meantime, I guess I can just convert the list
> to an otherwise
> unnecessary tuple.
>
> Jim

-- Lou Pecora, my views are my own.
---------------
Great spirits have always encountered violent opposition from mediocre
minds. -Albert Einstein

From lou_boog2000 at yahoo.com  Thu Jun 7 14:42:33 2007
From: lou_boog2000 at yahoo.com (Lou Pecora)
Date: Thu, 7 Jun 2007 11:42:33 -0700 (PDT)
Subject: [Numpy-discussion] SciPy Journal
In-Reply-To: <1180641580.687505.207110@a26g2000pre.googlegroups.com>
Message-ID: <81107.19822.qm@web34402.mail.mud.yahoo.com>

Luke,

I'd love to see that code and the associated article. I do a lot of NLD.

--- Luke wrote:

> I think this Journal sounds like an excellent idea.
> I have some
> python code that calculates the Lyapunov
> Characteristic Exponents (all
> of them), for a dynamical system that I would be
> willing to write
> about and contribute.

-- Lou Pecora, my views are my own.
---------------
Great spirits have always encountered violent opposition from mediocre
minds. -Albert Einstein

From oliphant.travis at ieee.org  Thu Jun 7 15:16:00 2007
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Thu, 07 Jun 2007 13:16:00 -0600
Subject: [Numpy-discussion] numpy requires tuples for indexing where
	Numeric allowed them?
In-Reply-To: <466094E8.2070105@kleckner.net>
References: <466094E8.2070105@kleckner.net>
Message-ID: <46685970.8000702@ieee.org>

Jim Kleckner wrote:
> I'm fighting conversion from Numeric to numpy.
> One change that doesn't seem documented is that I used to be able to
> select items using lists and now it seems that I have to convert them to
> tuples. Is that correct and is there a function buried in there that
> will accept a list for indices?
>
The problem is that with the introduction of arbitrary indexing (indirect
indexing), lists are treated as sequences of indices instead of index
numbers. There is a bit of logic inserted for backward compatibility, but
it isn't perfect.

What is the code that you are having trouble with?

> Any reason that item() can't take a list?
>
Not that I can think of. It could probably be updated to take a list.

> The weird thing is that it doesn't blow up right away when a list is
> passed in an array ref but rather returns something I don't expect.
>
Right, this is the indirect referencing feature that NumPy now supports.

-Travis

From Glen.Mabey at swri.org  Thu Jun 7 17:46:20 2007
From: Glen.Mabey at swri.org (Glen W. Mabey)
Date: Thu, 7 Jun 2007 16:46:20 -0500
Subject: [Numpy-discussion] .transpose() of memmap array fails to close()
Message-ID: <20070607214620.GM6116@bams.ccf.swri.edu>

Hello,

When assigning a variable that is the transpose() of a memmap array, the
._mmap member doesn't get copied, I guess:

In [1]: import numpy

In [2]: amemmap = numpy.memmap( '/tmp/afile', dtype=numpy.float32, shape=(4,5), mode='w+' )

In [3]: bmemmap = amemmap.transpose()

In [4]: bmemmap.close()
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)

/home/gmabey/src/R9619_dev_acqlibweb/Projects/R9619_NChannelDetection/NED/ in ()

/usr/local/stow/numpy-20070605_svn-py2.5/lib/python2.5/site-packages/numpy/core/memmap.py in close(self)
     86
     87     def close(self):
---> 88         self._mmap.close()
     89
     90     def __del__(self):

AttributeError: 'NoneType' object has no attribute 'close'
> /usr/local/stow/numpy-20070605_svn-py2.5/lib/python2.5/site-packages/numpy/core/memmap.py(88)close()
     87     def close(self):
---> 88         self._mmap.close()
     89

This is an issue when the data is accessed in an order that is different
from how it is stored on disk, as:

bmemmap = numpy.memmap( '/tmp/afile', dtype=numpy.float32, shape=(4,5), mode='w+' ).transpose()

So the object that was originally produced is not accessible. I imagine
there is some better way to indicate order of dimensions, but regardless,
doing

In [4]: bmemmap._mmap = amemmap._mmap

is a hack workaround.

Best regards,
Glen Mabey

From sebastien.maret at gmail.com  Fri Jun 8 10:40:05 2007
From: sebastien.maret at gmail.com (Sebastien Maret)
Date: Fri, 08 Jun 2007 10:40:05 -0400
Subject: [Numpy-discussion] numarray.fft causes fatal Python error
Message-ID:

% python
Python 2.5.1 (r251:54863, May 11 2007, 11:07:19)
[GCC 4.0.1 (Apple Computer, Inc. build 5367)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from numarray import *
>>> import numarray.fft
>>> a = array([1., 0., 1., 0., 1., 0., 1., 0.]) + 10
>>> for i in range(5000):
...     b = numarray.fft.fft(a).real
...
Fatal Python error: deallocating None
Abort trap

I am using Numarray 1.5.2.
See also:
https://sourceforge.net/tracker/?func=detail&atid=450446&aid=1732413&group_id=1369

Sébastien

From rex at nosyntax.com  Fri Jun 8 20:21:08 2007
From: rex at nosyntax.com (rex)
Date: Fri, 8 Jun 2007 17:21:08 -0700
Subject: [Numpy-discussion] Numpy-1.0.3 RPM install failure on SUSE 10.2
Message-ID: <20070609002108.GR20360@x2.nosyntax.com>

I don't know if this is the appropriate place for this, but thanks for
any pointers to what the problem is.

-rex

deserv:/ # smart install python-numpy
Loading cache...
Updating cache... ######################################## [100%]

Computing transaction...

Installing packages (3):
  blas-3.0-958 at i586
  lapack-3.0-958 at i586
  python-numpy-1.0.3-0.pm.1 at i586

9.6MB of package files are needed. 30.5MB will be used.

Confirm changes? (Y/n): y

Fetching packages...
-> http://download.opensuse.org/distribution/10.2/repo/oss/suse/i586/blas-3.0-958.i586.rpm
-> http://packman.inode.at/suse/10.2/i586/python-numpy-1.0.3-0.pm.1.i586.rpm
blas-3.0-958.i586.rpm ######################################## [ 33%]
-> http://download.opensuse.org/distribution/10.2/repo/oss/suse/i586/lapack-3.0-958.i586.rpm
python-numpy-1.0.3-0.pm.1.i586.rpm ######################################## [ 66%]
lapack-3.0-958.i586.rpm ######################################## [100%]

Committing transaction...
Preparing...                  ######################################## [  0%]
   1:Installing blas          ######################################## [ 33%]
   2:Installing lapack        ######################################## [ 66%]
   3:Installing python-numpy  ######################################## [100%]

deserv:/ # python
Python 2.5 (r25:51908, Nov 27 2006, 19:14:46)
[GCC 4.1.2 20061115 (prerelease) (SUSE Linux)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from numpy import *
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.5/site-packages/numpy/__init__.py", line 43, in <module>
    import linalg
  File "/usr/lib/python2.5/site-packages/numpy/linalg/__init__.py", line 4, in <module>
    from linalg import *
  File "/usr/lib/python2.5/site-packages/numpy/linalg/linalg.py", line 25, in <module>
    from numpy.linalg import lapack_lite
ImportError: /usr/lib/libblas.so.3: undefined symbol: _gfortran_st_write_done
>>>

From mathewww at charter.net  Fri Jun 8 20:47:48 2007
From: mathewww at charter.net (Mathew Yeates)
Date: Fri, 08 Jun 2007 17:47:48 -0700
Subject: [Numpy-discussion] I hate for loops
Message-ID: <4669F8B4.4070806@charter.net>

Hi
I'm looking for a more elegant way of setting my array elements
Using "for" loops it would be

for i in range(rows):
    for j in range(cols):
        N[i,j] = N[i-1][j] + N[i][j-1] - N[i-1][j-1]

It's sort of a combined 2d accumulate.

Any ideas?

Mathew

From peridot.faceted at gmail.com  Fri Jun 8 21:22:12 2007
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Fri, 8 Jun 2007 21:22:12 -0400
Subject: [Numpy-discussion] flatten() without copy - is this possible?
In-Reply-To:
References: <46600464.7010405@ukr.net> <4665980A.3050005@ukr.net>
Message-ID:

On 05/06/07, Charles R Harris wrote:
>
> On 6/5/07, dmitrey wrote:
> > Thank you, but all your examples deal with 3-dimensional arrays, and I
> > still don't understand: is it possible somehow for 2-dimensional arrays
> > or not?
> > D.
>
> There is nothing special about the number of dimensions, all arrays have
> the same methods.

Of course.
But he was asking whether
the examples I was giving, of arrays that could and couldn't be flattened,
would work in 2D. There is nothing special about 3D; there are 2D matrices
that can be flattened and 2D matrices that can't. Think about the matrix
in terms of strides and lengths specifying how the elements are laid out
in memory and things should become much clearer. I suspect the numpy book
(which is not expensive) does a better job of explaining it.

Anne.

From charlesr.harris at gmail.com  Fri Jun 8 23:19:19 2007
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Fri, 8 Jun 2007 21:19:19 -0600
Subject: [Numpy-discussion] I hate for loops
In-Reply-To: <4669F8B4.4070806@charter.net>
References: <4669F8B4.4070806@charter.net>
Message-ID:

On 6/8/07, Mathew Yeates wrote:
>
> Hi
> I'm looking for a more elegant way of setting my array elements
> Using "for" loops it would be
> for i in range(rows):
>     for j in range(cols):
>         N[i,j] = N[i-1][j] + N[i][j-1] - N[i-1][j-1]

If the initial values of the recursion are in the first row and column you
can use the result: N[i,j] = N[0,j] + N[i,0]. It's like the PDE D_xD_y N = 0
whose solution is the sum of two functions f(x) + g(y). It gets more
complicated if you are looking for a more general result.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From charlesr.harris at gmail.com  Fri Jun 8 23:21:59 2007
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Fri, 8 Jun 2007 21:21:59 -0600
Subject: [Numpy-discussion] I hate for loops
In-Reply-To:
References: <4669F8B4.4070806@charter.net>
Message-ID:

On 6/8/07, Charles R Harris wrote:
>
> On 6/8/07, Mathew Yeates wrote:
> >
> > Hi
> > I'm looking for a more elegant way of setting my array elements
> > Using "for" loops it would be
> > for i in range(rows):
> >     for j in range(cols):
> >         N[i,j] = N[i-1][j] + N[i][j-1] - N[i-1][j-1]
>
> If the initial values of the recursion are in the first row and column you
> can use the result: N[i,j] = N[0,j] + N[i,0]. It's like the PDE D_xD_y N = 0
> whose solution is the sum of two functions f(x) + g(y). It gets more
> complicated if you are looking for a more general result.
>
> Chuck
>

Make that N[i,j] = N[0,j] + N[i,0] - N[0,0].

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From david at ar.media.kyoto-u.ac.jp  Sat Jun 9 00:49:50 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Sat, 09 Jun 2007 13:49:50 +0900
Subject: [Numpy-discussion] Numpy-1.0.3 RPM install failure on SUSE 10.2
In-Reply-To: <20070609002108.GR20360@x2.nosyntax.com>
References: <20070609002108.GR20360@x2.nosyntax.com>
Message-ID: <466A316E.6040000@ar.media.kyoto-u.ac.jp>

rex wrote:
> I don't know if this is the appropriate place for this, but thanks for
> any pointers to what the problem is.
>
> -rex
>
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "/usr/lib/python2.5/site-packages/numpy/__init__.py", line 43, in <module>
>     import linalg
>   File "/usr/lib/python2.5/site-packages/numpy/linalg/__init__.py", line 4, in <module>
>     from linalg import *
>   File "/usr/lib/python2.5/site-packages/numpy/linalg/linalg.py", line 25, in <module>
>     from numpy.linalg import lapack_lite
>   ImportError: /usr/lib/libblas.so.3: undefined symbol: _gfortran_st_write_done
>
This is certainly because of a mismatch between the compiler used for
numpy and the one used for blas/lapack. I checked the rpm you are using,
and it looks like whereas blas and lapack are compiled with gfortran,
python-numpy is compiled with g77.
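A quick way to check this kind of mismatch (just a sketch, using the
library path from the traceback above) is to load the library directly
and look for the same missing Fortran runtime symbol:

import ctypes
try:
    # dlopen()ing the broken libblas directly should trigger the same
    # undefined-symbol error if it was built against a Fortran runtime
    # (libgfortran) that it is not actually linked with.
    ctypes.CDLL("/usr/lib/libblas.so.3")
except OSError, e:
    print e  # e.g. undefined symbol: _gfortran_st_write_done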
I do not use opensuse myself, but started a project to provide binaries
for numpy/scipy for this distribution: most of the packages like blas and
lapack are broken on this platform (I really wonder if there was any
testing of those packages by the packagers).

The quickest thing would be to install numpy from sources. If you're
willing to help, I have some binaries for numpy with my own blas/lapack
here, which need some feedback (and updating, too)

http://software.opensuse.org/download/home:/ashigabou/openSUSE_10.2/

cheers,

David

From rex at nosyntax.com  Sat Jun 9 04:19:41 2007
From: rex at nosyntax.com (rex)
Date: Sat, 9 Jun 2007 01:19:41 -0700
Subject: [Numpy-discussion] Numpy-1.0.3 RPM install failure on SUSE 10.2
In-Reply-To: <466A316E.6040000@ar.media.kyoto-u.ac.jp>
References: <20070609002108.GR20360@x2.nosyntax.com>
	<466A316E.6040000@ar.media.kyoto-u.ac.jp>
Message-ID: <20070609081941.GB24079@x2.nosyntax.com>

David Cournapeau [2007-06-08 22:49]:
> rex wrote:
> > I don't know if this is the appropriate place for this, but thanks for
> > any pointers to what the problem is.
> >
> > -rex
> >
> This is certainly because of a mismatch between the compiler used for
> numpy and the one used for blas/lapack. I checked the rpm you are using,
> and it looks like whereas blas and lapack are compiled with gfortran,
> python-numpy is compiled with g77. I do not use opensuse myself, but
> started a project to provide binaries for numpy/scipy for this
> distribution: most of the packages like blas and lapack are broken on
> this platform (I really wonder if there was any testing of those
> packages by the packagers).
>
> The quickest thing would be to install numpy from sources. If you're
> willing to help, I have some binaries for numpy with my own blas/lapack
> here, which need some feedback (and updating, too)
>
> http://software.opensuse.org/download/home:/ashigabou/openSUSE_10.2/

Thanks for the response. Numpy/SciPy has always been one of the most
difficult packages to install I've ever encountered (I've installed
hundreds of tarballs over the years). I've been using SUSE since version
6.4, and it's always a battle to get Numpy/SciPy running. I succeed
about every other release, but only after many hours of work. Half the
time I fail and don't use Numpy/SciPy until the next release. By then
I've forgotten the incredible frustration, and try again.

I tried to install from source with this result:

deserv:/usr/local/src/numpy-1.0.3 # python setup.py install
Running from numpy source directory.
Traceback (most recent call last):
  File "setup.py", line 91, in <module>
    setup_package()
  File "setup.py", line 61, in setup_package
    from numpy.distutils.core import setup
  File "/usr/local/src/numpy-1.0.3/numpy/distutils/core.py", line 24, in <module>
    from numpy.distutils.command import build_ext
  File "/usr/local/src/numpy-1.0.3/numpy/distutils/command/build_ext.py", line 16, in <module>
    from numpy.distutils.system_info import combine_paths
  File "/usr/local/src/numpy-1.0.3/numpy/distutils/system_info.py", line 159, in <module>
    so_ext = distutils.sysconfig.get_config_vars('SO')[0] or ''
  File "/usr/lib/python2.5/distutils/sysconfig.py", line 493, in get_config_vars
    func()
  File "/usr/lib/python2.5/distutils/sysconfig.py", line 352, in _init_posix
    raise DistutilsPlatformError(my_msg)
distutils.errors.DistutilsPlatformError: invalid Python installation:
unable to open /usr/lib/python2.5/config/Makefile (No such file or
directory)

There is no 'config' directory under /usr/lib/python2.5

There is a science repository linked from
http://www.scipy.org/Installing_SciPy/Linux

http://repos.opensuse.org/science/

I tried to install from the RPMs there, but the recursive dependency
chain broke when a gfortran lib that isn't there was required. I also
wonder if there was ANY testing by the packagers.

http://software.opensuse.org/download/home:/ashigabou

Times out. :(

Again, thanks for the response. I think I'm going to go do something more
pleasant, like lie on a stinging ant nest...

-rex

From gael.varoquaux at normalesup.org  Sat Jun 9 04:34:42 2007
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sat, 9 Jun 2007 10:34:42 +0200
Subject: [Numpy-discussion] Numpy-1.0.3 RPM install failure on SUSE 10.2
In-Reply-To: <20070609081941.GB24079@x2.nosyntax.com>
References: <20070609002108.GR20360@x2.nosyntax.com>
	<466A316E.6040000@ar.media.kyoto-u.ac.jp>
	<20070609081941.GB24079@x2.nosyntax.com>
Message-ID: <20070609083442.GA6743@clipper.ens.fr>

On Sat, Jun 09, 2007 at 01:19:41AM -0700, rex wrote:
> Thanks for the response. Numpy/SciPy has always been one of the most
> difficult packages to install I've ever encountered (I've installed
> hundreds of tarballs over the years). I've been using SUSE since version
> 6.4, and it's always a battle to get Numpy/SciPy running.

I know it doesn't add much to the discussion, but with Ubuntu (or Debian)
it's an absolute breeze, as the packaging has been well done.

Gaël

From david at ar.media.kyoto-u.ac.jp  Sat Jun 9 04:48:08 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Sat, 09 Jun 2007 17:48:08 +0900
Subject: [Numpy-discussion] Numpy-1.0.3 RPM install failure on SUSE 10.2
In-Reply-To: <20070609081941.GB24079@x2.nosyntax.com>
References: <20070609002108.GR20360@x2.nosyntax.com>
	<466A316E.6040000@ar.media.kyoto-u.ac.jp>
	<20070609081941.GB24079@x2.nosyntax.com>
Message-ID: <466A6948.2040706@ar.media.kyoto-u.ac.jp>

rex wrote:
>
> Thanks for the response. Numpy/SciPy has always been one of the most
> difficult packages to install I've ever encountered (I've installed
> hundreds of tarballs over the years). I've been using SUSE since version
> 6.4, and it's always a battle to get Numpy/SciPy running. I succeed
> about every other release, but only after many hours of work. Half the
> time I fail and don't use Numpy/SciPy until the next release. By then
> I've forgotten the incredible frustration, and try again.

This is really Suse's fault, honestly. Like Fedora, they have mostly
broken blas or lapack, or at least had for a long time.
I guess this is because most people using BLAS/LAPACK compile them by themselves, and not that much software depends on them; when I started packaging BLAS and LAPACK correctly for suse, I noticed that the g77 compiler had been broken for several months! Also, if you take a look at the spec files of blas/lapack in suse, and compare them with the work done by the debian packagers, it is obvious why one works flawlessly, and the others do not (it is true that BLAS and LAPACK are difficult to package correctly because they have primitive makefiles). I have *never* encountered so many problems on any debian or ubuntu machines. If I had, maybe I wouldn't be using numpy or scipy right now. Using opensuse and fedora has been really, really painful when I started packaging numpy and scipy for those platforms. I would go as far as advising you to install a distribution which packages its software correctly, like debian: you can install it alongside your opensuse by using a chroot jail, or any other means. > > I tried to install from source with this result: > > deserv:/usr/local/src/numpy-1.0.3 # python setup.py install > > Running from numpy source directory. > > Traceback (most recent call last): > > File "setup.py", line 91, in <module> > > setup_package() > > File "setup.py", line 61, in setup_package > > from numpy.distutils.core import setup > > File "/usr/local/src/numpy-1.0.3/numpy/distutils/core.py", line 24, in <module> > > from numpy.distutils.command import build_ext > > File "/usr/local/src/numpy-1.0.3/numpy/distutils/command/build_ext.py", line 16, in <module> > > from numpy.distutils.system_info import combine_paths > > File "/usr/local/src/numpy-1.0.3/numpy/distutils/system_info.py", line 159, in <module> > > so_ext = distutils.sysconfig.get_config_vars('SO')[0] or '' > > File "/usr/lib/python2.5/distutils/sysconfig.py", line 493, in get_config_vars > > func() > > File "/usr/lib/python2.5/distutils/sysconfig.py", line 352, in _init_posix > > raise DistutilsPlatformError(my_msg) > > distutils.errors.DistutilsPlatformError: invalid Python installation: unable to open /usr/lib/python2.5/config/Makefile (No such file or > > directory) Did you install python-devel, or something like this? > > There is no 'config' directory under /usr/lib/python2.5 > > There is a science repository linked from http://www.scipy.org/Installing_SciPy/Linux > > http://repos.opensuse.org/science/ > > I tried to install from the RPMs there, but the recursive dependency > > chain broke when a gfortran lib that isn't there was required. I also > > wonder if there was ANY testing by the packagers. > > http://software.opensuse.org/download/home:/ashigabou > > Times out. :( Unfortunately, there seems to be a lot of server issues on the service... David From rex at nosyntax.com Sat Jun 9 11:39:19 2007 From: rex at nosyntax.com (rex) Date: Sat, 9 Jun 2007 08:39:19 -0700 Subject: [Numpy-discussion] Numpy-1.0.3 RPM install failure on SUSE 10.2 In-Reply-To: <466A6948.2040706@ar.media.kyoto-u.ac.jp> References: <20070609002108.GR20360@x2.nosyntax.com> <466A316E.6040000@ar.media.kyoto-u.ac.jp> <20070609081941.GB24079@x2.nosyntax.com> <466A6948.2040706@ar.media.kyoto-u.ac.jp> Message-ID: <20070609153918.GB10376@x2.nosyntax.com> David Cournapeau [2007-06-09 06:35]: > rex wrote: > > > > I've been using SUSE since version > > 6.4, and it's always a battle to get Numpy/SciPy running. > > This is really Suse's fault, honestly. Like Fedora, they have mostly broken > blas or lapack, or at least had for a long time. Yes, I know. They have released some very broken versions recently.
10.1 was so bad that they remastered it. 10.2 will not install from a SATA CD/DVD without an obscure trick. The bug was reported a number of times, but the priority was set so low -- even after it was pointed out that it is a show-stopper for most people -- that it did not get fixed. Software installs from YAST are extremely tedious. 10.3 is supposed to be better, but who knows? > I would go as far as > advising you to install a distribution which packages its software > correctly, like debian: you can install it alongside your opensuse by > using a chroot jail, or any other means. > I tried Debian a couple of years ago and the Devil I know (somewhat) seemed a better choice. Perhaps I should try again. > > distutils.errors.DistutilsPlatformError: invalid Python installation: unable to open /usr/lib/python2.5/config/Makefile (No such file or > > directory) > Did you install python-devel, or something like this? Ah, I had this problem the last time also, and Robert Kern kindly suggested the same thing. I'd totally forgotten about it. :( > > There is a science repository linked from http://www.scipy.org/Installing_SciPy/Linux > > > > http://repos.opensuse.org/science/ > > > > I tried to install from the RPMs there, but the recursive dependency > > chain broke when a gfortran lib that isn't there was required. I also > > wonder if there was ANY testing by the packagers. > > > > http://software.opensuse.org/download/home:/ashigabou > > > > Times out. :( > > Unfortunately, there seems to be a lot of server issues on the service... Today it responded and I installed Numpy, LAPACK and refblas from there using the Smart package manager. It works. :) Python 2.5 (r25:51908, Nov 27 2006, 19:14:46) [GCC 4.1.2 20061115 (prerelease) (SUSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> numpy.test() Found 5 tests for numpy.distutils.misc_util Found 3 tests for numpy.lib.getlimits Found 31 tests for numpy.core.numerictypes Found 32 tests for numpy.linalg Found 13 tests for numpy.core.umath Found 4 tests for numpy.core.scalarmath Found 9 tests for numpy.lib.arraysetops Found 42 tests for numpy.lib.type_check Found 188 tests for numpy.core.multiarray Found 3 tests for numpy.fft.helper Found 36 tests for numpy.core.ma Found 1 tests for numpy.lib.ufunclike Found 1 tests for numpy.fft.fftpack Found 12 tests for numpy.lib.twodim_base Found 10 tests for numpy.core.defmatrix Found 4 tests for numpy.ctypeslib Found 41 tests for numpy.lib.function_base Found 2 tests for numpy.lib.polynomial Found 9 tests for numpy.core.records Found 29 tests for numpy.core.numeric Found 4 tests for numpy.lib.index_tricks Found 47 tests for numpy.lib.shape_base Found 0 tests for __main__ ---------------------------------------------------------------------- Ran 526 tests in 0.474s OK >>> Thanks much for the help. Now, I'm going to try to build from source again using MKL. I'd forgotten that I posted what I had to do to make it work on 24 Jan 2007.
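Before rebuilding, I'll time something BLAS-heavy with the current install so I can tell whether MKL actually buys anything. A throwaway sketch (matrix size and repeat count are arbitrary):

import time
import numpy

n = 500
a = numpy.random.rand(n, n)
b = numpy.random.rand(n, n)
t0 = time.time()
for i in range(10):
    numpy.dot(a, b)   # goes through whatever BLAS numpy was linked against
print "10 matmuls of %dx%d: %.2f seconds" % (n, n, time.time() - t0)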
-rex From rex at nosyntax.com Sat Jun 9 14:00:32 2007 From: rex at nosyntax.com (rex) Date: Sat, 9 Jun 2007 11:00:32 -0700 Subject: [Numpy-discussion] Compiling Numpy with Intel MKL9.1 on SUSE10.2 Message-ID: <20070609180032.GC10376@x2.nosyntax.com> I changed the cc_exe line in numpy-1.0.3/numpy/distutils/intelccompiler.py to: cc_exe = 'icc -msse3 -xP -fast' #Core 2 Duo From the numpy-1.0.3 directory executed: python setup.py config --compiler=intel build_clib --compiler=intel build_ext --compiler=intel install --prefix=/usr/local/ [much snipped] compiling C sources C compiler: icc -msse3 -xP -fast creating build/temp.linux-i686-2.5/numpy/fft compile options: '-Inumpy/core/include -Ibuild/src.linux-i686-2.5/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.5 -c' icc: numpy/fft/fftpack.c icc: numpy/fft/fftpack_litemodule.c creating build/lib.linux-i686-2.5/numpy/fft icc -msse3 -xP -fast -shared build/temp.linux-i686-2.5/numpy/fft/fftpack_litemodule.o build/temp.linux-i686-2.5/numpy/fft/fftpack.o -L/usr/lib/python2.5/config -lpython2.5 -o build/lib.linux-i686-2.5/numpy/fft/fftpack_lite.so ipo: remark #11000: performing multi-file optimizations ipo: remark #11005: generating object file /tmp/ipo_iccdXKswc.o numpy/fft/fftpack.c(1203): (col. 5) remark: LOOP WAS VECTORIZED. numpy/fft/fftpack.c(293): (col. 9) remark: LOOP WAS VECTORIZED. numpy/fft/fftpack.c(336): (col. 7) remark: LOOP WAS VECTORIZED. numpy/fft/fftpack.c(350): (col. 5) remark: LOOP WAS VECTORIZED. numpy/fft/fftpack.c(1313): (col. 5) remark: LOOP WAS VECTORIZED. numpy/fft/fftpack.c(1440): (col. 5) remark: LOOP WAS VECTORIZED. numpy/fft/fftpack.c(813): (col. 7) remark: LOOP WAS VECTORIZED. numpy/fft/fftpack.c(875): (col. 7) remark: LOOP WAS VECTORIZED. numpy/fft/fftpack.c(912): (col. 7) remark: LOOP WAS VECTORIZED. numpy/fft/fftpack.c(917): (col. 9) remark: LOOP WAS VECTORIZED. numpy/fft/fftpack.c(1447): (col. 5) remark: LOOP WAS VECTORIZED. numpy/fft/fftpack.c(986): (col. 9) remark: LOOP WAS VECTORIZED. numpy/fft/fftpack.c(1072): (col. 7) remark: LOOP WAS VECTORIZED. numpy/fft/fftpack.c(1110): (col. 5) remark: LOOP WAS VECTORIZED. numpy/fft/fftpack.c(1496): (col. 5) remark: LOOP WAS VECTORIZED. numpy/fft/fftpack.c(1433): (col. 5) remark: LOOP WAS VECTORIZED.
building 'numpy.linalg.lapack_lite' extension compiling C sources C compiler: icc -msse3 -xP -fast creating build/temp.linux-i686-2.5/numpy/linalg compile options: '-DSCIPY_MKL_H -I/opt/intel/mkl/9.1/include -Inumpy/core/include -Ibuild/src.linux-i686-2.5/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.5 -c' icc: numpy/linalg/lapack_litemodule.c creating build/lib.linux-i686-2.5/numpy/linalg icc -msse3 -xP -fast -shared build/temp.linux-i686-2.5/numpy/linalg/lapack_litemodule.o -L/opt/intel/mkl/9.1/lib/32 -L/usr/lib/python2.5/config -lmkl_lapack32 -lmkl_lapack64 -lmkl -lvml -lguide -lpthread -lpython2.5 -o build/lib.linux-i686-2.5/numpy/linalg/lapack_lite.so IPO link: can not find -lmkl_lapack32 icc: error #10014: problem during multi-file optimization compilation (code 1) IPO link: can not find -lmkl_lapack32 icc: error #10014: problem during multi-file optimization compilation (code 1) error: Command "icc -msse3 -xP -fast -shared build/temp.linux-i686-2.5/numpy/linalg/lapack_litemodule.o -L/opt/intel/mkl/9.1/lib/32 -L/usr/lib/python2.5/config -lmkl_lapack32 -lmkl_lapack64 -lmkl -lvml -lguide -lpthread -lpython2.5 -o build/lib.linux-i686-2.5/numpy/linalg/lapack_lite.so" failed with exit status 1 There is no mkl_lapack32 in /opt/intel/mkl/9.1/lib/32 There are only 'libmkl_lapack.a' and 'libmkl_lapack.so' Likewise, there is no mkl_lapack64 in /opt/intel/mkl/9.1/lib/64 I don't understand where the '-lmkl_lapack32' and '-lmkl_lapack64' are coming from. I looked in lapack_litemodule.c, but didn't see anything about lapack. Thanks for any pointers, -rex From rex at nosyntax.com Sat Jun 9 15:08:15 2007 From: rex at nosyntax.com (rex) Date: Sat, 9 Jun 2007 12:08:15 -0700 Subject: [Numpy-discussion] Compiling Numpy with Intel MKL9.1 on SUSE10.2 (Solved!) In-Reply-To: <20070609180032.GC10376@x2.nosyntax.com> References: <20070609180032.GC10376@x2.nosyntax.com> Message-ID: <20070609190815.GD10376@x2.nosyntax.com> rex [2007-06-09 11:02]: > I changed the cc_exe line in > numpy-1.0.3/numpy/distutils/intelccompiler.py to: > > cc_exe = 'icc -msse3 -xP -fast' #Core 2 Duo > > From the numpy-1.0.3 directory executed: > > python setup.py config --compiler=intel build_clib --compiler=intel build_ext --compiler=intel install --prefix=/usr/local/ > > [much snipped] > icc: error #10014: problem during multi-file optimization compilation (code 1) > IPO link: can not find -lmkl_lapack32 > icc: error #10014: problem during multi-file optimization compilation (code 1) > error: Command "icc -msse3 -xP -fast -shared build/temp.linux-i686-2.5/numpy/linalg/lapack_litemodule.o -L/opt/intel/mkl/9.1/lib/32 -L/usr/lib/python2.5/config -lmkl_lapack32 -lmkl_lapack64 -lmkl -lvml -lguide -lpthread -lpython2.5 -o build/lib.linux-i686-2.5/numpy/linalg/lapack_lite.so" failed with exit status 1 > > There is no mkl_lapack32 in /opt/intel/mkl/9.1/lib/32 > There are only 'libmkl_lapack.a' and 'libmkl_lapack.so' > Likewise, there is no mkl_lapack64 in /opt/intel/mkl/9.1/lib/64 A recursive grep on /usr revealed that the file: /usr/local/src/numpy-1.0.3/numpy/distutils/system_info.py has sections that look for 'mkl_lapack32' and 'mkl_lapack64'. I deleted the references to 'mkl_lapack64' and changed 'mkl_lapack32' to 'mkl_lapack', and voila, numpy compiles!
(after removing the 'build' directory in the numpy root -- there will be errors if stuff is there from a prior attempt) Python 2.5 (r25:51908, Nov 27 2006, 19:14:46) [GCC 4.1.2 20061115 (prerelease) (SUSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> numpy.test() Found 5 tests for numpy.distutils.misc_util Found 3 tests for numpy.lib.getlimits Found 31 tests for numpy.core.numerictypes Found 32 tests for numpy.linalg Found 13 tests for numpy.core.umath Found 4 tests for numpy.core.scalarmath Found 9 tests for numpy.lib.arraysetops Found 42 tests for numpy.lib.type_check Found 188 tests for numpy.core.multiarray Found 3 tests for numpy.fft.helper Found 36 tests for numpy.core.ma Found 1 tests for numpy.lib.ufunclike Found 1 tests for numpy.fft.fftpack Found 12 tests for numpy.lib.twodim_base Found 10 tests for numpy.core.defmatrix Found 4 tests for numpy.ctypeslib Found 41 tests for numpy.lib.function_base Found 2 tests for numpy.lib.polynomial Found 9 tests for numpy.core.records Found 29 tests for numpy.core.numeric Found 4 tests for numpy.lib.index_tricks Found 47 tests for numpy.lib.shape_base Found 0 tests for __main__ ---------------------------------------------------------------------- Ran 526 tests in 0.570s OK The numpy-1.0.1 RPM version is faster: Ran 526 tests in 0.474s I wonder why that is? -rex From strawman at astraw.com Sat Jun 9 15:09:52 2007 From: strawman at astraw.com (Andrew Straw) Date: Sat, 09 Jun 2007 12:09:52 -0700 Subject: [Numpy-discussion] VMWare Virtual Appliance of Ubuntu with numpy, scipy, matplotlib, and ipython available Message-ID: <466AFB00.9010404@astraw.com> This is a note to announce the availability of a VMWare Virtual Appliance with Ubuntu linux with numpy, scipy, matplotlib, and ipython installed. This should make it relatively easy to try out the software. The VMWare Player and VMWare Server are available for no cost from http://www.vmware.com/products/player/ and http://www.vmware.com/products/server/ The download URL is: http://mosca.caltech.edu/outgoing/Ubuntu%207.04%20for%20scientific%20computing%20in%20Python.zip The username is "ubuntu" and the password is "abc123". The network will share the host's interface using NAT. The md5sum is 4191e13abda1154c94e685ffdc0f829b. I have updated http://scipy.org/Download with this information. From rex at nosyntax.com Sat Jun 9 16:01:02 2007 From: rex at nosyntax.com (rex) Date: Sat, 9 Jun 2007 13:01:02 -0700 Subject: [Numpy-discussion] Editing the SciPy wiki pages Message-ID: <20070609200102.GF10376@x2.nosyntax.com> While compiling Numpy using MKL9.1 is fresh in my mind, I'd like to update some things in the /Installing_SciPy/Linux page. I've registered as a user, but still am not allowed to edit the page. What's required? (I run a couple of Mediawiki sites, so it shouldn't take me long to do simple edits in MoinMoin.) 
-rex From david at ar.media.kyoto-u.ac.jp Sun Jun 10 01:35:22 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sun, 10 Jun 2007 14:35:22 +0900 Subject: [Numpy-discussion] Numpy-1.0.3 RPM install failure on SUSE 10.2 In-Reply-To: <20070609153918.GB10376@x2.nosyntax.com> References: <20070609002108.GR20360@x2.nosyntax.com> <466A316E.6040000@ar.media.kyoto-u.ac.jp> <20070609081941.GB24079@x2.nosyntax.com> <466A6948.2040706@ar.media.kyoto-u.ac.jp> <20070609153918.GB10376@x2.nosyntax.com> Message-ID: <466B8D9A.4030109@ar.media.kyoto-u.ac.jp> rex wrote: > > I tried Debian a couple of years ago and the Devil I know (somewhat) > seemed a better choice. Perhaps I should try again. The Debian installer has improved a lot over the last few years. Otherwise, try Ubuntu, which is easier to install than mac os x on supported hardware (I am not kidding). Both distributions have correct blas, lapack and testers/timers (I basically copied their scheme and adapted it for the rpms). They also package atlas correctly. > > Ah, I had this problem the last time also, and Robert Kern kindly > suggested the same thing. I'd totally forgotten about it. :( This one has nothing to do with Suse, at least. When compiling software, you have to install a lot of -devel (-dev on debian) packages. This kind of information should be easy to add to the wiki, though. > > Today it responded and I installed Numpy, LAPACK and refblas from there > using the Smart package manager. It works. :) Good to hear. I can update to 1.0.3 if you need it. Note that because it uses the reference blas and lapack, it will be relatively slow compared to say ATLAS or MKL. But I took great care to package blas and lapack with their testers and timers, dynamically linked to blas and lapack, so that you can use them to test other implementations (say ATLAS). David From openopt at ukr.net Sun Jun 10 11:16:51 2007 From: openopt at ukr.net (dmitrey) Date: Sun, 10 Jun 2007 18:16:51 +0300 Subject: [Numpy-discussion] tile() can't handle empty arrays Message-ID: <466C15E3.2060704@ukr.net> in octave repmat([],2,3) works well: ans = [](0x0) I guess it does in MATLAB too, but numpy tile() can't handle it correctly: >>> from numpy import * >>> tile(array([1]), (2,3)) array([[1, 1, 1], [1, 1, 1]]) >>> tile(array([]), (2,3)) Traceback (innermost last): File "<stdin>", line 1, in <module> File "/usr/lib/python2.5/site-packages/numpy/lib/shape_base.py", line 623, in tile c = c.reshape(-1,n).repeat(nrep,0) ValueError: total size of new array must be unchanged (I have numpy 1.0.1) D. From jstrunk at enthought.com Sun Jun 10 12:10:37 2007 From: jstrunk at enthought.com (Jeff Strunk) Date: Sun, 10 Jun 2007 11:10:37 -0500 Subject: [Numpy-discussion] Editing the SciPy wiki pages In-Reply-To: <20070609200102.GF10376@x2.nosyntax.com> References: <20070609200102.GF10376@x2.nosyntax.com> Message-ID: <200706101110.37764.jstrunk@enthought.com> On Saturday 09 June 2007 3:01:02 pm rex wrote: > While compiling Numpy using MKL9.1 is fresh in my mind, I'd like to > update some things in the /Installing_SciPy/Linux page. I've registered > as a user, but still am not allowed to edit the page. What's required? > > (I run a couple of Mediawiki sites, so it shouldn't take me long to do > simple edits in MoinMoin.) > > -rex > Someone needs to add your wiki account to the EditorsGroup. What is your wiki username?
Thanks, Jeff From jstrunk at enthought.com Sun Jun 10 12:23:35 2007 From: jstrunk at enthought.com (Jeff Strunk) Date: Sun, 10 Jun 2007 11:23:35 -0500 Subject: [Numpy-discussion] Editing the SciPy wiki pages In-Reply-To: <200706101110.37764.jstrunk@enthought.com> References: <20070609200102.GF10376@x2.nosyntax.com> <200706101110.37764.jstrunk@enthought.com> Message-ID: <200706101123.35301.jstrunk@enthought.com> On Sunday 10 June 2007 11:10:37 am Jeff Strunk wrote: > On Saturday 09 June 2007 3:01:02 pm rex wrote: > > While compiling Numpy using MKL9.1 is fresh in my mind, I'd like to > > update some things in the /Installing_SciPy/Linux page. I've registered > > as a user, but still am not allowed to edit the page. What's required? > > > > (I run a couple of Mediawiki sites, so it shouldn't take me long to do > > simple edits in MoinMoin.) > > > > -rex > > Someone needs to add your wiki account to the EditorsGroup. What is your > wiki username? > > Thanks, > Jeff That was incorrect. I checked the config file. I just tested it by registering a new user, logging in, and editing /Installing_SciPy/Linux . A couple of possibilities for why it did not work for you are: * You don't have cookies enabled. * Registering doesn't automatically log you in. Please let me know if either of these is the case or it continues to not let you edit. Thanks, Jeff From wbaxter at gmail.com Sun Jun 10 22:27:33 2007 From: wbaxter at gmail.com (Bill Baxter) Date: Mon, 11 Jun 2007 11:27:33 +0900 Subject: [Numpy-discussion] VMWare Virtual Appliance of Ubuntu with numpy, scipy, matplotlib, and ipython available In-Reply-To: <466AFB00.9010404@astraw.com> References: <466AFB00.9010404@astraw.com> Message-ID: For those who are not aware, I have just discovered that the graphics performance of VMWare Player is *MUCH* better than that of VMWare Server. The latter is apparently optimized for disconnected headless operation and access via a network, and so it uses some heavyweight remote protocol for all graphics, while the former is optimized for local, interactive use and basic desktop graphics are quite zippy. On the other hand, VMWare Tools doesn't seem to be available under Player. Fortunately, Player seems to be able to work with VMWare Tools that were installed under Server. Unfortunately Server and Player can't both be installed at the same time. And Player can't create new VM's. So maybe the best bet is to install Server first, create any new VMs you want to create, set them up with the VMWare Tools, then switch to VMWare Player to get the improved graphics performance. VMWare Player can use the vm's created by Server just fine. --bb On 6/10/07, Andrew Straw wrote: > This is a note to announce the availability of a VMWare Virtual > Appliance with Ubuntu linux with numpy, scipy, matplotlib, and ipython > installed. > > This should make it relatively easy to try out the software. The VMWare > Player and VMWare Server are available for no cost from > http://www.vmware.com/products/player/ and > http://www.vmware.com/products/server/ > > The download URL is: > http://mosca.caltech.edu/outgoing/Ubuntu%207.04%20for%20scientific%20computing%20in%20Python.zip > > The username is "ubuntu" and the password is "abc123". The network will > share the host's interface using NAT. The md5sum is > 4191e13abda1154c94e685ffdc0f829b. > > I have updated http://scipy.org/Download with this information. 
> > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > From alex.liberzon at gmail.com Mon Jun 11 05:34:50 2007 From: alex.liberzon at gmail.com (Alex) Date: Mon, 11 Jun 2007 09:34:50 -0000 Subject: [Numpy-discussion] VMWare Virtual Appliance of Ubuntu with numpy, scipy, matplotlib, and ipython available In-Reply-To: References: <466AFB00.9010404@astraw.com> Message-ID: <1181554490.244821.88250@p77g2000hsh.googlegroups.com> I run VMWare Player and it works fine - easy to set up and easy to set up a Matlab replacement. I only wonder how to transfer files between the Windows XP and the Ubuntu VM? Thanks, Alex On Jun 10, 9:27 pm, "Bill Baxter" wrote: > For those who are not aware, I have just discovered that the graphics > performance of VMWare Player is *MUCH* better than that of VMWare > Server. The latter is apparently optimized for disconnected headless > operation and access via a network, and so it uses some heavyweight > remote protocol for all graphics, while the former is optimized for > local, interactive use and basic desktop graphics are quite zippy. > > On the other hand, VMWare Tools doesn't seem to be available under > Player. Fortunately, Player seems to be able to work with VMWare > Tools that were installed under Server. > > Unfortunately Server and Player can't both be installed at the same > time. And Player can't create new VM's. So maybe the best bet is to > install Server first, create any new VMs you want to create, set them > up with the VMWare Tools, then switch to VMWare Player to get the > improved graphics performance. VMWare Player can use the vm's created > by Server just fine. > > --bb > > On 6/10/07, Andrew Straw wrote: > > > This is a note to announce the availability of a VMWare Virtual > > Appliance with Ubuntu linux with numpy, scipy, matplotlib, and ipython > > installed. > > > This should make it relatively easy to try out the software. The VMWare > > Player and VMWare Server are available for no cost from > > http://www.vmware.com/products/player/ and > > http://www.vmware.com/products/server/ > > > The download URL is: > > http://mosca.caltech.edu/outgoing/Ubuntu%207.04%20for%20scientific%20... > > > The username is "ubuntu" and the password is "abc123". The network will > > share the host's interface using NAT. The md5sum is > > 4191e13abda1154c94e685ffdc0f829b. > > > I have updated http://scipy.org/Download with this information. > > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discuss... at scipy.org > > http://projects.scipy.org/mailman/listinfo/numpy-discussion > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discuss... at scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion From svetosch at gmx.net Mon Jun 11 06:43:35 2007 From: svetosch at gmx.net (Sven Schreiber) Date: Mon, 11 Jun 2007 11:43:35 +0100 Subject: [Numpy-discussion] VMWare Virtual Appliance of Ubuntu with numpy, scipy, matplotlib, and ipython available In-Reply-To: <1181554490.244821.88250@p77g2000hsh.googlegroups.com> References: <466AFB00.9010404@astraw.com> <1181554490.244821.88250@p77g2000hsh.googlegroups.com> Message-ID: <466D2757.1070102@gmx.net> Alex schrieb: > I run VMWare Player and it works fine - easy to set up and easy to > set up a Matlab replacement. I only wonder how to transfer files > between the Windows XP and the Ubuntu VM?
Apart from using usb sticks (AFAIK VMware still has issues with usb 2.0 though) I think the experts usually call a samba or ftp server on the linux side the cleanest solution. The ultimate thing in terms of user friendliness would be if the numpy virtual appliance already had that pre-configured! And now let the experts step forward to announce better and easier solutions... -sven From charlesr.harris at gmail.com Mon Jun 11 11:58:16 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 11 Jun 2007 09:58:16 -0600 Subject: [Numpy-discussion] VMWare Virtual Appliance of Ubuntu with numpy, scipy, matplotlib, and ipython available In-Reply-To: <466D2757.1070102@gmx.net> References: <466AFB00.9010404@astraw.com> <1181554490.244821.88250@p77g2000hsh.googlegroups.com> <466D2757.1070102@gmx.net> Message-ID: On 6/11/07, Sven Schreiber wrote: > > Alex schrieb: > > I run VMWare Player and it works fine - easy to set up and easy to > > set up a Matlab replacement. I only wonder how to transfer files > > between the Windows XP and the Ubuntu VM? > > > Apart from using usb sticks (AFAIK VMware still has issues with usb 2.0 > though) I think the experts usually call a samba or ftp server on the > linux side the cleanest solution. The ultimate thing in terms of user > friendliness would be if the numpy virtual appliance already had that > pre-configured! > > And now let the experts step forward to announce better and easier > solutions... Does *ntfs*-3g work for Ubuntu running on VMware Player? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryanlists at gmail.com Mon Jun 11 16:07:14 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon, 11 Jun 2007 15:07:14 -0500 Subject: [Numpy-discussion] VMWare Virtual Appliance of Ubuntu with numpy, scipy, matplotlib, and ipython available In-Reply-To: <466D2757.1070102@gmx.net> References: <466AFB00.9010404@astraw.com> <1181554490.244821.88250@p77g2000hsh.googlegroups.com> <466D2757.1070102@gmx.net> Message-ID: I was able to share some folders in Windows and then browse to those shared folders from the VMware guest Ubuntu OS. I think there is some default samba stuff already running looking for shared folders. I think I had to set something in the VMware player to say that I trusted the guest OS to allow it to modify my files. It was pretty straightforward (but I am not sitting in front of that computer right now and it isn't booted in Windows right now either, so I can't check the exact details). Ryan On 6/11/07, Sven Schreiber wrote: > Alex schrieb: > > I run VMWare Player and it works fine - easy to set up and easy to > > set up a Matlab replacement. I only wonder how to transfer files > > between the Windows XP and the Ubuntu VM? > > > Apart from using usb sticks (AFAIK VMware still has issues with usb 2.0 > though) I think the experts usually call a samba or ftp server on the > linux side the cleanest solution. The ultimate thing in terms of user > friendliness would be if the numpy virtual appliance already had that > pre-configured! > > And now let the experts step forward to announce better and easier > solutions...
> -sven > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > From wbaxter at gmail.com Mon Jun 11 20:18:49 2007 From: wbaxter at gmail.com (Bill Baxter) Date: Tue, 12 Jun 2007 09:18:49 +0900 Subject: [Numpy-discussion] VMWare Virtual Appliance of Ubuntu with numpy, scipy, matplotlib, and ipython available In-Reply-To: References: <466AFB00.9010404@astraw.com> <1181554490.244821.88250@p77g2000hsh.googlegroups.com> <466D2757.1070102@gmx.net> Message-ID: The SMB *client* is installed by default in ubuntu. You have to add the smb server separately using apt-get or the synaptic package manager. sudo apt-get install samba After you install the smb server, go to System->Administration->Shared Folders and add a shared folder. Then from a console run sudo smbpasswd -a <username> and after that restart samba sudo /etc/init.d/samba restart At least that's what worked for me. I find it a lot easier to be able to do the sharing/copying from the Windows side using Explorer than the ubuntu side using the Gnome file thing. But YMMV. It's nice to have both pathways set up. On 6/12/07, Ryan Krauss wrote: > I was able to share some folders in Windows and then browse to those > shared folders from the VMware guest Ubuntu OS. I think there is some > default samba stuff already running looking for shared folders. I > think I had to set something in the VMware player to say that I > trusted the guest OS to allow it to modify my files. It was pretty > straightforward (but I am not sitting in front of that computer right > now and it isn't booted in Windows right now either, so I can't check > the exact details). > > Ryan > > On 6/11/07, Sven Schreiber wrote: > > Alex schrieb: > > > I run VMWare Player and it works fine - easy to set up and easy to > > > set up a Matlab replacement. I only wonder how to transfer files > > > between the Windows XP and the Ubuntu VM? > > > > > > Apart from using usb sticks (AFAIK VMware still has issues with usb 2.0 > > though) I think the experts usually call a samba or ftp server on the > > linux side the cleanest solution. The ultimate thing in terms of user > > friendliness would be if the numpy virtual appliance already had that > > pre-configured! > > > > And now let the experts step forward to announce better and easier > > solutions... > > > > -sven > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at scipy.org > > http://projects.scipy.org/mailman/listinfo/numpy-discussion > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > From fullung at gmail.com Mon Jun 11 20:25:15 2007 From: fullung at gmail.com (Albert Strasheim) Date: Tue, 12 Jun 2007 02:25:15 +0200 Subject: [Numpy-discussion] VMWare Virtual Appliance of Ubuntu with numpy, scipy, matplotlib, and ipython available References: <466AFB00.9010404@astraw.com> Message-ID: <006501c7ac88$25bab030$0100a8c0@sun.ac.za> Hello all ----- Original Message ----- From: "Andrew Straw" To: "Discussion of Numerical Python" Sent: Saturday, June 09, 2007 9:09 PM Subject: [Numpy-discussion] VMWare Virtual Appliance of Ubuntu with numpy, scipy, matplotlib, and ipython available > This is a note to announce the availability of a VMWare Virtual > Appliance with Ubuntu linux with numpy, scipy, matplotlib, and ipython > installed.
I'm assuming you ran the test suites and everything went fine? I've set up a 32-bit Windows XP guest inside VMWare Server 1.0.3 on a 64-bit Linux machine and two of the NumPy tests are segfaulting for some strange reason. They are: numpy.core.tests.test_defmatrix.test_casting.check_basic numpy.core.tests.test_numeric.test_dot.check_matmat Do these pass for you? I'm inclined to blame VMWare at this point... Cheers, Albert From fullung at gmail.com Mon Jun 11 21:06:15 2007 From: fullung at gmail.com (Albert Strasheim) Date: Tue, 12 Jun 2007 03:06:15 +0200 Subject: [Numpy-discussion] Intel MKL 9.1 on Windows (was: Re: VMWare Virtual Appliance...) Message-ID: <20070612010615.GA24822@dogbert.sdsl.sun.ac.za> Cancel that. It seems the problems with these two tests are being caused by Intel MKL 9.1 on Windows. However, 9.0 works fine. You basically have 2 options when linking against MKL on Windows as far as the mkl_libs go.

[mkl]
include_dirs = C:\Program Files\Intel\MKL\9.1\include
library_dirs = C:\Program Files\Intel\MKL\9.1\ia32\lib
mkl_libs = mkl_c, libguide40
lapack_libs = mkl_lapack

or

mkl_libs = mkl_c, libguide

I think libguide is the library that contains various thread and OpenMP related bits and pieces. If you link against libguide, you get the following error when running the NumPy tests: OMP abort: Initializing libguide.lib, but found libguide.lib already initialized. This may cause performance degradation and correctness issues. Set environment variable KMP_DUPLICATE_LIB_OK=TRUE to ignore this problem and force the program to continue anyway. Please note that the use of KMP_DUPLICATE_LIB_OK is unsupported and using it may cause undefined behavior. For more information, please contact Intel(R) Premier Support. I think this happens because multiple submodules inside NumPy are linked against this libguide library, which causes some initialization code to be executed multiple times inside the same process, which shouldn't happen. If one sets KMP_DUPLICATE_LIB_OK=TRUE, the tests actually work with Intel MKL 9.0, but no matter what you do with Intel MKL 9.1, i.e.,

- link against libguide40 or
- link against libguide and don't set KMP_... or
- link against libguide and set KMP_...

the following tests always segfault: numpy.core.tests.test_defmatrix.test_casting.check_basic numpy.core.tests.test_numeric.test_dot.check_matmat Cheers, Albert On Tue, 12 Jun 2007, Albert Strasheim wrote: > I've set up a 32-bit Windows XP guest inside VMWare Server 1.0.3 on a 64-bit > Linux machine and two of the NumPy tests are segfaulting for some strange > reason. They are: > > numpy.core.tests.test_defmatrix.test_casting.check_basic > numpy.core.tests.test_numeric.test_dot.check_matmat > > Do these pass for you? I'm inclined to blame VMWare at this point... From alex.liberzon at gmail.com Tue Jun 12 03:48:10 2007 From: alex.liberzon at gmail.com (Alex) Date: Tue, 12 Jun 2007 07:48:10 -0000 Subject: [Numpy-discussion] VMWare Virtual Appliance of Ubuntu with numpy, scipy, matplotlib, and ipython available In-Reply-To: References: <466AFB00.9010404@astraw.com> <1181554490.244821.88250@p77g2000hsh.googlegroups.com> <466D2757.1070102@gmx.net> Message-ID: <1181634490.742213.99810@z28g2000prd.googlegroups.com> The instructions here are clear and easy to follow: http://www.spywareinfo.com/articles/vmware/basharing.php Hence, I was able to install the ftp server on Ubuntu and exchange my files in real-time in Windows. I use vsftpd on Ubuntu and Wincp on Windows.
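(For scripted transfers, the same setup works from Python's standard ftplib -- a minimal sketch; the VM address below is made up, and the login just reuses the appliance's documented ubuntu/abc123 account:)

from ftplib import FTP

ftp = FTP("192.168.142.128")      # hypothetical NAT address of the VM
ftp.login("ubuntu", "abc123")     # the appliance's default account
f = open("results.csv", "rb")
ftp.storbinary("STOR results.csv", f)   # upload the file into the VM
f.close()
ftp.quit()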
best Alex On Jun 11, 7:18 pm, "Bill Baxter" wrote: > The SMB *client* is installed by default in ubuntu. > You have to add the smb server separately using apt-get or the > synaptic package manager. > sudo apt-get install samba > After you install the smb server, go to System->Administration->Shared > Folders and add a shared folder. > Then from a console run > sudo smbpasswd -a <username> > and after that restart samba > sudo /etc/init.d/samba restart > > At least that's what worked for me. > I find it a lot easier to be able to do the sharing/copying from the > Windows side using Explorer than the ubuntu side using the Gnome file > thing. But YMMV. It's nice to have both pathways set up. > > On 6/12/07, Ryan Krauss wrote: > > > I was able to share some folders in Windows and then browse to those > > shared folders from the VMware guest Ubuntu OS. I think there is some > > default samba stuff already running looking for shared folders. I > > think I had to set something in the VMware player to say that I > > trusted the guest OS to allow it to modify my files. It was pretty > > straightforward (but I am not sitting in front of that computer right > > now and it isn't booted in Windows right now either, so I can't check > > the exact details). > > > Ryan > > > On 6/11/07, Sven Schreiber wrote: > > > Alex schrieb: > > > > I run VMWare Player and it works fine - easy to set up and easy to > > > > set up a Matlab replacement. I only wonder how to transfer files > > > > between the Windows XP and the Ubuntu VM? > > > > Apart from using usb sticks (AFAIK VMware still has issues with usb 2.0 > > > though) I think the experts usually call a samba or ftp server on the > > > linux side the cleanest solution. The ultimate thing in terms of user > > > friendliness would be if the numpy virtual appliance already had that > > > pre-configured! > > > > And now let the experts step forward to announce better and easier > > > solutions... > > > > -sven > > > _______________________________________________ > > > Numpy-discussion mailing list > > > Numpy-discuss... at scipy.org > > > http://projects.scipy.org/mailman/listinfo/numpy-discussion > > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discuss... at scipy.org > > http://projects.scipy.org/mailman/listinfo/numpy-discussion > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discuss... at scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion From jdh2358 at gmail.com Tue Jun 12 12:41:49 2007 From: jdh2358 at gmail.com (John Hunter) Date: Tue, 12 Jun 2007 11:41:49 -0500 Subject: [Numpy-discussion] masked arrays and record arrays Message-ID: <88e473830706120941s18338158lf49e8be1c3585fdc@mail.gmail.com> Do record arrays support masks? JDH From robert.kern at gmail.com Tue Jun 12 12:47:45 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 12 Jun 2007 11:47:45 -0500 Subject: [Numpy-discussion] masked arrays and record arrays In-Reply-To: <88e473830706120941s18338158lf49e8be1c3585fdc@mail.gmail.com> References: <88e473830706120941s18338158lf49e8be1c3585fdc@mail.gmail.com> Message-ID: <466ECE31.3040906@gmail.com> John Hunter wrote: > Do record arrays support masks? I believe so, but not individual masks for each component in the record. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From jdh2358 at gmail.com Tue Jun 12 12:56:39 2007 From: jdh2358 at gmail.com (John Hunter) Date: Tue, 12 Jun 2007 11:56:39 -0500 Subject: [Numpy-discussion] masked arrays and record arrays In-Reply-To: <466ECE31.3040906@gmail.com> References: <88e473830706120941s18338158lf49e8be1c3585fdc@mail.gmail.com> <466ECE31.3040906@gmail.com> Message-ID: <88e473830706120956v2909dd67y8707b788bba26c1c@mail.gmail.com> On 6/12/07, Robert Kern wrote: > John Hunter wrote: > > Do record arrays support masks? > > I believe so, but not individual masks for each component in the record. I see, too bad. I am working on the matplotlib.mlab.csv2rec function and need to handle missing values properly. For floats, I can use numpy.nan, but for other types, e.g. int, it would be nice to be able to set a column-dependent mask. I did manage to create a mask on an entire row of a record array, and it is working OK, but there appears to be a bug in repr: In [191]: from matplotlib.mlab import csv2rec In [192]: import numpy In [193]: import numpy.core.ma as ma In [194]: r = csv2rec('data/msft.csv') In [195]: mask = numpy.zeros(len(r), numpy.bool_) In [196]: mask[0] = 1 In [197]: m = ma.masked_where(mask, r) In [198]: m[0] Out[198]: array(data = 999999, mask = True, fill_value=999999) In [199]: m[1] Out[199]: (datetime.datetime(2003, 9, 18, 0, 0), 28.489999999999998, 29.510000000000002, 28.420000000000002, 29.5, 67268096, 29.34) In [200]: m Out[200]: ------------------------------------------------------------ Traceback (most recent call last): File "", line 1, in ? File "/home/titan/johnh/dev/lib/python2.4/site-packages/IPython/Prompts.py", line 517, in __call__ manipulated_val = self.display(arg) File "/home/titan/johnh/dev/lib/python2.4/site-packages/IPython/Prompts.py", line 543, in _display return self.shell.hooks.result_display(arg) File "/home/titan/johnh/dev/lib/python2.4/site-packages/IPython/hooks.py", line 134, in __call__ ret = cmd(*args, **kw) File "/home/titan/johnh/dev/lib/python2.4/site-packages/IPython/hooks.py", line 162, in result_display out = pformat(arg) File "/opt/app/g++lib6/python-2.4/lib/python2.4/pprint.py", line 110, in pformat self._format(object, sio, 0, 0, {}, 0) File "/opt/app/g++lib6/python-2.4/lib/python2.4/pprint.py", line 128, in _format rep = self._repr(object, context, level - 1) File "/opt/app/g++lib6/python-2.4/lib/python2.4/pprint.py", line 194, in _repr self._depth, level) File "/opt/app/g++lib6/python-2.4/lib/python2.4/pprint.py", line 206, in format return _safe_repr(object, context, maxlevels, level) File "/opt/app/g++lib6/python-2.4/lib/python2.4/pprint.py", line 291, in _safe_repr rep = repr(object) File "/opt/tradelink/research/site-packages/numpy/core/ma.py", line 760, in __repr__ return with_mask % { File "/opt/tradelink/research/site-packages/numpy/core/ma.py", line 1233, in filled result[m] = value ValueError: tried to set void-array with object members using buffer. From david.trem at gmail.com Tue Jun 12 13:17:27 2007 From: david.trem at gmail.com (David Tremouilles) Date: Tue, 12 Jun 2007 19:17:27 +0200 Subject: [Numpy-discussion] f2py problem under MacOsX: "g95: unrecognized option '-shared'" Message-ID: <129e1cd10706121017s44fe7a19p985a436b8440a05f@mail.gmail.com> Hello, I'm running into trouble with f2py (numpy 1.0.3 on my intel MacOsX). f2py, or actually g95, is complaining that the -shared option is not recognized: "g95: unrecognized option '-shared'" Which is actually true on an OsX platform.
Something like -dynamic or so should probably be used instead (but I'm not skilled enough to troubleshoot this myself :-( )... Does somebody know how to solve the problem? David -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Tue Jun 12 14:24:37 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 12 Jun 2007 13:24:37 -0500 Subject: [Numpy-discussion] f2py problem under MacOsX: "g95: unrecognized option '-shared'" In-Reply-To: <129e1cd10706121017s44fe7a19p985a436b8440a05f@mail.gmail.com> References: <129e1cd10706121017s44fe7a19p985a436b8440a05f@mail.gmail.com> Message-ID: <466EE4E5.4040200@gmail.com> David Tremouilles wrote: > Hello, > > I'm running into trouble with f2py (numpy 1.0.3 on my intel MacOsX). > f2py, or actually g95, is complaining that the -shared option is not recognized: > "g95: unrecognized option '-shared'" > Which is actually true on an OsX platform. Something like -dynamic or so > should probably be used instead (but I'm not skilled enough to > troubleshoot this myself :-( )... > > Does somebody know how to solve the problem? Exactly which FORTRAN compiler are you using and what --fcompiler setting did you use? We've made the appropriate settings for gfortran (--fcompiler=gnu95), but we haven't done anything with g95 (--fcompiler=g95), which is what I assume you are using here. I think you may be the first person to try to use g95 on OS X with f2py; at least, the first to tell us about it. Consequently, we'll need your help in order to figure out what to do. Presumably, the necessary settings should be similar to those for gfortran. Please take a look at numpy/distutils/fcompiler/gnu.py:GnuFCompiler.get_flags_linker_so(). Most likely, that method can be simply copied over to numpy/distutils/fcompiler/g95.py . Let us know if that works for you. Thanks. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cookedm at physics.mcmaster.ca Tue Jun 12 16:12:56 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 12 Jun 2007 16:12:56 -0400 Subject: [Numpy-discussion] Overview of extra build options in svn's numpy.distutils Message-ID: <47DB85E4-0956-4586-ABC9-79588FB58E7D@physics.mcmaster.ca> Hi, Looks like I broke more stuff than I intended when I merged my distutils-revamp branch. Not that I didn't expect something to break, this stuff is fragile! The main purpose of the merge was to allow the user to configure more stuff regarding how things are compiled with Fortran. For instance, here's my ~/.pydistutils.cfg

[config_fc]
fcompiler=gnu95
f77exec=gfortran-mp-4.2
f90exec=gfortran-mp-4.2
#opt = -g -Wall -O2
f77flags=-g -Wall -O

(I use gfortran 4.2 on my MacBook, installed using MacPorts.) Other options to set are listed in numpy/distutils/fcompiler/__init__.py, in the FCompiler class.
distutils config_fc key [environment variable]:

flags for the config_fc section:
compiler - Fortran compiler to use (the numpy.distutils name, like gnu95 or intel)
noopt - don't compile with optimisations
noarch - don't compile with host architecture optimisations
debug - compile with debug optimisations
verbose - spew more stuff to the console when doing distutils stuff

executables:
f77exec [F77] - executable for Fortran 77
f90exec [F90] - executable for Fortran 90
ldshared [LDSHARED] - executable for shared libraries for Fortran
ld [LD] - executable for linker for Fortran
ar [AR] - library archive maker (for .a files)
ranlib [RANLIB] - some things need ranlib run over the libraries.

flags:
f77flags [F77FLAGS] - compiler flags for Fortran 77 compiler
f90flags [F90FLAGS] - compiler flags for Fortran 90 compiler
freeflags [FREEFLAGS] - compiler flags for free-format Fortran 90
opt [FOPT] - optimisation flags for all Fortran compilers (used if noopt is false)
arch [FARCH] - architecture-specific flags for Fortran (used if noarch is false)
fdebug [FDEBUG] - debug-specific flags for Fortran (used if debug is true)
fflags [FFLAGS] - extra compiler flags
ldflags [LDFLAGS] - extra linker flags
arflags [ARFLAGS] - extra library archiver flags

There's also more central logic for finding executables, which should be more flexible, and takes care of, for instance, using the specified F77 compiler for the linker if the linker isn't specified, or the F90 compiler if either isn't, etc. Some of this type of stuff should be done for the C compiler, but isn't, as that would be messier with regards to hooking into Python's distutils. Personally, I think Python's distutils is a poorly laid-out framework, that is in need of serious refactoring. However, that's a lot of work, and I'm not going to do it right away... -- |>|\/|< /------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From david.trem at gmail.com Tue Jun 12 17:24:28 2007 From: david.trem at gmail.com (David Tremouilles) Date: Tue, 12 Jun 2007 23:24:28 +0200 Subject: [Numpy-discussion] f2py problem under MacOsX: "g95: unrecognized option '-shared'" In-Reply-To: <466EE4E5.4040200@gmail.com> References: <129e1cd10706121017s44fe7a19p985a436b8440a05f@mail.gmail.com> <466EE4E5.4040200@gmail.com> Message-ID: <129e1cd10706121424j27df5f03j483a109a56a63b8c@mail.gmail.com> ok, I was using G95 (GCC 4.0.3 (g95 0.90!)) ... using f2py -fcompiler=g95 ... But I realized gfortran was available on my computer but was not recognized by numpy because the executable was "gfortran-mp-4.2". Creating a "gfortran" symbolic link to gfortran-mp-4.2 did the trick and now I can successfully use "f2py -fcompiler=gnu95 ..." So, somehow, my problem is solved... but f2py still did not work with g95 due to the -shared directive, so I dug in as you suggested: I was not successful just copying the "get_flags_linker_so()" function, but in g95.py I have successfully modified:

executables = {
    'version_cmd'  : ["g95", "--version"],
    'compiler_f77' : ["g95", "-ffixed-form"],
    'compiler_fix' : ["g95", "-ffixed-form"],
    'compiler_f90' : ["g95"],
    'linker_so'    : ["g95", '-undefined', 'dynamic_lookup', '-bundle'],  # was ["g95", "-shared"] -- here is the mod
    'archiver'     : ["ar", "-cr"],
    'ranlib'       : ["ranlib"]
}

to make g95 work on my intel macosx computer. Thanks for your help.
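(A quick way to confirm numpy.distutils picked up the change -- a sketch that assumes the class in g95.py is still called G95FCompiler, as it is in my checkout:)

from numpy.distutils.fcompiler.g95 import G95FCompiler

# should print the Darwin-style link command, not the old "-shared" one
print G95FCompiler.executables['linker_so']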
David 2007/6/12, Robert Kern : > > David Tremouilles wrote: > > Hello, > > > > I'm running into trouble with f2py (numpy 1.0.3 on my intel MacOsX). > > f2py, or actually g95, is complaining that the -shared option is not > recognized: > > "g95: unrecognized option '-shared'" > > Which is actually true on an OsX platform. Something like > > -dynamic or so should probably be used instead (but I'm not skilled enough to > > troubleshoot this myself :-( )... > > > > Does somebody know how to solve the problem? > > Exactly which FORTRAN compiler are you using and what --fcompiler setting > did > you use? We've made the appropriate settings for gfortran > (--fcompiler=gnu95), > but we haven't done anything with g95 (--fcompiler=g95), which is what I > assume > you are using here. > > I think you may be the first person to try to use g95 on OS X with f2py; > at > least, the first to tell us about it. Consequently, we'll need your help > in > order to figure out what to do. Presumably, the necessary settings should > be > similar to those for gfortran. Please take a look at > numpy/distutils/fcompiler/gnu.py:GnuFCompiler.get_flags_linker_so(). Most > likely, that method can be simply copied over to > numpy/distutils/fcompiler/g95.py . > > Let us know if that works for you. Thanks. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma > that is made terrible by our own mad attempt to interpret it as though it > had > an underlying truth." > -- Umberto Eco > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgmdevlist at gmail.com Wed Jun 13 10:04:11 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 13 Jun 2007 10:04:11 -0400 Subject: [Numpy-discussion] masked arrays and record arrays In-Reply-To: <88e473830706120956v2909dd67y8707b788bba26c1c@mail.gmail.com> References: <88e473830706120941s18338158lf49e8be1c3585fdc@mail.gmail.com> <466ECE31.3040906@gmail.com> <88e473830706120956v2909dd67y8707b788bba26c1c@mail.gmail.com> Message-ID: <200706131004.11301.pgmdevlist@gmail.com> On Tuesday 12 June 2007 12:56:39 John Hunter wrote: > On 6/12/07, Robert Kern wrote: > > John Hunter wrote: > > > Do record arrays support masks? John, Have you tried mrecords, in the alternative maskedarray package available on the scipy SVN? It should support masked fields (as opposed to masked records in numpy.core.ma). If not, would you mind giving it a test and letting me know your suggestions? Thanks a lot in advance for any inputs. P. From jdh2358 at gmail.com Wed Jun 13 13:53:38 2007 From: jdh2358 at gmail.com (John Hunter) Date: Wed, 13 Jun 2007 12:53:38 -0500 Subject: [Numpy-discussion] attribute names on record arrays Message-ID: <88e473830706131053s1cd1a6cdo6c81650616a7e735@mail.gmail.com> I find myself using record arrays more and more, and a feature missing is the ability to do tab completion on attribute names in ipython, presumably because you are using a dict under the hood and __getattr__ to resolve o.key where o is a record array and key is a field name. How hard would it be to populate __dict__ with the attribute names so we could tab complete on them?
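(To make concrete what I'm after, here's a throwaway wrapper that fakes the behavior -- just an illustration, not a proposed patch:)

import numpy

class RecAttrs(object):
    """Expose a record array's fields through __dict__,
    where ipython's attribute completer can see them."""
    def __init__(self, r):
        self._r = r
        for name in r.dtype.names:
            # each entry is a field view, so writes through it update r
            self.__dict__[name] = r[name]

r = numpy.rec.fromrecords([(1, 2.0), (3, 4.0)], names='a,b')
ra = RecAttrs(r)
print ra.a, ra.b   # and ra.<TAB> in ipython now completes a and b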
Thanks, JDH From oliphant at ee.byu.edu Wed Jun 13 14:36:08 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 13 Jun 2007 12:36:08 -0600 Subject: [Numpy-discussion] attribute names on record arrays In-Reply-To: <88e473830706131053s1cd1a6cdo6c81650616a7e735@mail.gmail.com> References: <88e473830706131053s1cd1a6cdo6c81650616a7e735@mail.gmail.com> Message-ID: <46703918.3040202@ee.byu.edu> John Hunter wrote: >I find myself using record arrays more and more, and a feature missing >is the ability to do tab completion on attribute names in ipython, >presumably because you are using a dict under the hood and __getattr__ >to resolve > >o.key > >where o is a record array and key is a field name. > >How hard would it be to populate __dict__ with the attribute names so >we could tab complete on them? > > Not hard, in fact somebody suggested a patch that does exactly that. The only question is what impact that might have on other things. For example, I think we would have to make sure what the proper order would be for fields that conflict with object attributes (I'd have to look to remember what the current order is). -Travis From efiring at hawaii.edu Wed Jun 13 15:14:23 2007 From: efiring at hawaii.edu (Eric Firing) Date: Wed, 13 Jun 2007 09:14:23 -1000 Subject: [Numpy-discussion] masked arrays and record arrays In-Reply-To: <200706131004.11301.pgmdevlist@gmail.com> References: <88e473830706120941s18338158lf49e8be1c3585fdc@mail.gmail.com> <466ECE31.3040906@gmail.com> <88e473830706120956v2909dd67y8707b788bba26c1c@mail.gmail.com> <200706131004.11301.pgmdevlist@gmail.com> Message-ID: <4670420F.30108@hawaii.edu> I have made changes in matplotlib svn to facilitate experimentation with the maskedarray module; I hope this will speed up the process of testing it and incorporating it into numpy as a replacement for numpy.core.ma. mpl scripts now accept the switches --maskedarray and --ma to force the use of the corresponding modules when loaded via matplotlib.numerix.ma or matplotlib.numerix.npyma. The latter is a new module intended for internal use in matplotlib as we switch to exclusive use of numpy. It is needed so that people using numerix in their own code can still get whatever numeric package and associated masked array package they want, while internally we can use numpy and either numpy.ma or maskedarray. There is also a new rcParams['maskedarray'] boolean entry (default is False) for selection of the masked array module. It is commented out of matplotlibrc by default, and most mpl users can ignore it. For switching to numpy inside of mpl, the ma import statement is: import matplotlib.numerix.npyma as ma I have not yet actually made this change anywhere; it needs to be made module-by-module as part of the switch to importing numpy directly instead of via numerix.
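(For anyone who wants to experiment right away, the rc entry mentioned above is the easiest switch; the line to uncomment and set in your matplotlibrc would be just the following -- shown here as a sketch, check your own matplotlibrc for the commented-out default:)

maskedarray : True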
> _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion From rex at nosyntax.com Wed Jun 13 17:21:16 2007 From: rex at nosyntax.com (rex) Date: Wed, 13 Jun 2007 14:21:16 -0700 Subject: [Numpy-discussion] Incompatability of svn 3868 distutils with v10.0 Intel compilers and MKL9.1 Message-ID: <20070613212116.GA6404@x2.nosyntax.com> I recently built NumPy & SciPy from svn sources using Intel's icc 10.0, ifort 10.0, and mkl 9.1 under openSUSE 10.2. In the process, I discovered and worked around four bugs (they are just quick hacks, not proper fixes; for example, some of them break compatibilty with earlier versions of Intel compilers and MKL) in NumPy svn 3868 distutils. Bug#1: distutils/system_info.py contains the lines: else: lapack_libs = self.get_libs('lapack_libs',['mkl_lapack32','mkl_lapack64']) This will fail when MKL 9.1 is used because Intel changed the names of both 'mkl_lapack32' and 'mkl_lapack64' to 'mkl_lapack'. My work-around was to change the line to: lapack_libs = self.get_libs('lapack_libs',['mkl_lapack']) Bug#2: distutils/fcompiler/intel.py has several lines like this: 'version_cmd' : ['', None], A work-around posted by George Nurser is to change all such lines to: (may break on Visual compilers) 'version_cmd' : ['', '-V'], Bug#3: distutils/ccompiler.py has the line: compiler_class['intel'] = ('intelccompiler','IntelCCompiler', "Intel C Compiler for 32-bit applications") Unfortunately, in 10.0 Intel changed the result of icc -V to: Intel(R) C Compiler for applications running on IA-32, Version 10.0 Build 20070426 Package ID: l_cc_p_10.0.023 Copyright (C) 1985-2007 Intel Corporation. All rights reserved. FOR NON-COMMERCIAL USE ONLY My work-around was to change the above to: compiler_class['intel'] = ('intelccompiler','IntelCCompiler', "Intel(R) C Compiler for applications running on IA-32") Bug#4: distutils/fcompiler/intel.py has the same id string problem that ccompiler.py had. ifort -V returns: Intel(R) Fortran Compiler for applications running on IA-32, Version 10.0 Build 20070426 Package ID: l_fc_p_10.0.023 I changed version_match = intel_version_match('32-bit') to version_match = intel_version_match('IA-32') #5 This is not a bug, but it's important to change distutils/intelccompiler.py from cc_exe = 'icc' to match the flags for your processor. For my Core 2 Duo, the line below works: cc_exe = 'icc -g -fomit-frame-pointer -xT -fast' After these changes, NumPy and SciPy built successfully, but scipy.test() returns a number of errors. The error reports are on the SciPy-dev list at the end of the thread: "Compiling scipy with Intel ifort & MKL" Perhaps this will help someone... -rex From matthieu.brucher at gmail.com Thu Jun 14 03:17:49 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 14 Jun 2007 09:17:49 +0200 Subject: [Numpy-discussion] Incompatability of svn 3868 distutils with v10.0 Intel compilers and MKL9.1 In-Reply-To: <20070613212116.GA6404@x2.nosyntax.com> References: <20070613212116.GA6404@x2.nosyntax.com> Message-ID: > > cc_exe = 'icc -g -fomit-frame-pointer -xT -fast' > Just some comments on that : - in release mode, you should not use '-g', it slows down the execution of your program - -fast uses -xT for the moment ;) - -fomit-pointer is the default as soon as there is no -O0 or -g Matthieu -------------- next part -------------- An HTML attachment was scrubbed... 
From rex at nosyntax.com Thu Jun 14 03:54:36 2007 From: rex at nosyntax.com (rex) Date: Thu, 14 Jun 2007 00:54:36 -0700 Subject: [Numpy-discussion] Incompatability of svn 3868 distutils with v10.0 Intel compilers and MKL9.1 In-Reply-To: References: <20070613212116.GA6404@x2.nosyntax.com> Message-ID: <20070614075436.GB3328@x2.nosyntax.com> Matthieu Brucher [2007-06-14 00:39]: > cc_exe = 'icc -g -fomit-frame-pointer -xT -fast' > > > > Just some comments on that : > - in release mode, you should not use '-g', it slows down the execution of your I didn't look it up; it was carried over from an example. I agree. > program > - -fast uses -xT for the moment ;) So it does. I missed it. :( > - -fomit-frame-pointer is the default as soon as there is no -O0 or -g Didn't know this. I hope to put some detailed openSUSE 10.2 specific build instructions from source (both for gcc and Intel software) on the SciPy wiki, and your corrections are helpful. Thanks. -rex -- "Men occasionally stumble over the truth, but most of them pick themselves up and hurry off as if nothing ever happened." --Winston Churchill From matthieu.brucher at gmail.com Thu Jun 14 05:10:11 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 14 Jun 2007 11:10:11 +0200 Subject: [Numpy-discussion] Specifying compiler command line options for numpy.disutils.core Message-ID: Hi, I've been trying to use the Intel C Compiler for some extensions, and as a matter of fact, only numpy.distutils seems to support it... (hope that the next version of setuptools will...) Is it possible to change the compiler command line options in the commandline or in a .cfg file ? For the moment, I have only -shared, I'd like to add -xP for instance. This seems to be related to rex's last mail, but I did not find anywhere a way to specify these options in (numpy.)distutils or setuptools, even though it is available for every other non-Python "library builder". Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From will.woods at ynic.york.ac.uk Thu Jun 14 08:37:54 2007 From: will.woods at ynic.york.ac.uk (Will Woods) Date: Thu, 14 Jun 2007 13:37:54 +0100 Subject: [Numpy-discussion] randint for long type (permutations) Message-ID: <467136A2.9040707@ynic.york.ac.uk> I want to choose a subset of all possible permutations of a sequence of length N, with each element of the subset unique. This is then going to be scattered across multiple machines using mpi. Since there is a one-to-one mapping between the integers in the range 0 <= x < N! and the possible permutations, one solution would be to choose M < N! integers randomly, check for uniqueness, and then scatter only the integers so that individual nodes can construct the permutations. However the integers need to be of type long, and randint doesn't work for numbers which cannot be converted to int. Any suggestions? From bioinformed at gmail.com Thu Jun 14 08:55:40 2007 From: bioinformed at gmail.com (Kevin Jacobs ) Date: Thu, 14 Jun 2007 08:55:40 -0400 Subject: [Numpy-discussion] randint for long type (permutations) In-Reply-To: <467136A2.9040707@ynic.york.ac.uk> References: <467136A2.9040707@ynic.york.ac.uk> Message-ID: <2e1434c10706140555m19dd61e1q75d0238790bdbba@mail.gmail.com> Call randint until you get enough bits of entropy to form a long with the appropriate number of bits.
from random import randint  # assuming the stdlib randint, whose bounds are inclusive

def randwords(n):
    # build an n-word random long, 32 bits at a time
    result = 0L
    for i in range(n):
        result = (result << 32) | randint(0, (1 << 32) - 1)
    return result

-Kevin On 6/14/07, Will Woods wrote: > > > I want to choose a subset of all possible permutations of a sequence of > length N, with each element of the subset unique. This is then going to > be scattered across multiple machines using mpi. Since there is a > one-to-one mapping between the integers in the range 0 <= x < N! and the > possible permutations, one solution would be to choose M < N! integers > randomly, check for uniqueness, and then scatter only the integers so > that individual nodes can construct the permutations. However the > integers need to be of type long, and randint doesn't work for numbers > which cannot be converted to int. Any suggestions? > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jdh2358 at gmail.com Thu Jun 14 09:19:06 2007 From: jdh2358 at gmail.com (John Hunter) Date: Thu, 14 Jun 2007 08:19:06 -0500 Subject: [Numpy-discussion] masked arrays and record arrays In-Reply-To: <200706131004.11301.pgmdevlist@gmail.com> References: <88e473830706120941s18338158lf49e8be1c3585fdc@mail.gmail.com> <466ECE31.3040906@gmail.com> <88e473830706120956v2909dd67y8707b788bba26c1c@mail.gmail.com> <200706131004.11301.pgmdevlist@gmail.com> Message-ID: <88e473830706140619l20f779f1s1aab7e4ba68f8042@mail.gmail.com> On 6/13/07, Pierre GM wrote: > Have you tried mrecords, in the alternative maskedarray package available on > the scipy SVN ? It should support masked fields (as opposed to masked > records in numpy.core.ma). If not, would you mind giving a test and letting > me know your suggestions ? > Thanks a lot in advance for any inputs. I would be happy to try this out -- do you happen to have an example that shows how to set the masks on the individual fields? JDH From joel.schaerer at creatis.insa-lyon.fr Thu Jun 14 10:22:00 2007 From: joel.schaerer at creatis.insa-lyon.fr (=?ISO-8859-1?Q?Jo=EBl?= Schaerer) Date: Thu, 14 Jun 2007 15:22:00 +0100 Subject: [Numpy-discussion] numpy with python < 2.3? Message-ID: <1181830920.5745.76.camel@localhost.localdomain> Hi there, I'm trying to install numpy on a system I don't control, which has python 2.2.3. When I run python setup.py build, I get:

python setup.py build
Running from numpy source directory.
Traceback (most recent call last):
  File "setup.py", line 91, in ?
    setup_package()
  File "setup.py", line 61, in setup_package
    from numpy.distutils.core import setup
  File "numpy/distutils/__init__.py", line 5, in ?
    import ccompiler
  File "numpy/distutils/ccompiler.py", line 11, in ?
    from numpy.distutils import log
  File "numpy/distutils/log.py", line 4, in ?
    from distutils.log import *
ImportError: No module named log

I checked in the documentation and found out that the log module was first introduced in python 2.3. Does that mean numpy is incompatible with python < 2.3? Is there a workaround? joel PS: please CC me in the answer as I have not subscribed (yet!)
to the list From fullung at gmail.com Thu Jun 14 10:17:04 2007 From: fullung at gmail.com (Albert Strasheim) Date: Thu, 14 Jun 2007 16:17:04 +0200 Subject: [Numpy-discussion] Incompatability of svn 3868 distutils with v10.0 Intel compilers and MKL9.1 In-Reply-To: References: <20070613212116.GA6404@x2.nosyntax.com> Message-ID: <20070614141704.GA513@dogbert.sdsl.sun.ac.za> Hello all On Thu, 14 Jun 2007, Matthieu Brucher wrote: > > > > >cc_exe = 'icc -g -fomit-frame-pointer -xT -fast' > > Just some comments on that : > - in release mode, you should not use '-g', it slows down the execution of > your program Do you have a reference that explains this in more detail? I thought -g just added debug information without changing the generated code? Regards, Albert From cookedm at physics.mcmaster.ca Thu Jun 14 10:27:35 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu, 14 Jun 2007 10:27:35 -0400 Subject: [Numpy-discussion] Incompatability of svn 3868 distutils with v10.0 Intel compilers and MKL9.1 In-Reply-To: <20070614141704.GA513@dogbert.sdsl.sun.ac.za> References: <20070613212116.GA6404@x2.nosyntax.com> <20070614141704.GA513@dogbert.sdsl.sun.ac.za> Message-ID: <20070614142735.GA26626@arbutus.physics.mcmaster.ca> On Thu, Jun 14, 2007 at 04:17:04PM +0200, Albert Strasheim wrote: > Hello all > > On Thu, 14 Jun 2007, Matthieu Brucher wrote: > > > > > > >cc_exe = 'icc -g -fomit-frame-pointer -xT -fast' > > > > Just some comments on that : > > - in release mode, you should not use '-g', it slows down the execution of > > your program > > Do you have a reference that explains this in more detail? I thought -g > just added debug information without changing the generated code? I had a peek at the icc manual. For icc, -g by itself implies -O0 and -fno-omit-frame-pointer, so it will be slower. However, -g -O2 -fomit-frame-pointer shouldn't be any slower than without the -g. For gcc, -g does what you said. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From matthieu.brucher at gmail.com Thu Jun 14 10:39:56 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 14 Jun 2007 16:39:56 +0200 Subject: [Numpy-discussion] Incompatability of svn 3868 distutils with v10.0 Intel compilers and MKL9.1 In-Reply-To: <20070614141704.GA513@dogbert.sdsl.sun.ac.za> References: <20070613212116.GA6404@x2.nosyntax.com> <20070614141704.GA513@dogbert.sdsl.sun.ac.za> Message-ID: > > Do you have a reference that explains this in more detail? I thought -g > just added debug information without changing the generated code? > My understanding was that debug info is added - well, in release mode, it is not worth it, impossible to debug - and this added data can impact performance in jumps, loading times, ... Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu Jun 14 11:57:11 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 14 Jun 2007 10:57:11 -0500 Subject: [Numpy-discussion] randint for long type (permutations) In-Reply-To: <467136A2.9040707@ynic.york.ac.uk> References: <467136A2.9040707@ynic.york.ac.uk> Message-ID: <46716557.20100@gmail.com> Will Woods wrote: > I want to choose a subset of all possible permutations of a sequence of > length N, with each element of the subset unique. This is then going to > be scattered across multiple machines using mpi.
Since there is a > one-to-one mapping between the integers in the range 0 <= x < N! and the > possible permutations, one solution would be to choose M < N! integers > randomly, check for uniqueness, and then scatter only the integers so > that individual nodes can construct the permutations. However the > integers need to be of type long, and randint doesn't work for numbers > which cannot be converted to int. Any suggestions? Is it too slow or bandwidth-consuming to generate the permutations on the master node and then distribute the permuted arrays to the worker nodes? You can reduce that somewhat by permuting arange(N) as an index array and send that to each of the worker nodes. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Thu Jun 14 11:58:03 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 14 Jun 2007 10:58:03 -0500 Subject: [Numpy-discussion] numpy with python < 2.3? In-Reply-To: <1181830920.5745.76.camel@localhost.localdomain> References: <1181830920.5745.76.camel@localhost.localdomain> Message-ID: <4671658B.1060002@gmail.com> Joël Schaerer wrote: > Hi there, > > I'm trying to install numpy on a system I don't control, which has > python 2.2.3. I'm sorry, but numpy does require Python 2.3. There is no workaround. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From peridot.faceted at gmail.com Thu Jun 14 12:06:03 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 14 Jun 2007 12:06:03 -0400 Subject: [Numpy-discussion] randint for long type (permutations) In-Reply-To: <467136A2.9040707@ynic.york.ac.uk> References: <467136A2.9040707@ynic.york.ac.uk> Message-ID: On 14/06/07, Will Woods wrote: > I want to choose a subset of all possible permutations of a sequence of > length N, with each element of the subset unique. This is then going to > be scattered across multiple machines using mpi. Since there is a > one-to-one mapping between the integers in the range 0 <= x < N! and the > possible permutations, one solution would be to choose M < N! integers > randomly, check for uniqueness, and then scatter only the integers so > that individual nodes can construct the permutations. However the > integers need to be of type long, and randint doesn't work for numbers > which cannot be converted to int. Any suggestions? A single integer might not be the best representation of a permutation (I can't see just now how you encode it, actually). Why not represent it as a tuple of n integers a_i with a_i<=i? (to get a permutation from this, treat a_i as an instruction to swap element i with element a_i). This should be (I haven't proven it but I'm pretty sure) a bijective representation of permutations. Not very compact, though I suppose you could use an array of 8-bit integers for n<256. It will serve as a key in dictionaries, though, and converting from a permutation in some other representation (as returned by argsort?) to this shouldn't be too difficult. Converting these to longs is also not too difficult (sum(a_i[1:]*cumprod(i[1:]))) nor is the reverse operation. And of course generating the permutation randomly becomes easy.
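A rough sketch of the scheme Anne describes -- the function names below are invented for illustration and are not code posted to the list:

import random

def random_swap_code(n):
    # one integer a_i per position i, with 0 <= a_i <= i
    return [random.randint(0, i) for i in range(n)]

def apply_swap_code(code):
    # realize the code as a permutation: swap element i with element a_i
    s = range(len(code))
    for i in range(len(code)):
        a = code[i]
        s[i], s[a] = s[a], s[i]
    return s

def code_to_long(code):
    # pack into a single long: digit i has radix i+1 (a mixed-radix integer)
    result, fact = 0L, 1L
    for i in range(len(code)):
        result += code[i] * fact
        fact *= i + 1
    return result

def long_to_code(x, n):
    # inverse of code_to_long
    code = []
    for i in range(n):
        x, a = divmod(x, i + 1)
        code.append(a)
    return code

The packed long ranges over exactly 0 <= x < n!, so it doubles as the integer label Will asked about.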
Also, it's worth noting that if n is even moderately large, n! is such a staggering number that the probability you will ever generate the same permutation twice is less than the probability that your data will be modified undetectably in memory by cosmic rays. Anne From steve at shrogers.com Thu Jun 14 12:33:02 2007 From: steve at shrogers.com (Steven H. Rogers) Date: Thu, 14 Jun 2007 10:33:02 -0600 (MDT) Subject: [Numpy-discussion] numpy with python < 2.3? In-Reply-To: <1181830920.5745.76.camel@localhost.localdomain> References: <1181830920.5745.76.camel@localhost.localdomain> Message-ID: <50246.192.55.12.36.1181838782.squirrel@mail2.webfaction.com> On Thu, June 14, 2007 08:22, Joël Schaerer wrote: > > I'm trying to install numpy on a system I don't control, which has > python 2.2.3. When I run python setup.py build, I get: > Perhaps you can install Python in a local directory, or perhaps a USB drive? # Steve From joel.schaerer at creatis.insa-lyon.fr Thu Jun 14 13:44:00 2007 From: joel.schaerer at creatis.insa-lyon.fr (=?ISO-8859-1?Q?Jo=EBl?= Schaerer) Date: Thu, 14 Jun 2007 18:44:00 +0100 Subject: [Numpy-discussion] numpy with python < 2.3? In-Reply-To: <4671658B.1060002@gmail.com> References: <1181830920.5745.76.camel@localhost.localdomain> <4671658B.1060002@gmail.com> Message-ID: <1181843040.5745.97.camel@localhost.localdomain> Ok, thanks a lot for answering anyways. Maybe this should be documented somewhere? (I don't doubt that it already is, but I looked around for a moment and didn't find it) joel On Thu, 2007-06-14 at 10:58 -0500, Robert Kern wrote: > Joël Schaerer wrote: > > Hi there, > > > > I'm trying to install numpy on a system I don't control, which has > > python 2.2.3. > > I'm sorry, but numpy does require Python 2.3. There is no workaround. > From christian at marquardt.sc Thu Jun 14 12:49:43 2007 From: christian at marquardt.sc (Christian Marquardt) Date: Thu, 14 Jun 2007 18:49:43 +0200 (CEST) Subject: [Numpy-discussion] Specifying compiler command line options for numpy.disutils.core In-Reply-To: References: Message-ID: <20369.84.167.72.134.1181839783.squirrel@webmail.marquardt.sc> Hi, I think the default for the standard python distutils is to use the compiler and the compiler settings for the C compiler that were used to build Python itself. There might be ways to specify other compilers; but if you have a shared python library build with one compiler and modules build with another you might run into trouble if the two compilers use different system libraries which are not resolved by standard python build. I believe that numpy is similar in the sense that you can always build additional modules with the compilers that were used to build the numpy core; then, using two fortran based modules (say) will work well because both require the same shared system libraries of the compiler. Probably, the compiler options used to build numpy will also work for your additional modules (with respect to paths to linear algebra libraries and so on). Again, I think there could be ways to build with different compilers, but you do run the risk of incompatibilities with the shared libraries. Therefore, I have become used to building python with the C-compiler I'd like to use, even if that means a lot of work. Hope this helps, Chris. On Thu, June 14, 2007 11:10, Matthieu Brucher wrote: > Hi, > > I've been trying to use the Intel C Compiler for some extensions, and as a > matter of fact, only numpy.distutils seems to support it... (hope that the > next version of setuptools will...)
> Is it possible to change the compiler command line options in the > commandline or in a .cfg file ? For the moment, I have only -shared, I'd > like to add -xP for instance. > > This seems to be related to rex's last mail, but I did not find anywhere a > way to specify these options in (numpy.)distutils or setuptools, even > though > it is available for every other non-Python "library builder". > > Matthieu > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > From matthieu.brucher at gmail.com Thu Jun 14 12:59:57 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 14 Jun 2007 18:59:57 +0200 Subject: [Numpy-discussion] Specifying compiler command line options for numpy.disutils.core In-Reply-To: <20369.84.167.72.134.1181839783.squirrel@webmail.marquardt.sc> References: <20369.84.167.72.134.1181839783.squirrel@webmail.marquardt.sc> Message-ID: > > I think the default for the standard python distutils is to use the > compiler and the compiler settings for the C compiler that were used to > build Python itself. There might be ways to specify other compilers; but > if you have a shared python library build with one compiler and modules > build with another you might run into trouble if the two compilers use > different system libraries which are not resolved by standard python > build. Well, the Intel compiler uses the same libraries as gcc on Linux, and on Windows, I don't know, but it is possible to mix VS2003 and VS2005, which is forbidden by the distutils, so I find this too restricting although understandable. I believe that numpy is similar in the sense that you can always build > additional modules with the compilers that were used to build the numpy > core; then, using two fortran based modules (say) will work well because > both require the same shared system libraries of the compiler. Probably, > the compiler options used to build numpy will also work for your > additional modules (with respect to paths to linear algebra libraries and > so on). No, in this case, I want to build with icc and special compiler options. I tried to build the libraries by hand - and with CMake -; they work like a charm and are very, very fast compared to gcc :( Again, I think there could be ways to build with different compilers, but > you do run the risk of incompatibilities with the shared libraries. > Therefore, I have become used to building python with the C-compiler I'd > like to use, even if that means a lot of work. This would mean building every other module added - numpy, scipy, matplotlib, wxPython, ... -, doable, but I'd prefer not to do it, but if it is not possible, I would have to live with it... Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From joel.schaerer at creatis.insa-lyon.fr Thu Jun 14 14:33:49 2007 From: joel.schaerer at creatis.insa-lyon.fr (=?ISO-8859-1?Q?Jo=EBl?= Schaerer) Date: Thu, 14 Jun 2007 19:33:49 +0100 Subject: [Numpy-discussion] numpy with python < 2.3? In-Reply-To: <50246.192.55.12.36.1181838782.squirrel@mail2.webfaction.com> References: <1181830920.5745.76.camel@localhost.localdomain> <50246.192.55.12.36.1181838782.squirrel@mail2.webfaction.com> Message-ID: <1181846029.5745.107.camel@localhost.localdomain> On Thu, 2007-06-14 at 10:33 -0600, Steven H.
Rogers wrote: > On Thu, June 14, 2007 08:22, Joël Schaerer wrote: > > > > I'm trying to install numpy on a system I don't control, which has > > python 2.2.3. When I run python setup.py build, I get: > > > > Perhaps you can install Python in a local directory, or perhaps a USB drive? > > # Steve That's probably what I'll do if I don't find a better solution. joel From charlesr.harris at gmail.com Thu Jun 14 14:35:22 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 14 Jun 2007 12:35:22 -0600 Subject: [Numpy-discussion] randint for long type (permutations) In-Reply-To: <467136A2.9040707@ynic.york.ac.uk> References: <467136A2.9040707@ynic.york.ac.uk> Message-ID: On 6/14/07, Will Woods wrote: > > > I want to choose a subset of all possible permutations of a sequence of > length N, with each element of the subset unique. This is then going to > be scattered across multiple machines using mpi. Since there is a > one-to-one mapping between the integers in the range 0 <= x < N! and the > possible permutations, one solution would be to choose M < N! integers > randomly, check for uniqueness, and then scatter only the integers so > that individual nodes can construct the permutations. However the > integers need to be of type long, and randint doesn't work for numbers > which cannot be converted to int. Any suggestions? > ________ How big is N? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From jdh2358 at gmail.com Thu Jun 14 15:02:18 2007 From: jdh2358 at gmail.com (John Hunter) Date: Thu, 14 Jun 2007 14:02:18 -0500 Subject: [Numpy-discussion] problem setting O type for datetime field in record arrays Message-ID: <88e473830706141202v2b54b6cek5d160bd5d6df2ac3@mail.gmail.com> I can create a record array with datetime types using fromrecords if I don't specify a dtype and let numpy determine the dtype. But when I try and set the dtype at record array creation time, I get the error below

In [17]: import numpy
In [18]: import datetime
In [19]: dt = datetime.datetime
In [20]: dt1 = dt.now()
In [21]: dt2 = dt.now()
In [22]: numpy.core.rec.fromrecords([[dt1, 0.5], [dt2, 1.0]], names='date,height')
Out[22]:
recarray([(datetime.datetime(2007, 6, 14, 13, 59, 20, 753356), 0.5),
       (datetime.datetime(2007, 6, 14, 13, 59, 22, 946850), 1.0)],
      dtype=[('date', '|O4'), ('height', '<f8')])

In [23]: numpy.core.rec.fromrecords([[dt1, 0.5], [dt2, 1.0]], dtype=[('date', '|O4'), ('height', '<f8')])
Traceback (most recent call last):
  File "<ipython console>", line 1, in ?
  File "/home/titan/johnh/dev/lib/python2.4/site-packages/numpy/core/records.py", line 399, in fromrecords
    retval = sb.array(recList, dtype = descr)
ValueError: tried to set void-array with object members using buffer.

In [24]: numpy.__version__
Out[24]: '1.0.4.dev3868'
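A possible workaround -- untested against this numpy revision, and only a sketch of the idea rather than an answer from the thread: allocate the record array with the object dtype first and fill the columns afterwards, avoiding the buffer-based construction that raises above:

import datetime
import numpy

dt1 = datetime.datetime.now()
dt2 = datetime.datetime.now()

# allocate with an explicit object field, then fill column by column
r = numpy.empty(2, dtype=[('date', numpy.object_), ('height', numpy.float64)])
r['date'] = [dt1, dt2]
r['height'] = [0.5, 1.0]
r = r.view(numpy.recarray)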
From david at ar.media.kyoto-u.ac.jp Fri Jun 15 00:01:21 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 15 Jun 2007 13:01:21 +0900 Subject: [Numpy-discussion] Specifying compiler command line options for numpy.disutils.core In-Reply-To: References: <20369.84.167.72.134.1181839783.squirrel@webmail.marquardt.sc> Message-ID: <46720F11.1030307@ar.media.kyoto-u.ac.jp> Matthieu Brucher wrote: > > I think the default for the standard python distutils is to use the > compiler and the compiler settings for the C compiler that were > used to > build Python itself. There might be ways to specify other > compilers; but > if you have a shared python library build with one compiler and > modules > build with another you might run into trouble if the two compilers > use > different system libraries which are not resolved by standard python > build. > > > > Well, the Intel compiler uses the same libraries as gcc on Linux, > and on Windows, I don't know, but it is possible to mix VS2003 and > VS2005, which is forbidden by the distutils, so I find this too > restricting although understandable. It is possible to mix object code, but not runtime, which is the problem
VS2003 and VS2005 have different C runtimes (msvcrt7.1.dll > against msvcrt8.dll). The problem is (at least for me, who just go > through the pain for windows users :) ) that VS2003 is not available > anymore for free... Exactly. Well, there is an ongoing discussion on Python-dev on this specific point (VS2005 building). > > No, in this case, I want to build with icc and special compiler > > options. I tried by build by hand - and CMake - the libraries, it > > works like a charm and it is very very fast compared to gcc :( Which libraries are you talking about ? Also, beware that ICC uses by > default some flags which are potentially dangerous (I don't know if this > is true anymore, but ICC used to use the equivalent of --ffast-math of > gcc by default: > > http://david.monniaux.free.fr/dotclear/index.php/2006/03/17/4-l-art-de-calculer-le-minimum-de-deux-nombres > ). > For libraries like atlas, I don't think there will be a huge difference > between ICC and gcc; if you use the mkl, then you don't care :) My libraries on manifold learning ;) There is difference in performance because of the -ipo and -xP flags. I have to install the MKL, I just compiled Python from the svn trunk yesterday. > This would mean building every other modules added - numpy, scipy, > > matplotlib, wxPython, ... -, doable, but I'd prefer not to do it, but > > if it is not possible, I would have to live with it... > I think it is important to separate different issues: object code > compatibility, runtime compatibility, etc... Those are different issues. > First, mixing ICC compiled code and gcc code *has* to be possible (I > have never tried), otherwise, I don't see much use for it under linux. Exactly. They are binary compatible (C and C++), they use the same headers, ... it has to be possible. Well, it is possible with numpy.distutils, it only missed to link with stdc++... but no specific compiler options :( Then you have the problem of runtime services: I really doubt that ICC > runtime is not compatible with gcc, and more globally with the GNU > runtime (glibc, etc...); actually, ICC used to use the "standard" linux > runtime, and I would be surprised if that changed. As I said, _my_ problem is that I'd like specific compiler options. To say it simply: on linux at least, what should matter is whether the > runtime services are compatible (on windows, it looks like they are not: > official python is compiled with visual studio 2003, and you cannot use > VS 2005; note that mingw seems to work). Some people reported that it is possible. The only catch seems to be that everything allocated with a runtime should be destroyed by the same allocator. Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From will.woods at ynic.york.ac.uk Fri Jun 15 10:00:31 2007 From: will.woods at ynic.york.ac.uk (Will Woods) Date: Fri, 15 Jun 2007 15:00:31 +0100 Subject: [Numpy-discussion] randint for long type (permutations) In-Reply-To: References: <467136A2.9040707@ynic.york.ac.uk> Message-ID: <46729B7F.1030403@ynic.york.ac.uk> The range of N I need is from 5-100, which spans the highly likely to highly improbable for M around 1000-10000. The permutation can be derived from an integer using the algorithm here: http://en.wikipedia.org/wiki/Permutation I have found the solution to the random number generator in the random module (I had only looked in numpy.random previously!): getrandbits(k) Returns a python long int with k random bits. 
This method is supplied with the MersenneTwister generator and some other generators may also provide it as an optional part of the API. When available, getrandbits() enables randrange() to handle arbitrarily large ranges. New in version 2.4. Thanks for all the replies. Will Anne Archibald wrote: > On 14/06/07, Will Woods wrote: > >> I want to choose a subset of all possible permutations of a sequence of >> length N, with each element of the subset unique. This is then going to >> be scattered across multiple machines using mpi. Since there is a >> one-to-one mapping between the integers in the range 0 <= x < N! and the >> possible permutations, one solution would be to choose M < N! integers >> randomly, check for uniqueness, and then scatter only the integers so >> that individual nodes can construct the permutations. However the >> integers need to be of type long, and randint doesn't work for numbers >> which cannot be converted to int. Any suggestions? > > A single integer might not be the best representation of a permutation > (I can't see just now how you encode it, actually). Why not represent > it as a tuple of n integers a_i with a_i<=i? (to get a permutation > from this, treat a_i as an instruction to swap element i with element > a_i). This should be (I haven't proven it but I'm pretty sure) a > bijective representation of permutations. Not very compact, though I > suppose you could use an array of 8-bit integers for n<256. It will > serve as a key in dictionaries, though, and converting from a > permutation in some other representation (as returned by argsort?) to > this shouldn't be too difficult. Converting these to longs is also not > too difficult (sum(a_i[1:]*cumprod(i[1:])) nor is the reverse > operation. > > And of course generating the permutation randomly becomes easy. > > Also, it's worth noting that if n is even moderately large, n! is such > a staggering number that the probability you will ever generate the > same permutation twice is less than the probability that your data > will be modified undetectably in memory by cosmic rays. > > Anne > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion -- Will Woods York Neuroimaging Centre The Biocentre York Science Park Innovation Way Heslington York YO10 5DG http://www.ynic.york.ac.uk From christian at marquardt.sc Fri Jun 15 10:49:42 2007 From: christian at marquardt.sc (Christian Marquardt) Date: Fri, 15 Jun 2007 16:49:42 +0200 (CEST) Subject: [Numpy-discussion] Specifying compiler command line options for numpy.disutils.core In-Reply-To: <46720F11.1030307@ar.media.kyoto-u.ac.jp> References: <20369.84.167.72.134.1181839783.squirrel@webmail.marquardt.sc> <46720F11.1030307@ar.media.kyoto-u.ac.jp> Message-ID: <51077.193.17.11.23.1181918982.squirrel@webmail.marquardt.sc> On Fri, June 15, 2007 06:01, David Cournapeau wrote: > I think it is important to separate different issues: object code > compatibility, runtime compatibility, etc... Those are different issues. > First, mixing ICC compiled code and gcc code *has* to be possible (I > have never tried), otherwise, I don't see much use for it under linux. > Then you have the problem of runtime services: I really doubt that ICC > runtime is not compatible with gcc, and more globally with the GNU > runtime (glibc, etc...); actually, ICC used to use the "standard" linux > runtime, and I would be surprised if that changed. 
Yes, this is possible - icc does use the standard system libraries. But depending on the compiler options, icc will require additional libraries from its own set of libs. For example, with the -x[...] and -ax[...] options which exploit the floating point pipelines of the Intel cpus, it's using its very own libsvml (vector math lib or something) which replaces some of the math versions in the system lib. If the linker - runtime or static - doesn't know about these, linking will fail. Therefore, if an icc object with certain optimisation is linked with gcc without specifying the required optimised libraries, linking fails. I remember that this also happened for me when building an optimised version of numpy and trying to load it from a gcc-compiled and linked version of Python. Actually, if I remember correctly, this is even a problem for the icc itself; try to link a program from optimised objects with icc without giving the same -x[...] options to the linker... It might be possible that the shared libraries can be told where additional required shared libraries are located (--rpath?), but I was never brave enough to try that out... I simply rebuilt python with the additional benefit that everything in python gets faster. Or so one hopes... It should be straightforward to link gcc objects and shared libs with icc being the linker, though. Has anyone ever tried to build the core python and numpy with icc, but continue to use the standard gcc build extensions? Just a thought... maybe a bit over the top:-(( Chris. From tom.denniston at alum.dartmouth.org Fri Jun 15 12:30:12 2007 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Fri, 15 Jun 2007 11:30:12 -0500 Subject: [Numpy-discussion] attribute names on record arrays In-Reply-To: <46703918.3040202@ee.byu.edu> References: <88e473830706131053s1cd1a6cdo6c81650616a7e735@mail.gmail.com> <46703918.3040202@ee.byu.edu> Message-ID: One thing I've done in situations like this where you want names of dynamic fields to be available for tab completion but the object has other methods and instance variables that might conflict is to use a proxy object that just contains the fields. So for instance you have a property called F that returns an object whose attributes are the fields of the rec array, in this case. Then the api for a rec array with fields "field1" and "field2" simply becomes recArr.F.field1 or recArr.F.field2. In my opinion it partitions the namespaces nicely, avoiding the name collision problems entirely. Because it is a property, it puts off the cost of initialization till you use it, saving those who do not from incurring any cost. Finally, it works very nicely in ipython because when you type recArr.F. and then tab-complete, you get only the rec array fields, whereas if you type recArr. and then tab-complete, you get only the object attributes. There may be other issues I'm not considering or people may not like it, but just wanted to throw the idea out there if anyone thought it was useful. --tom
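A minimal sketch of the proxy idea Tom describes -- the class and attribute names below are invented for illustration, not code from the thread:

import numpy

class Fields(object):
    # expose each field of a record array as a plain attribute, so
    # tab completion in ipython sees only the field names
    def __init__(self, arr):
        for name in arr.dtype.names:
            setattr(self, name, arr[name])

class FRecArray(numpy.recarray):
    def _fields(self):
        return Fields(self)
    F = property(_fields)

r = numpy.rec.fromrecords([(1, 2.0), (3, 4.0)], names='field1,field2')
r = r.view(FRecArray)
# r.F.field1 and r.F.field2 now tab-complete without the ndarray methods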
On 6/13/07, Travis Oliphant wrote: > John Hunter wrote: > > >I find myself using record arrays more and more, and a feature missing > >is the ability to do tab completion on attribute names in ipython, > >presumably because you are using a dict under the hood and __getattr__ > >to resolve > > > >o.key > > > >where o is a record array and key is a field name. > > > >How hard would it be to populate __dict__ with the attribute names so > >we could tab complete on them? > > > > Not hard, in fact somebody suggested a patch that does exactly that. > > The only question is what impact that might have on other things. For > example, I think we would have to make sure what the proper order would be > for fields that conflict with object attributes (I'd have to look > to remember what the current order is). > > -Travis > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > From pgmdevlist at gmail.com Fri Jun 15 13:13:34 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Fri, 15 Jun 2007 13:13:34 -0400 Subject: [Numpy-discussion] masked arrays and record arrays In-Reply-To: <88e473830706140619l20f779f1s1aab7e4ba68f8042@mail.gmail.com> References: <88e473830706120941s18338158lf49e8be1c3585fdc@mail.gmail.com> <200706131004.11301.pgmdevlist@gmail.com> <88e473830706140619l20f779f1s1aab7e4ba68f8042@mail.gmail.com> Message-ID: <200706151313.35920.pgmdevlist@gmail.com> On Thursday 14 June 2007 09:19:06 John Hunter wrote: > On 6/13/07, Pierre GM wrote: > > Have you tried mrecords, in the alternative maskedarray package available > > on the scipy SVN ? > I would be happy to try this out -- do you happen to have an example > that shows how to set the masks on the individual fields? > > JDH John, Sorry for the delayed answer: I had to fix a couple of minor bugs here and there. The easiest would be something along those lines:

#------------------
import numpy as N
import maskedarray as MA
import maskedarray.mrecords as MR

x = [(1.,10.,'a'),(2.,20,'b'),(3.14,30,'c'),(5.55,40,'d')]
desc = [('ffloat', N.float_), ('fint', N.int_), ('fstr', 'S10')]
mr = MR.fromrecords(x,dtype=desc)

"Set the mask on a record"
mr[0] = MA.masked

"Set the mask on a field"
mr.ffloat[-1] = MA.masked

print mr
#------------------

Another example is provided in the mrecords.py file. Please give it a try, I'm looking forward to your feedback. Pierre PS: If you're working w/ datetime objects, you might also be interested in the TimeSeries package, also available on the SVN. The tmulti subpackage defines a MultiTimeSeries object, which is a mixture of TimeSeries and MaskedRecords. This is just a rough prototype, but that could be a start.
From robert.kern at gmail.com Fri Jun 15 13:52:22 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 15 Jun 2007 12:52:22 -0500 Subject: [Numpy-discussion] randint for long type (permutations) In-Reply-To: <46729B7F.1030403@ynic.york.ac.uk> References: <467136A2.9040707@ynic.york.ac.uk> <46729B7F.1030403@ynic.york.ac.uk> Message-ID: <4672D1D6.10203@gmail.com> Will Woods wrote: > > The range of N I need is from 5-100, which spans the highly likely to > highly improbable for M around 1000-10000. The permutation can be > derived from an integer using the algorithm here: > http://en.wikipedia.org/wiki/Permutation You really, really don't want to do it this way. 100! is a huge number that you *cannot* sample effectively. With the permutation algorithm given here, only the first fraction of the sequence will be shuffled at all.

In [29]: def perm(k, s):
   ....:     fact = 1
   ....:     for j in range(2, len(s)+1):
   ....:         fact *= j-1
   ....:         i = j - ((k//fact) % j) - 1
   ....:         tmp = s[i], s[j-1]
   ....:         s[j-1], s[i] = tmp
   ....:     return s
   ....:

In [37]: s0 = range(100)

In [38]: print s0
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99]

In [39]: print perm(10000000000000, s0[:])
[9, 12, 11, 4, 13, 14, 8, 7, 15, 5, 10, 0, 3, 1, 2, 6, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99]

Please, take my advice and shuffle the sequences on the master node using numpy.random.permutation() and distribute the sequences among the worker nodes. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Fri Jun 15 14:03:37 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 15 Jun 2007 13:03:37 -0500 Subject: [Numpy-discussion] randint for long type (permutations) In-Reply-To: <46729B7F.1030403@ynic.york.ac.uk> References: <467136A2.9040707@ynic.york.ac.uk> <46729B7F.1030403@ynic.york.ac.uk> Message-ID: <4672D479.6000509@gmail.com> Will Woods wrote: > > The range of N I need is from 5-100, which spans the highly likely to > highly improbable for M around 1000-10000. The permutation can be > derived from an integer using the algorithm here: > http://en.wikipedia.org/wiki/Permutation Actually, I take it back. It's not as bad as I thought it was. However, instead of cobbling together the bits, just use random.randrange(). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
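Pulling Robert's two suggestions together, a rough sketch -- the variable names and the sizes M and N are illustrative only, not from any posted code:

import random
import numpy

M, N = 1000, 10

# M distinct indices drawn from [0, N!); random.randrange handles longs
nfact = 1L
for k in range(2, N + 1):
    nfact *= k
indices = set()
while len(indices) < M:
    indices.add(random.randrange(nfact))

# or, following the simpler advice: build the permutations on the master
# node and scatter the rows (each a permuted arange(N)) to the workers
perms = numpy.array([numpy.random.permutation(N) for i in range(M)])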
From Chris.Barker at noaa.gov Fri Jun 15 14:32:44 2007 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Fri, 15 Jun 2007 11:32:44 -0700 Subject: [Numpy-discussion] Specifying compiler command line options for numpy.disutils.core In-Reply-To: <46720F11.1030307@ar.media.kyoto-u.ac.jp> References: <20369.84.167.72.134.1181839783.squirrel@webmail.marquardt.sc> <46720F11.1030307@ar.media.kyoto-u.ac.jp> Message-ID: <4672DB4C.3010507@noaa.gov> David Cournapeau wrote: > The problem is (at least for me, who just go > through the pain for windows users :) ) that VS2003 is not available > anymore for free... while MS isn't distributing it, there are a lot of copies floating around, and I don't think it's illegal to distribute them (anyone know different?) However, I found that it's a pain to set up distutils to use the free version -- at least with python2.5, using MinGW is much easier. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From lbolla at gmail.com Mon Jun 11 08:11:12 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Mon, 11 Jun 2007 14:11:12 +0200 Subject: [Numpy-discussion] f2py and type construct Message-ID: <80c99e790706110511mc456daes2fe664e195be2aaf@mail.gmail.com> hi all. I'm trying to compile an F90 source file with f2py, but it fails with the construct "type ... end type". here is an example:

--------------------
! file test19.f90
module basic
    implicit none
    save
    integer, parameter :: ciao = 17
end module basic

module basic2
    implicit none
    save
    type test_t
        integer :: x
    end type test_t
    type(test_t) :: ciao
end module basic2
----------------------

$>f2py -c test19.f90 -m test --fcompiler=intele --compiler=intel

(I'm compiling on an SGI Altix) and this is the error message:

----------------------
running build
running config_fc
running build_src
building extension "test" sources
f2py options: []
f2py:> /tmp/tmprBrnf7/src.linux-ia64-2.5/testmodule.c
creating /tmp/tmprBrnf7
creating /tmp/tmprBrnf7/src.linux-ia64-2.5
Reading fortran codes...
Reading file 'test19.f90' (format:free)
Post-processing...
Block: test
Block: basic
Block: basic2
Block: test_t
Post-processing (stage 2)...
Block: test
Block: unknown_interface
Block: basic
Block: basic2
Block: test_t
Building modules...
Building module "test"...
Constructing F90 module support for "basic"...
Variables: ciao
Constructing F90 module support for "basic2"...
Variables: ciao
getctype: No C-type found in "{'typespec': 'type', 'typename': 'test_t'}", assuming void.
Traceback (most recent call last):
  File "/xlv1/labsoi_devices/bollalo001/bin/f2py", line 26, in <module>
    main()
  File "/xlv1/labsoi_devices/bollalo001/lib/python2.5/site-packages/numpy/f2py/f2py2e.py", line 552, in main
    run_compile()
  File "/xlv1/labsoi_devices/bollalo001/lib/python2.5/site-packages/numpy/f2py/f2py2e.py", line 539, in run_compile
    setup(ext_modules = [ext])
  File "/xlv1/labsoi_devices/bollalo001/lib/python2.5/site-packages/numpy/distutils/core.py", line 174, in setup
    return old_setup(**new_attr)
  File "/xlv1/labsoi_devices/bollalo001/lib/python2.5/distutils/core.py", line 151, in setup
    dist.run_commands()
  File "/xlv1/labsoi_devices/bollalo001/lib/python2.5/distutils/dist.py", line 974, in run_commands
    self.run_command(cmd)
  File "/xlv1/labsoi_devices/bollalo001/lib/python2.5/distutils/dist.py", line 994, in run_command
    cmd_obj.run()
  File "/xlv1/labsoi_devices/bollalo001/lib/python2.5/distutils/command/build.py", line 112, in run
    self.run_command(cmd_name)
  File "/xlv1/labsoi_devices/bollalo001/lib/python2.5/distutils/cmd.py", line 333, in run_command
    self.distribution.run_command(command)
  File "/xlv1/labsoi_devices/bollalo001/lib/python2.5/distutils/dist.py", line 994, in run_command
    cmd_obj.run()
  File "/xlv1/labsoi_devices/bollalo001/lib/python2.5/site-packages/numpy/distutils/command/build_src.py", line 87, in run
    self.build_sources()
  File "/xlv1/labsoi_devices/bollalo001/lib/python2.5/site-packages/numpy/distutils/command/build_src.py", line 106, in build_sources
    self.build_extension_sources(ext)
  File "/xlv1/labsoi_devices/bollalo001/lib/python2.5/site-packages/numpy/distutils/command/build_src.py", line 218, in build_extension_sources
    sources = self.f2py_sources(sources, ext)
  File "/xlv1/labsoi_devices/bollalo001/lib/python2.5/site-packages/numpy/distutils/command/build_src.py", line 471, in f2py_sources
    ['-m',ext_name]+f_sources)
  File "/xlv1/labsoi_devices/bollalo001/lib/python2.5/site-packages/numpy/f2py/f2py2e.py", line 362, in run_main
    ret=buildmodules(postlist)
  File "/xlv1/labsoi_devices/bollalo001/lib/python2.5/site-packages/numpy/f2py/f2py2e.py", line 314, in buildmodules
    dict_append(ret[mnames[i]],rules.buildmodule(modules[i],um))
  File "/xlv1/labsoi_devices/bollalo001/lib/python2.5/site-packages/numpy/f2py/rules.py", line 1130, in buildmodule
    mr,wrap = f90mod_rules.buildhooks(m)
  File "/xlv1/labsoi_devices/bollalo001/lib/python2.5/site-packages/numpy/f2py/f90mod_rules.py", line 127, in buildhooks
    at = capi_maps.c2capi_map[ct]
KeyError: 'void'
Exit 1
------------------------------------

module basic gives no problems, but module basic2 yes, because of the type construct. what am I doing wrong? thank you very much, lorenzo. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sfnk at ix.netcom.com Mon Jun 11 19:50:22 2007 From: sfnk at ix.netcom.com (Kathy Frank) Date: Mon, 11 Jun 2007 18:50:22 -0500 Subject: [Numpy-discussion] numpy 1.0.3 import error Message-ID: <001c01c7ac83$4c90cdc0$51568304@homepc> I receive the following error when I try to import numpy:

$ python
Python 2.5.1 (r251:54863, Jun 10 2007, 14:46:50)
[GCC 3.4.6 [FreeBSD] 20060305] on freebsd6
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.5/site-packages/numpy/__init__.py", line 43, in <module>
    import linalg
  File "/usr/local/lib/python2.5/site-packages/numpy/linalg/__init__.py", line 4, in <module>
    from linalg import *
  File "/usr/local/lib/python2.5/site-packages/numpy/linalg/linalg.py", line 25, in <module>
    from numpy.linalg import lapack_lite
ImportError: /usr/local/lib/python2.5/site-packages/numpy/linalg/lapack_lite.so: mmap returned wrong address: wanted 0x8048000, got 0x28cc8000

I am running python 2.5.1 on FreeBSD 6.2. I compiled numpy with gcc 3.4.6 and with ATLAS 3.6.0 with no errors. Any help on this error would be appreciated. Steve -------------- next part -------------- An HTML attachment was scrubbed... URL: From xt25 at cornell.edu Tue Jun 12 11:58:28 2007 From: xt25 at cornell.edu (Xuemei Tang) Date: Tue, 12 Jun 2007 11:58:28 -0400 (EDT) Subject: [Numpy-discussion] question about numpy Message-ID: <38423.132.236.86.109.1181663908.squirrel@webmail.cornell.edu> Dear Sir/Madam, I meet a problem when I installed numpy. I installed numpy by the command "python setup.py install". Then I tested it by "python -c 'import numpy; numpy.test()'". But it doesn't work. There is an error message:

"Running from numpy source directory.
Traceback (most recent call last):
  File "<string>", line 1, in ?
AttributeError: 'module' object has no attribute 'test'"

Could you help me to figure out what's wrong with it? Thanks! Best wishes! Xuemei Tang From atossava at cc.helsinki.fi Tue Jun 12 06:11:50 2007 From: atossava at cc.helsinki.fi (Atro Tossavainen) Date: Tue, 12 Jun 2007 13:11:50 +0300 (EEST) Subject: [Numpy-discussion] Building NumPy 1.0.3 on SGI/IRIX system Message-ID: <200706121011.l5CABosq030300@ruuvi.it.helsinki.fi> (Am not a list member, please cc copies to me explicitly.) Regarding recent conversation between Charles R Harris and Mary Haley: 1) The problem with Numpy is that *something* causes -L/usr/lib to be included explicitly. This should never be done! The compiler ABI switches -32, -n32, -64 automatically cause the inclusion of the library directories for the correct ABI.
If you explicitly include one of /usr/lib{,32,64} with -L, the compiler ABI choice probably conflicts with it, and even if you don't explicitly specify an ABI for the compiler, /etc/compiler.defaults does it for you anyway. 2) Am willing to help with Numpy building/testing on IRIX with MIPS compilers, have considerable expertise building F/OSS software in general and under IRIX in particular. -- Atro Tossavainen (Mr.) / The Institute of Biotechnology at Systems Analyst, Techno-Amish & / the University of Helsinki, Finland, +358-9-19158939 UNIX Dinosaur / employs me, but my opinions are my own. < URL : http : / / www . helsinki . fi / %7E atossava / > NO FILE ATTACHMENTS From krauss1 at ameritech.net Wed Jun 13 08:56:47 2007 From: krauss1 at ameritech.net (Tom K.) Date: Wed, 13 Jun 2007 12:56:47 -0000 Subject: [Numpy-discussion] Is this an indexing bug? Message-ID: <1181739407.629903.108000@i38g2000prf.googlegroups.com>

>>> h = zeros((1, 4, 100))
>>> h[0,:,arange(14)].shape
(14, 4)

From cookedm at physics.mcmaster.ca Fri Jun 15 15:44:37 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 15 Jun 2007 15:44:37 -0400 Subject: [Numpy-discussion] question about numpy In-Reply-To: <38423.132.236.86.109.1181663908.squirrel@webmail.cornell.edu> References: <38423.132.236.86.109.1181663908.squirrel@webmail.cornell.edu> Message-ID: <20070615194437.GA32712@arbutus.physics.mcmaster.ca> On Tue, Jun 12, 2007 at 11:58:28AM -0400, Xuemei Tang wrote: > Dear Sir/Madam, > > I meet a problem when I installed numpy. I installed numpy by the command > "python setup.py install". Then I tested it by "python -c 'import numpy; > numpy.test()'". But it doesn't work. There is an error message:

> "Running from numpy source directory.
   ^ don't do that :)

Instead, change out of the source directory, and rerun. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From stefan at sun.ac.za Sat Jun 16 04:11:55 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sat, 16 Jun 2007 10:11:55 +0200 Subject: [Numpy-discussion] Buildbot for numpy Message-ID: <20070616081155.GC20362@mentat.za.net> Hi all, Short version ============= We now have a numpy buildbot running at http://buildbot.scipy.org Long version ============ Albert Strasheim and I set up a buildbot for numpy this week. For those of you unfamiliar with The Buildbot, it is """ ...a system to automate the compile/test cycle required by most software projects to validate code changes. By automatically rebuilding and testing the tree each time something has changed, build problems are pinpointed quickly, before other developers are inconvenienced by the failure. The guilty developer can be identified and harassed without human intervention. By running the builds on a variety of platforms, developers who do not have the facilities to test their changes everywhere before checkin will at least know shortly afterwards whether they have broken the build or not. Warning counts, lint checks, image size, compile time, and other build parameters can be tracked over time, are more visible, and are therefore easier to improve. The overall goal is to reduce tree breakage and provide a platform to run tests or code-quality checks that are too annoying or pedantic for any human to waste their time with.
Developers get immediate (and potentially public) feedback about their changes, encouraging them to be more careful about testing before checkin. """ While we are still working on automatic e-mail notifications, the system already provides valuable feedback -- take a look at the waterfall display: http://buildbot.scipy.org If your platform is not currently on the list, please consider volunteering a machine as a build slave. This machine will be required to run the buildbot client, and to build a new version of numpy whenever changes are made to the repository. (The machine does not have to be dedicated to this task, and can be your own workstation.) We'd like to thank Robert Kern, Jeff Strunk and Gert-Jan van Rooyen who helped us to get the ball rolling, as well as Neilen Marais for offering his workstation as a build slave. Regards St?fan From david at ar.media.kyoto-u.ac.jp Sat Jun 16 04:57:58 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 16 Jun 2007 17:57:58 +0900 Subject: [Numpy-discussion] Buildbot for numpy In-Reply-To: <20070616081155.GC20362@mentat.za.net> References: <20070616081155.GC20362@mentat.za.net> Message-ID: <4673A616.5070309@ar.media.kyoto-u.ac.jp> Stefan van der Walt wrote: > Hi all, > > Short version > ============= > > We now have a numpy buildbot running at > > http://buildbot.scipy.org > > Long version > ============ > > Albert Strasheim and I set up a buildbot for numpy this week. For > those of you unfamiliar with The Buildbot, it is > > """ > ...a system to automate the compile/test cycle required by most > software projects to validate code changes. By automatically > rebuilding and testing the tree each time something has changed, build > problems are pinpointed quickly, before other developers are > inconvenienced by the failure. The guilty developer can be identified > and harassed without human intervention. By running the builds on a > variety of platforms, developers who do not have the facilities to > test their changes everywhere before checkin will at least know > shortly afterwards whether they have broken the build or not. Warning > counts, lint checks, image size, compile time, and other build > parameters can be tracked over time, are more visible, and are > therefore easier to improve. > > The overall goal is to reduce tree breakage and provide a platform to > run tests or code-quality checks that are too annoying or pedantic for > any human to waste their time with. Developers get immediate (and > potentially public) feedback about their changes, encouraging them to > be more careful about testing before checkin. > """ > > While we are still working on automatic e-mail notifications, the > system already provides valuable feedback -- take a look at the > waterfall display: > > http://buildbot.scipy.org > > If your platform is not currently on the list, please consider > volunteering a machine as a build slave. This machine will be > required to run the buildbot client, and to build a new version of > numpy whenever changes are made to the repository. (The machine does > not have to be dedicated to this task, and can be your own > workstation.) > > We'd like to thank Robert Kern, Jeff Strunk and Gert-Jan van Rooyen > who helped us to get the ball rolling, as well as Neilen Marais for > offering his workstation as a build slave. 
> This is really great news :) Thanks for the hard work, David From david at ar.media.kyoto-u.ac.jp Sat Jun 16 05:59:34 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 16 Jun 2007 18:59:34 +0900 Subject: [Numpy-discussion] Specifying compiler command line options for numpy.disutils.core In-Reply-To: <51077.193.17.11.23.1181918982.squirrel@webmail.marquardt.sc> References: <20369.84.167.72.134.1181839783.squirrel@webmail.marquardt.sc> <46720F11.1030307@ar.media.kyoto-u.ac.jp> <51077.193.17.11.23.1181918982.squirrel@webmail.marquardt.sc> Message-ID: <4673B486.1060006@ar.media.kyoto-u.ac.jp> Christian Marquardt wrote: > > Yes, this is possible - icc does use the standard system libraries. But > depending on the compiler options, icc will require additional libraries > from its own set of libs. For example, with the -x[...] and -ax[...] > options which exploit the floating point pipelines of the Intel cpus, it's > using its very own libsvml (vector math lib or something) which replaces > some of the math versions in the system lib. If the linker - runtime or > static - doesn't know about these, linking will fail. > > Therefore, if an icc object with certain optimisation is linked with gcc > without specifying the required optimised libraries, linking fails. I > remember that this also happened for me when building an optimised version > of numpy and trying to load it from a gcc-compiled and linked version of > Python. Actually, if I remember correctly, this is even a problem for the > icc itself; try to link a program from optimised objects with icc without > giving the same -x[...] options to the linker... > > It might be possible that the shared libraries can be told where > additional required shared libraries are located (--rpath?), If I understand correctly, that would be --rpath-link. --rpath only helps locating libraries for runtime, whereas --rpath-link helps for locating libraries for linking. Values for --rpath-link may use --rpath values, though. David From svetosch at gmx.net Sat Jun 16 10:49:45 2007 From: svetosch at gmx.net (Sven Schreiber) Date: Sat, 16 Jun 2007 15:49:45 +0100 Subject: [Numpy-discussion] Is this an indexing bug? In-Reply-To: <1181739407.629903.108000@i38g2000prf.googlegroups.com> References: <1181739407.629903.108000@i38g2000prf.googlegroups.com> Message-ID: <4673F889.1020902@gmx.net> Tom K. schrieb:
>>>> h = zeros((1, 4, 100))
>>>> h[0,:,arange(14)].shape
> (14, 4)
>
After reading section 3.4.2.1 of the numpy book, I also still don't expect this result. So if it's not a bug, I'd be glad if some expert could explain why not. Thanks, Sven From cookedm at physics.mcmaster.ca Sat Jun 16 15:37:44 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Sat, 16 Jun 2007 15:37:44 -0400 Subject: [Numpy-discussion] Buildbot for numpy In-Reply-To: <20070616081155.GC20362@mentat.za.net> References: <20070616081155.GC20362@mentat.za.net> Message-ID: <20070616193744.GA5276@arbutus.physics.mcmaster.ca> On Sat, Jun 16, 2007 at 10:11:55AM +0200, Stefan van der Walt wrote: > Hi all, > > Short version > ============= > > We now have a numpy buildbot running at > > http://buildbot.scipy.org > > While we are still working on automatic e-mail notifications, the > system already provides valuable feedback -- take a look at the > waterfall display: > > http://buildbot.scipy.org > > If your platform is not currently on the list, please consider > volunteering a machine as a build slave.
This machine will be > required to run the buildbot client, and to build a new version of > numpy whenever changes are made to the repository. (The machine does > not have to be dedicated to this task, and can be your own > workstation.) Awesome. I've got a iBook (PPC G4) running OS X that can be used as a slave (it's just being a server right now). -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From amsd2013 at yahoo.com Sun Jun 17 00:06:04 2007 From: amsd2013 at yahoo.com (Ali Santacruz) Date: Sat, 16 Jun 2007 21:06:04 -0700 (PDT) Subject: [Numpy-discussion] Problems with Numeric Message-ID: <661106.46739.qm@web62303.mail.re1.yahoo.com> Hi dear list, I am designing an application that uses GDAL (which seems to bind to Numeric) to read an image and convert it to an array. The code runs without problem in the console, then I compile it (apparently without problems), but when I try to launch the application, it fails and the next error appears in the .log: Traceback (most recent call last): File "ViewGdal_0.0-1.py", line 10, in ? File "gdal\__init__.pyc", line 11, in ? File "gdal\gdalnumeric.pyc", line 85, in ? #from Numeric import * File "Numeric.pyc", line 93, in ? #from Precision import * File "Precision.pyc", line 26, in ? #_code_table = _fill_table(typecodes) File "Precision.pyc", line 23, in _fill_table #table[key] = _get_precisions(value) File "Precision.pyc", line 18, in _get_precisions TypeError: data type not understood This refers to the next in Precision.py in Numeric: typecodes = {'Character':'c', 'Integer':'1sil', 'UnsignedInteger':'bwu', 'Float':'fd', 'Complex':'FD'} def _get_precisions(typecodes): lst = [] for t in typecodes: lst.append( (zeros( (1,), t ).itemsize()*8, t) ) #this is line 18 return lst As far as I understand (I?ve been working with Python for just a couple of months), there is a problem with the typecodes. What can I do to solve this problem? I?ve not found a version of GDAL that binds to Numpy. I am using GDAL from hobutools (v. 1.75), Numeric v. 24.2, Python 2.4.3 on WinXP. Any help is _really_ appreciated. Regards, Ali S. __________________________________________________ Correo Yahoo! Espacio para todos tus mensajes, antivirus y antispam ?gratis! Reg?strate ya - http://correo.espanol.yahoo.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Sun Jun 17 09:34:12 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sun, 17 Jun 2007 15:34:12 +0200 Subject: [Numpy-discussion] question about numpy In-Reply-To: <20070615194437.GA32712@arbutus.physics.mcmaster.ca> References: <38423.132.236.86.109.1181663908.squirrel@webmail.cornell.edu> <20070615194437.GA32712@arbutus.physics.mcmaster.ca> Message-ID: <20070617133412.GP20362@mentat.za.net> On Fri, Jun 15, 2007 at 03:44:37PM -0400, David M. Cooke wrote: > > I meet a problem when I installed numpy. I installed numpy by the command > > "python setup.py install". Then I tested it by "python -c 'import numpy; > > numpy.test()'". But it doesn't work. There is an error message: > > "Running from numpy source directory. > > ^ don't do that :) > > Instead, change out of the source directory, and rerun. Is there any reason why we can't make that work? 
St?fan From charlesr.harris at gmail.com Sun Jun 17 11:25:01 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 17 Jun 2007 09:25:01 -0600 Subject: [Numpy-discussion] Problems with Numeric In-Reply-To: <661106.46739.qm@web62303.mail.re1.yahoo.com> References: <661106.46739.qm@web62303.mail.re1.yahoo.com> Message-ID: On 6/16/07, Ali Santacruz wrote: > > Hi dear list, > > I am designing an application that uses GDAL (which seems to bind to > Numeric) to read an image and convert it to an array. The code runs without > problem in the console, then I compile it (apparently without problems), but > when I try to launch the application, it fails and the next error appears in > the .log: > > > > Traceback (most recent call last): > > File "ViewGdal_0.0-1.py", line 10, in ? > > File "gdal\__init__.pyc", line 11, in ? > > File "gdal\gdalnumeric.pyc", line 85, in ? #from > Numeric import * > > File "Numeric.pyc", line 93, in ? #from > Precision import * > > File "Precision.pyc", line 26, in ? #_code_table > = _fill_table(typecodes) > > File "Precision.pyc", line 23, in _fill_table #table[key] > = _get_precisions(value) > > File "Precision.pyc", line 18, in _get_precisions > > TypeError: data type not understood > > > > This refers to the next in Precision.py in Numeric: > > > > typecodes = {'Character':'c', 'Integer':'1sil', 'UnsignedInteger':'bwu', > 'Float':'fd', 'Complex':'FD'} > > def _get_precisions(typecodes): > > lst = [] > > for t in typecodes: > > lst.append( (zeros( (1,), t ).itemsize()*8, t) ) > #this is line 18 > > return lst > > > > As far as I understand (I've been working with Python for just a couple of > months), there is a problem with the typecodes. What can I do to solve this > problem? > > > > I've not found a version of GDAL that binds to Numpy. I am using GDAL from > hobutools (v. 1.75), Numeric v. 24.2, Python 2.4.3 on WinXP. > Looks like GDAL supports numpy These facilities have evolved somewhat over time. In the past the package was known as "Numeric" and imported using "import Numeric". A new generation is imported using "import *numpy*". Currently the old generation bindings only support the older Numeric package, and the new generatio bindings only support the new generation *numpy* package. They are mostly compatible, and by importing gdalnumeric you will get whichever is appropriate to the current bindings type. I would look for a package with a more recent version of GDAL that supports numpy. Apart from that, I can't help you. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From amsd2013 at yahoo.com Sun Jun 17 12:57:04 2007 From: amsd2013 at yahoo.com (Ali Santacruz) Date: Sun, 17 Jun 2007 09:57:04 -0700 (PDT) Subject: [Numpy-discussion] Problems with Numeric Message-ID: <315035.59061.qm@web62311.mail.re1.yahoo.com> Thank you very much Chuck for your help. Regards, Ali S. ----- Mensaje original ---- De: Charles R Harris Para: Discussion of Numerical Python Enviado: domingo, 17 de junio, 2007 10:25:01 Asunto: Re: [Numpy-discussion] Problems with Numeric On 6/16/07, Ali Santacruz wrote: Hi dear list, I am designing an application that uses GDAL (which seems to bind to Numeric) to read an image and convert it to an array. The code runs without problem in the console, then I compile it (apparently without problems), but when I try to launch the application, it fails and the next error appears in the .log: Traceback (most recent call last): File "ViewGdal_0.0-1.py", line 10, in ? 
File "gdal\__init__.pyc", line 11, in ? File "gdal\gdalnumeric.pyc", line 85, in ? #from Numeric import * File "Numeric.pyc", line 93, in ? #from Precision import * File "Precision.pyc", line 26, in ? #_code_table = _fill_table(typecodes) File "Precision.pyc", line 23, in _fill_table #table[key] = _get_precisions(value) File "Precision.pyc", line 18, in _get_precisions TypeError: data type not understood This refers to the next in Precision.py in Numeric: typecodes = {'Character':'c', 'Integer':'1sil', 'UnsignedInteger':'bwu', 'Float':'fd', 'Complex':'FD'} def _get_precisions(typecodes): lst = [] for t in typecodes: lst.append( (zeros( (1,), t ).itemsize()*8, t) ) #this is line 18 return lst As far as I understand (I've been working with Python for just a couple of months), there is a problem with the typecodes. What can I do to solve this problem? I've not found a version of GDAL that binds to Numpy. I am using GDAL from hobutools (v. 1.75), Numeric v. 24.2, Python 2.4.3 on WinXP. Looks like GDAL supports numpy These facilities have evolved somewhat over time. In the past the package was known as "Numeric" and imported using "import Numeric". A new generation is imported using "import numpy". Currently the old generation bindings only support the older Numeric package, and the new generatio bindings only support the new generation numpy package. They are mostly compatible, and by importing gdalnumeric you will get whichever is appropriate to the current bindings type. I would look for a package with a more recent version of GDAL that supports numpy. Apart from that, I can't help you. Chuck _______________________________________________ Numpy-discussion mailing list Numpy-discussion at scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion __________________________________________________ Correo Yahoo! Espacio para todos tus mensajes, antivirus y antispam ?gratis! Reg?strate ya - http://correo.espanol.yahoo.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Sun Jun 17 15:48:38 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 17 Jun 2007 14:48:38 -0500 Subject: [Numpy-discussion] question about numpy In-Reply-To: <20070617133412.GP20362@mentat.za.net> References: <38423.132.236.86.109.1181663908.squirrel@webmail.cornell.edu> <20070615194437.GA32712@arbutus.physics.mcmaster.ca> <20070617133412.GP20362@mentat.za.net> Message-ID: <46759016.9060905@gmail.com> Stefan van der Walt wrote: > On Fri, Jun 15, 2007 at 03:44:37PM -0400, David M. Cooke wrote: >>> I meet a problem when I installed numpy. I installed numpy by the command >>> "python setup.py install". Then I tested it by "python -c 'import numpy; >>> numpy.test()'". But it doesn't work. There is an error message: >>> "Running from numpy source directory. >> ^ don't do that :) >> >> Instead, change out of the source directory, and rerun. > > Is there any reason why we can't make that work? We have to be able to bootstrap the build process somehow. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From david at ar.media.kyoto-u.ac.jp Mon Jun 18 02:46:38 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 18 Jun 2007 15:46:38 +0900 Subject: [Numpy-discussion] SharedLibrary builder (ticket 213) Message-ID: <46762A4E.6030806@ar.media.kyoto-u.ac.jp> Hi, following the discussion on using ctypes in scipy, the main problem seems to be the inability to build a dll usable by ctypes with distutils. Numpy ticket 213 tackles the issue; as I need it personally, I am willing to work on the feature, but I would need some advice on numpy.distutils: - First, is anyone working on the SharedLibrary right now? - How to add a new argument to numpy.distutils.setup? The way I see it, but I don't understand the whole distutils arch yet so I may be wrong, would be to add one argument to the setup function, like shared_libraries, which would contain a list of objects representing the libraries (sources, name, etc...), and which would be built by a special builder. But right now, I don't know how the whole calling sequence works so that my own builder is called for the items in a given argument of setup. cheers, David From lbolla at gmail.com Mon Jun 18 03:18:24 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Mon, 18 Jun 2007 09:18:24 +0200 Subject: [Numpy-discussion] f2py and type construct In-Reply-To: <80c99e790706110511mc456daes2fe664e195be2aaf@mail.gmail.com> References: <80c99e790706110511mc456daes2fe664e195be2aaf@mail.gmail.com> Message-ID: <80c99e790706180018u4a23f128r46b52e2e7a9913e9@mail.gmail.com> hi all. I'm trying to compile an F90 source file with f2py, but it fails with the construct "type ... end type". here is an example:

--------------------
! file test19.f90
module basic
    implicit none
    save
    integer, parameter :: ciao = 17
end module basic

module basic2
    implicit none
    save
    type test_t
        integer :: x
    end type test_t
    type(test_t) :: ciao
end module basic2
----------------------

$>f2py -c test19.f90 -m test --fcompiler=intele --compiler=intel (I'm compiling on an SGI Altix) and this is the error message: ---------------------- running build running config_fc running build_src building extension "test" sources f2py options: [] f2py:> /tmp/tmprBrnf7/src.linux-ia64-2.5/testmodule.c creating /tmp/tmprBrnf7 creating /tmp/tmprBrnf7/src.linux-ia64-2.5 Reading fortran codes... Reading file 'test19.f90' (format:free) Post-processing... Block: test Block: basic Block: basic2 Block: test_t Post-processing (stage 2)... Block: test Block: unknown_interface Block: basic Block: basic2 Block: test_t Building modules... Building module "test"... Constructing F90 module support for "basic"... Variables: ciao Constructing F90 module support for "basic2"... Variables: ciao getctype: No C-type found in "{'typespec': 'type', 'typename': 'test_t'}", assuming void.
Traceback (most recent call last): File "/xlv1/labsoi_devices/bollalo001/bin/f2py", line 26, in main() File "/xlv1/labsoi_devices/bollalo001/lib/python2.5/site-packages/numpy/f2py/f2py2e.py", line 552, in main run_compile() File "/xlv1/labsoi_devices/bollalo001/lib/python2.5/site-packages/numpy/f2py/f2py2e.py", line 539, in run_compile setup(ext_modules = [ext]) File "/xlv1/labsoi_devices/bollalo001/lib/python2.5/site-packages/numpy/distutils/core.py", line 174, in setup return old_setup(**new_attr) File "/xlv1/labsoi_devices/bollalo001/lib/python2.5/distutils/core.py", line 151, in setup dist.run_commands() File "/xlv1/labsoi_devices/bollalo001/lib/python2.5/distutils/dist.py", line 974, in run_commands self.run_command(cmd) File "/xlv1/labsoi_devices/bollalo001/lib/python2.5/distutils/dist.py", line 994, in run_command cmd_obj.run() File "/xlv1/labsoi_devices/bollalo001/lib/python2.5/distutils/command/build.py", line 112, in run self.run_command(cmd_name) File "/xlv1/labsoi_devices/bollalo001/lib/python2.5/distutils/cmd.py", line 333, in run_command self.distribution.run_command(command) File "/xlv1/labsoi_devices/bollalo001/lib/python2.5/distutils/dist.py", line 994, in run_command cmd_obj.run() [...] KeyError: 'void' Exit 1 ------------------------------------ module basic gives no problems, but module basic2 yes, because of the type construct. it seems that f2py doesn't support "type" construct. am I right? is there a workaround? thank you very much, lorenzo. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at stevesimmons.com Mon Jun 18 10:41:47 2007 From: mail at stevesimmons.com (Stephen Simmons) Date: Tue, 19 Jun 2007 00:41:47 +1000 Subject: [Numpy-discussion] Anyone written an SQL-like interface to numpy/PyTables? Message-ID: <467699AB.2000507@stevesimmons.com> Hi, Has anyone written a parser for SQL-like queries against PyTables HDF tables or numpy recarrays? I'm asking because I have written code for grouping then summing rows of source data, where the groups are defined by functions of the source data, or looking up a related field in a separate lookup tables. I use this for tracking the performance of customer segments by status, with 48 monthly files of product usage/customer status data on 5m customers. Each of these is a 5m row HDF file, with several other 5m row HDF files that are used to work out which segment a customer belongs to. This grouping and summing is equivalent to something like the following SQL code: SELECT grp_fn1(table1.*), grp_fn2(table1.*), grp_fn3(table2.*), grp_fn4(table3.*), count(table1.*), sum(table1.field1), sum(table1.field2), ..., sum(table1.fieldK) FROM table1 PARTITION(date) LEFT JOIN table2 ON table1.field0=table2.field0 LEFT JOIN table3 ON table2.field0=table3.field0 WHERE min_date<=date<=max_date GROUP BY grp_fn1(table1.*), grp_fn2(table1.*), grp_fn3(table2.*), grp_fn4(table3.*) I'm using numpy.bincount() function to do the grouping/summing, numpy.searchsorted() for fast lookup tables implementing the grouping functions grp_fn(), and some other C functions for a fast "zip" join of related tables whose primary keys are in the same order as the monthly date partitions. The Python code that specifies the grouping/summing fields looks like this: agg = HDFAggregator('table1.hdf') agg.add_group_function('MONTH', ... ) agg.add_group_function('SEGMENT', ... ) agg.add_group_function('STATUS', ... 
) agg.do_aggregation(groupby='MONTH SEGMENT STATUS', count='CUST_NO', sum=') agg.add_calculated_field('PROFIT', 'VOLUME*(PRICE* (1-DISCOUNT)-COGS) - COSTOFSALES') agg.save('output.hdf') On my laptop, this zips over my data at a speed of 400k rows/sec, aggregating it into 230,000 groups (48 months x 120 customer segments/subsegments x 5 product groups x 8 statuses) with subtotals for 30 data fields in each group. This is essentially as fast as PyTables can read in the HDF files from disk; peak speeds with fewer groups (e.g. 48x5x1x4) are above 1Mrows/sec if the HDF files are already in the disk cache. One option I am considering now is bolting an SQL-like parser on the front to provide a more natural interface for those unfortunate people who prefer SQL to Python. I don't want to write an SQL parser from scratch, so it would be great to know if there are any existing projects to put an SQL-like interface on numpy or PyTables (other than numexpr). So has anyone looked at using an SQL-like syntax for querying numpy/PyTables data? Cheers Stephen From bioinformed at gmail.com Mon Jun 18 11:19:33 2007 From: bioinformed at gmail.com (Kevin Jacobs ) Date: Mon, 18 Jun 2007 11:19:33 -0400 Subject: [Numpy-discussion] Anyone written an SQL-like interface to numpy/PyTables? In-Reply-To: <467699AB.2000507@stevesimmons.com> References: <467699AB.2000507@stevesimmons.com> Message-ID: <2e1434c10706180819h58e0c872r2613a7b973a480ef@mail.gmail.com> I've often thought it would be interesting if someone would build a custom table adapter to use PyTables in SQLlite. Ie, essentially bolting a SQL parser and query engine on top of PyTables. Unfortunately, I don't have time to do this, though hopefully someone will at some point. -Kevin On 6/18/07, Stephen Simmons wrote: > > Hi, > > Has anyone written a parser for SQL-like queries against PyTables HDF > tables or numpy recarrays? > > I'm asking because I have written code for grouping then summing rows of > source data, where the groups are defined by functions of the source > data, or looking up a related field in a separate lookup tables. I use > this for tracking the performance of customer segments by status, with > 48 monthly files of product usage/customer status data on 5m customers. > Each of these is a 5m row HDF file, with several other 5m row HDF files > that are used to work out which segment a customer belongs to. > > This grouping and summing is equivalent to something like the following > SQL code: > SELECT > grp_fn1(table1.*), grp_fn2(table1.*), grp_fn3(table2.*), > grp_fn4(table3.*), > count(table1.*), > sum(table1.field1), sum(table1.field2), ..., sum(table1.fieldK) > FROM table1 PARTITION(date) > LEFT JOIN table2 ON table1.field0=table2.field0 > LEFT JOIN table3 ON table2.field0=table3.field0 > WHERE min_date<=date<=max_date > GROUP BY grp_fn1(table1.*), grp_fn2(table1.*), grp_fn3(table2.*), > grp_fn4(table3.*) > > I'm using numpy.bincount() function to do the grouping/summing, > numpy.searchsorted() for fast lookup tables implementing the grouping > functions grp_fn(), and some other C functions for a fast "zip" join of > related tables whose primary keys are in the same order as the monthly > date partitions. > > The Python code that specifies the grouping/summing fields looks like > this: > agg = HDFAggregator('table1.hdf') > agg.add_group_function('MONTH', ... ) > agg.add_group_function('SEGMENT', ... ) > agg.add_group_function('STATUS', ... 
) > agg.do_aggregation(groupby='MONTH SEGMENT STATUS', count='CUST_NO', > sum=') > agg.add_calculated_field('PROFIT', 'VOLUME*(PRICE* (1-DISCOUNT)-COGS) > - COSTOFSALES') > agg.save('output.hdf') > > On my laptop, this zips over my data at a speed of 400k rows/sec, > aggregating it into 230,000 groups (48 months x 120 customer > segments/subsegments x 5 product groups x 8 statuses) with subtotals for > 30 data fields in each group. This is essentially as fast as PyTables > can read in the HDF files from disk; peak speeds with fewer groups (e.g. > 48x5x1x4) are above 1Mrows/sec if the HDF files are already in the disk > cache. > > One option I am considering now is bolting an SQL-like parser on the > front to provide a more natural interface for those unfortunate people > who prefer SQL to Python. I don't want to write an SQL parser from > scratch, so it would be great to know if there are any existing projects > to put an SQL-like interface on numpy or PyTables (other than numexpr). > > So has anyone looked at using an SQL-like syntax for querying > numpy/PyTables data? > > Cheers > > Stephen > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sturla at molden.no Mon Jun 18 11:30:39 2007 From: sturla at molden.no (Sturla Molden) Date: Mon, 18 Jun 2007 17:30:39 +0200 Subject: [Numpy-discussion] Issues with the memmap object Message-ID: <4676A51F.7040206@molden.no> After struggling with NumPy's memmap object, I examined the code and detected three severe problems. I suggest that memmap is removed from NumPy, at least on Windows, as its shortcomings are severe and undocumented. Problem 1: I/O errors are never detected on Win32: On Windows, i/o errors are trapped using structured exception handling when using memory mapped objects. Neither NumPy nor Python uses structured exception handling on Win32. This means that i/o errors (such as network or disk failure) will go undetected, and be a source of obscure bugs. The bugfix for this is to wrap any access attempt to a PyArrayObject's "data" pointer with __try and __except blocks, and to use an MSVC compiler on Windows. GCC/MinGW cannot be used, as it does not support structured exception handling. In other words,

PyArrayObject *memmap;

__try {
    /* safe read/write access to memmap->data here */
}
__except( GetExceptionCode() == EXCEPTION_IN_PAGE_ERROR ?
          EXCEPTION_EXECUTE_HANDLER : EXCEPTION_CONTINUE_SEARCH) {
    /* Windows signaled an I/O error, handle the problem here */
}

Not only must NumPy itself be rewritten, but also any library getting a data pointer from a NumPy memmap array. Fixing this will be extremely difficult, if not impossible. The only safe way to access file data from NumPy is numpy.fromfile() and numpy.array.tofile(). Problem 2: Mapping always starts from the beginning of the file: Python's standard mmap object maps from the beginning of the file, regardless of the size. NumPy's memmap object depends on Python's mmap through the buffer protocol. Even though NumPy's memmap object takes an offset parameter, the actual memory mapping starts from the beginning of the file. Thus, virtual memory equal to the memmap object's offset parameter will be leaked until the memmap object is deleted.
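To illustrate (a rough sketch, untested, with a made-up file name and sizes):

import numpy

# Request a 4 KB view one gigabyte into a big file...
m = numpy.memmap('data.bin', dtype=numpy.uint8, mode='r',
                 offset=2**30, shape=(4096,))
# ...the view itself is fine, but Python's mmap still maps the file
# from byte 0, so 2**30 bytes of address space are consumed for as
# long as m is alive.

# A fallback that really touches only the tail of the file:
f = open('data.bin', 'rb')
f.seek(2**30)
a = numpy.fromfile(f, dtype=numpy.uint8, count=4096)
f.close()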
Problem 3: No 64 bit support on Windows or Linux: On Linux, large files must be memory mapped using mmap64 (or mmap2 if 4k boundaries are acceptable). On Windows, CreateFileMapping/MapViewOfFile has 64 bit support, but Python's mmap does not use it (the high offset DWORD is always zero). Only files smaller than 4 GB can be memory mapped. Regards, Sturla Molden From Chris.Barker at noaa.gov Mon Jun 18 12:16:06 2007 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Mon, 18 Jun 2007 09:16:06 -0700 Subject: [Numpy-discussion] Buildbot for numpy In-Reply-To: <20070616193744.GA5276@arbutus.physics.mcmaster.ca> References: <20070616081155.GC20362@mentat.za.net> <20070616193744.GA5276@arbutus.physics.mcmaster.ca> Message-ID: <4676AFC6.3080404@noaa.gov> David M. Cooke wrote: > Awesome. I've got a iBook (PPC G4) running OS X that can be used as a slave > (it's just being a server right now). It looks like they already have a PPC OS-X box. Anyone have an Intel machine to offer up? (mine's a PPC Dual G5) -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From cookedm at physics.mcmaster.ca Mon Jun 18 14:37:51 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Mon, 18 Jun 2007 14:37:51 -0400 Subject: [Numpy-discussion] Buildbot for numpy In-Reply-To: <4676AFC6.3080404@noaa.gov> References: <20070616081155.GC20362@mentat.za.net> <20070616193744.GA5276@arbutus.physics.mcmaster.ca> <4676AFC6.3080404@noaa.gov> Message-ID: <20070618183751.GA17345@arbutus.physics.mcmaster.ca> On Mon, Jun 18, 2007 at 09:16:06AM -0700, Christopher Barker wrote: > David M. Cooke wrote: > > Awesome. I've got a iBook (PPC G4) running OS X that can be used as a slave > > (it's just being a server right now). > > It looks like they already have a PPC OS-X box. Anyone have an Intel > machine to offer up? (mine's a PPC Dual G5) That one's mine. You can tell because it's slow ;-) A 64-bit G5 build would be good too, as its longdouble semantics are different IIRC. And I don't think there's a Python 2.3 builder yet? -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From mike.ressler at alum.mit.edu Mon Jun 18 15:24:37 2007 From: mike.ressler at alum.mit.edu (Mike Ressler) Date: Mon, 18 Jun 2007 12:24:37 -0700 Subject: [Numpy-discussion] Issues with the memmap object In-Reply-To: <4676A51F.7040206@molden.no> References: <4676A51F.7040206@molden.no> Message-ID: <268febdf0706181224j6cb5a4a6p3ac86a777cd10f8@mail.gmail.com> What versions of python and numpy are you using? On 6/18/07, Sturla Molden wrote: > Problem 3: No 64 bit support on Windows or Linux: > > On Linux, large files must be memory mapped using mmap64 (or mmap2 if 4k > boundaries are acceptable). On Windows, CreateFileMapping/MapViewOfFile > has 64 bit support, but Python's mmap does not use it (the high offset > DWORD is always zero). Only files smaller than 4 GB can be memory mapped. With python 2.5.1 and numpy 1.0.3 under Fedora 7 x86_64, I just now memmap'ed a 10 GB image cube without any trouble. I can't comment about what will or won't work under Windows, but I've been doing > 4 GB files with linux ever since 64-bit support showed up in the early betas of python 2.5 and numpy 1.0. 
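The call was essentially something like this (a from-memory sketch, untested, with a made-up file name and shape):

import numpy

# ~10 GB of float32: 2560 x 1024 x 1024
cube = numpy.memmap('cube.raw', dtype=numpy.float32, mode='r',
                    shape=(2560, 1024, 1024))
print cube[1234, 512, 512]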
memmap'ing certainly isn't broken for me - removing it would be devastating. Mike -- mike.ressler at alum.mit.edu From Chris.Barker at noaa.gov Mon Jun 18 15:39:53 2007 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Mon, 18 Jun 2007 12:39:53 -0700 Subject: [Numpy-discussion] Buildbot for numpy In-Reply-To: <20070618183751.GA17345@arbutus.physics.mcmaster.ca> References: <20070616081155.GC20362@mentat.za.net> <20070616193744.GA5276@arbutus.physics.mcmaster.ca> <4676AFC6.3080404@noaa.gov> <20070618183751.GA17345@arbutus.physics.mcmaster.ca> Message-ID: <4676DF89.9030300@noaa.gov> David M. Cooke wrote: > That one's mine. You can tell because it's slow ;-) A 64-bit G5 build > would be good too, as its longdouble semantics are different IIRC. Well, I have a DualG5, running OS-X 10.4.9, and Python 2.3, 2.4, and 2.5 installed (though I'm only using 2.5 for anything new now). However, I don't think I've got 64bit anything running. Also, I'm behind a pretty rigid firewall -- what protocols are used to communicate between the buildbot and the nodes? -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From sturla at molden.no Mon Jun 18 16:48:08 2007 From: sturla at molden.no (Sturla Molden) Date: Mon, 18 Jun 2007 22:48:08 +0200 Subject: [Numpy-discussion] Issues with the memmap object In-Reply-To: <268febdf0706181224j6cb5a4a6p3ac86a777cd10f8@mail.gmail.com> References: <4676A51F.7040206@molden.no> <268febdf0706181224j6cb5a4a6p3ac86a777cd10f8@mail.gmail.com> Message-ID: <4676EF88.4070603@molden.no> On 6/18/2007 9:24 PM, Mike Ressler wrote: > With python 2.5.1 and numpy 1.0.3 under Fedora 7 x86_64, I just now > memmap'ed a 10 GB image cube without any trouble. You have a 64 bit system. On Linux, the off_t used by mmap's offset is similar to a size_t. Although the larger off_t on a 64 bit system hides the problem for files of that size (10 GB), it is still there on a 32 bit system. Try to memory map the last 4096 bytes of that 10 GB file on a 32 bit system. Memory mapping a single page should not be a problem on any computer. You will see that memmap is broken. Although the offset problem is trivial to solve (it requires a small change to the memmap object), this is not the case with the i/o error problem. It is anything but trivial. Sturla Molden From sturla at molden.no Mon Jun 18 16:59:44 2007 From: sturla at molden.no (Sturla Molden) Date: Mon, 18 Jun 2007 22:59:44 +0200 Subject: [Numpy-discussion] Issues with the memmap object In-Reply-To: <268febdf0706181224j6cb5a4a6p3ac86a777cd10f8@mail.gmail.com> References: <4676A51F.7040206@molden.no> <268febdf0706181224j6cb5a4a6p3ac86a777cd10f8@mail.gmail.com> Message-ID: <4676F240.9000903@molden.no> On 6/18/2007 9:24 PM, Mike Ressler wrote: > What versions of python and numpy are you using? I am using Python 2.5.1 and Numpy 1.0.3 on Windows XP (32 bit). I examined the code in SVN, and drew my conclusions from that. Sidenote on trapping i/o error on Windows: On Windows, i/o errors must be handled using "structured exception handling" (SEH) or "vectored exception handling" (VEH). If Python can handle asynchronous exceptions (a.k.a. software interrupts), one could possibly use VEH to deal with i/o errors associated with memory mapped files on Windows. Can Python accept a PyExc_IOError asynchronously in a thread?
A function must return NULL to inform Python of an exception, but here it has no knowledge of the error even occurring! When the registered handler returns, execution resumes where the error occurred, as if it never happened. Thus, Python must be informed of the exception regardless of the return value. Getting Python to raise an IOError when Windows raises an EXCEPTION_IN_PAGE_ERROR is rather tricky. At least I don't know how to do it, but that is what we need for memmap to be safe on Windows. I regret that I do not know how Linux signals i/o errors from memory mapped files. Sturla Molden From david at ar.media.kyoto-u.ac.jp Tue Jun 19 04:06:42 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 19 Jun 2007 17:06:42 +0900 Subject: [Numpy-discussion] question about numpy In-Reply-To: <46759016.9060905@gmail.com> References: <38423.132.236.86.109.1181663908.squirrel@webmail.cornell.edu> <20070615194437.GA32712@arbutus.physics.mcmaster.ca> <20070617133412.GP20362@mentat.za.net> <46759016.9060905@gmail.com> Message-ID: <46778E92.2060801@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > Stefan van der Walt wrote: >> On Fri, Jun 15, 2007 at 03:44:37PM -0400, David M. Cooke wrote: >>>> I meet a problem when I installed numpy. I installed numpy by the command >>>> "python setup.py install". Then I tested it by "python -c 'import numpy; >>>> numpy.test()'". But it doesn't work. There is an error message: >>>> "Running from numpy source directory. >>> ^ don't do that :) >>> >>> Instead, change out of the source directory, and rerun. >> Is there any reason why we can't make that work? > > We have to be able to bootstrap the build process somehow. > Wouldn't it work with the upcoming new import semantics in Python 2.6? The problem is that you cannot tell the difference between $PWD/numpy and $PYTHONPATH/numpy, or is this more subtle? David From stefan at sun.ac.za Tue Jun 19 04:45:00 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Tue, 19 Jun 2007 10:45:00 +0200 Subject: [Numpy-discussion] question about numpy In-Reply-To: <46778E92.2060801@ar.media.kyoto-u.ac.jp> References: <38423.132.236.86.109.1181663908.squirrel@webmail.cornell.edu> <20070615194437.GA32712@arbutus.physics.mcmaster.ca> <20070617133412.GP20362@mentat.za.net> <46759016.9060905@gmail.com> <46778E92.2060801@ar.media.kyoto-u.ac.jp> Message-ID: <20070619084500.GI20362@mentat.za.net> On Tue, Jun 19, 2007 at 05:06:42PM +0900, David Cournapeau wrote: > Robert Kern wrote: > > Stefan van der Walt wrote: > >> On Fri, Jun 15, 2007 at 03:44:37PM -0400, David M. Cooke wrote: > >>>> I meet a problem when I installed numpy. I installed numpy by the command > >>>> "python setup.py install". Then I tested it by "python -c 'import numpy; > >>>> numpy.test()'". But it doesn't work. There is an error message: > >>>> "Running from numpy source directory. > >>> ^ don't do that :) > >>> > >>> Instead, change out of the source directory, and rerun. > >> Is there any reason why we can't make that work? > > > > We have to be able to bootstrap the build process somehow. > > > Wouldn't it work with the upcoming new import semantics in Python 2.6? > The problem is that you cannot tell the difference between $PWD/numpy > and $PYTHONPATH/numpy, or is this more subtle? I think part of the problem is that the extensions are built into some temporary directory (and I don't know of a way to query distutils for its location), which must also appear on the path for the tests to function properly.
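(Untested guess: distutils might be asked for the default location along these lines,

from distutils.dist import Distribution

# ask the 'build' command for its computed defaults
b = Distribution().get_command_obj('build')
b.finalize_options()
print b.build_platlib    # e.g. build/lib.linux-i686-2.5

but I haven't checked whether that matches what numpy.distutils actually uses.)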
Cheers Stéfan From svetosch at gmx.net Tue Jun 19 06:19:57 2007 From: svetosch at gmx.net (Sven Schreiber) Date: Tue, 19 Jun 2007 11:19:57 +0100 Subject: [Numpy-discussion] Is this an indexing bug? In-Reply-To: <4673F889.1020902@gmx.net> References: <1181739407.629903.108000@i38g2000prf.googlegroups.com> <4673F889.1020902@gmx.net> Message-ID: <4677ADCD.2020606@gmx.net> Sven Schreiber schrieb:
> Tom K. schrieb:
>>>>> h = zeros((1, 4, 100))
>>>>> h[0,:,arange(14)].shape
>> (14, 4)
>>
>
> After reading section 3.4.2.1 of the numpy book, I also still don't expect this result. So if it's not a bug, I'd be glad if some expert could explain why not.
>
To be more specific, I would expect shape==(4,14). I am going to file a ticket soon if nobody explains why everything is just right and it's only Tom and I who just don't get it ;-) -sven From sturla at molden.no Tue Jun 19 06:14:22 2007 From: sturla at molden.no (Sturla Molden) Date: Tue, 19 Jun 2007 12:14:22 +0200 Subject: [Numpy-discussion] Is this an indexing bug? In-Reply-To: <4677ADCD.2020606@gmx.net> References: <1181739407.629903.108000@i38g2000prf.googlegroups.com> <4673F889.1020902@gmx.net> <4677ADCD.2020606@gmx.net> Message-ID: <4677AC7E.5050704@molden.no> On 6/19/2007 12:19 PM, Sven Schreiber wrote:
> To be more specific, I would expect shape==(4,14).

>>> h = numpy.zeros((1,4,14))
>>> h[0,:,numpy.arange(14)].shape
(14, 4)
>>> h[0,:,:].shape
(4, 14)
>>>

h[0,:,numpy.arange(14)] is a case of "advanced indexing". You can also see that

>>> h[0,:,[0,1,2,3,4,5,6,7,8,9,10,11,12,13]].shape
(14, 4)

Citing from Travis' book, page 83: "Example 2: Now let X.shape be (10,20,30,40,50) and suppose ind1 and ind2 are broadcastable to the shape (2,3,4). Then X[:,ind1,ind2] has shape (10,2,3,4,40,50) because the (20,30)-shaped subspace from X has been replaced with the (2,3,4) subspace from the indices. However, X[:,ind1,:,ind2,:] has shape (2,3,4,10,30,50) because there is no unambiguous place to drop in the indexing subspace, thus it is tacked-on to the beginning. It is always possible to use .transpose() to move the subspace anywhere desired. This example cannot be replicated using take." So I think this strange behaviour is actually correct. Sturla Molden From sturla at molden.no Tue Jun 19 06:35:05 2007 From: sturla at molden.no (Sturla Molden) Date: Tue, 19 Jun 2007 12:35:05 +0200 Subject: [Numpy-discussion] Is this an indexing bug? In-Reply-To: <4677AC7E.5050704@molden.no> References: <1181739407.629903.108000@i38g2000prf.googlegroups.com> <4673F889.1020902@gmx.net> <4677ADCD.2020606@gmx.net> <4677AC7E.5050704@molden.no> Message-ID: <4677B159.9030900@molden.no> On 6/19/2007 12:14 PM, Sturla Molden wrote:
> h[0,:,numpy.arange(14)] is a case of "advanced indexing". You can also
> see that
>
> >>> h[0,:,[0,1,2,3,4,5,6,7,8,9,10,11,12,13]].shape
> (14, 4)

Another way to explain this is that numpy.arange(14) and [0,1,2,3,4,5,6,7,8,9,10,11,12,13] is a sequence (i.e. iterator). So when NumPy iterates the sequence, the iterator yields a single integer, let's call it I. Using this integer as an index to h, gives a = h[0,:,I] which has shape=(4,). This gives us a sequence of arrays of length 4.
In other words,

>>> a = numpy.zeros(4)
>>> numpy.array([a,a,a,a,a,a,a,a,a,a,a,a,a,a]).shape
(14, 4)

That is analogous to array([(0,0,0,0), (0,0,0,0), ...., (0,0,0,0)]) Sturla Molden From stefan at sun.ac.za Tue Jun 19 07:28:52 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Tue, 19 Jun 2007 13:28:52 +0200 Subject: [Numpy-discussion] Is this an indexing bug? In-Reply-To: <4677B159.9030900@molden.no> References: <1181739407.629903.108000@i38g2000prf.googlegroups.com> <4673F889.1020902@gmx.net> <4677ADCD.2020606@gmx.net> <4677AC7E.5050704@molden.no> <4677B159.9030900@molden.no> Message-ID: <20070619112852.GJ20362@mentat.za.net> On Tue, Jun 19, 2007 at 12:35:05PM +0200, Sturla Molden wrote:
> On 6/19/2007 12:14 PM, Sturla Molden wrote:
>
> > h[0,:,numpy.arange(14)] is a case of "advanced indexing". You can also
> > see that
> >
> > >>> h[0,:,[0,1,2,3,4,5,6,7,8,9,10,11,12,13]].shape
> > (14, 4)
>
> Another way to explain this is that numpy.arange(14) and
> [0,1,2,3,4,5,6,7,8,9,10,11,12,13] is a sequence (i.e. iterator). So
> when NumPy iterates the sequence, the iterator yields a single integer,
> let's call it I. Using this integer as an index to h, gives a = h[0,:,I]
> which has shape=(4,). This gives us a sequence of arrays of length 4.

If you follow this analogy,

x = N.arange(100).reshape((10,10))
x[:,N.arange(5)].shape

should be (5, 10), while in reality it is (10, 5). Cheers Stéfan From sturla at molden.no Tue Jun 19 08:05:35 2007 From: sturla at molden.no (Sturla Molden) Date: Tue, 19 Jun 2007 14:05:35 +0200 Subject: [Numpy-discussion] Is this an indexing bug? In-Reply-To: <20070619112852.GJ20362@mentat.za.net> References: <1181739407.629903.108000@i38g2000prf.googlegroups.com> <4673F889.1020902@gmx.net> <4677ADCD.2020606@gmx.net> <4677AC7E.5050704@molden.no> <4677B159.9030900@molden.no> <20070619112852.GJ20362@mentat.za.net> Message-ID: <4677C68F.7070409@molden.no> On 6/19/2007 1:28 PM, Stefan van der Walt wrote:
>
> x = N.arange(100).reshape((10,10))
> x[:,N.arange(5)].shape
>
> should be (5, 10), while in reality it is (10, 5).

>>> y = numpy.arange(100).reshape((10,10))
>>> y[:,numpy.arange(5)].shape
(10,5)
>>> x = numpy.arange(100).reshape((1,10,10))
>>> x[0,:,numpy.arange(5)].shape
(5,10)

hm... Sturla Molden From sturla at molden.no Tue Jun 19 08:15:53 2007 From: sturla at molden.no (Sturla Molden) Date: Tue, 19 Jun 2007 14:15:53 +0200 Subject: [Numpy-discussion] Is this an indexing bug? In-Reply-To: <4677C68F.7070409@molden.no> References: <1181739407.629903.108000@i38g2000prf.googlegroups.com> <4673F889.1020902@gmx.net> <4677ADCD.2020606@gmx.net> <4677AC7E.5050704@molden.no> <4677B159.9030900@molden.no> <20070619112852.GJ20362@mentat.za.net> <4677C68F.7070409@molden.no> Message-ID: <4677C8F9.9030308@molden.no>

>>> x = numpy.arange(100).reshape((1,10,10))
>>> x[0,:,numpy.arange(5)].shape
(5, 10)
>>> x[:,:,numpy.arange(5)].shape
(1, 10, 5)

It looks like a bug that needs to be squashed. S.M. From svetosch at gmx.net Tue Jun 19 13:13:05 2007 From: svetosch at gmx.net (Sven Schreiber) Date: Tue, 19 Jun 2007 18:13:05 +0100 Subject: [Numpy-discussion] Is this an indexing bug?
In-Reply-To: <4677C8F9.9030308@molden.no> References: <1181739407.629903.108000@i38g2000prf.googlegroups.com> <4673F889.1020902@gmx.net> <4677ADCD.2020606@gmx.net> <4677AC7E.5050704@molden.no> <4677B159.9030900@molden.no> <20070619112852.GJ20362@mentat.za.net> <4677C68F.7070409@molden.no> <4677C8F9.9030308@molden.no> Message-ID: <46780EA1.6080504@gmx.net> Sturla Molden schrieb:
>
> >>> x = numpy.arange(100).reshape((1,10,10))
>
> >>> x[0,:,numpy.arange(5)].shape
> (5, 10)
>
> >>> x[:,:,numpy.arange(5)].shape
> (1, 10, 5)
>
> It looks like a bug that needs to be squashed.
>
> S.M.

And you already had me convinced ;-) I'm still curious which one's the bug and which one is expected... Anybody? -sven From theller at ctypes.org Tue Jun 19 14:01:00 2007 From: theller at ctypes.org (Thomas Heller) Date: Tue, 19 Jun 2007 20:01:00 +0200 Subject: [Numpy-discussion] Buildbot for numpy In-Reply-To: <20070616081155.GC20362@mentat.za.net> References: <20070616081155.GC20362@mentat.za.net> Message-ID: Stefan van der Walt schrieb: > http://buildbot.scipy.org > > If your platform is not currently on the list, please consider > volunteering a machine as a build slave. This machine will be > required to run the buildbot client, and to build a new version of > numpy whenever changes are made to the repository. (The machine does > not have to be dedicated to this task, and can be your own > workstation.) I have a windows XP 64-bit machine (a VMWare image, actually) that is only used as a buildbot client for python itself. If it helps, and if it isn't too much work for me I offer to install a numpy buildbot client on it. It has Visual Studio 2003 and 2005 installed, MS Platform SDK, and for the buildbot python 2.4 (32-bit), buildbot (0.7.5 ?), and twisted. Thomas From wright at esrf.fr Tue Jun 19 14:08:36 2007 From: wright at esrf.fr (Jon Wright) Date: Tue, 19 Jun 2007 20:08:36 +0200 Subject: [Numpy-discussion] Radix sort? In-Reply-To: References: Message-ID: <46781BA4.1010507@esrf.fr> Dear numpy experts, I see from the docs that there seem to be 3 sorting algorithms for array data (quicksort, mergesort and heapsort). After hearing a rumour about radix sorts and floats I google'd and now I'm wondering about a radix sort for numpy (and Numeric) scalars? See: http://www.stereopsis.com/radix.html http://en.wikipedia.org/wiki/Radix_sort The algorithm is apparently a trick where no comparisons are used. A shockingly bad benchmark of a swig wrapped test function below suggests it really is quicker than the array.sort() numpy method for uint8. At least with >256 element uint8 test arrays (numpy1.0.3, mingw32, winXP), for me, today; of course ymmv. With large enough arrays of all of the integer numeric types and also ieee reals, appropriate versions of the radix sort might be able to: "... kick major ass in the speed department." [http://www.cs.berkeley.edu/~jrs/61b/lec/36] Have these sorts already been implemented in scipy? Can someone share some expertise in this area? There is a problem about the sorting not being done in-place (eg: better for argsort than sort). I see there is a lexsort, but it is not 100% obvious to me how to use that to sort scalars, especially ieee floats. If numpy is a library for handling big chunks of numbers with knowable binary representations, I guess there might be some general interest in having this radix sort compiled for the platforms where it works?
Thanks for any opinions, Jon
---
==setup.py==
from distutils.core import setup, Extension
e = Extension("_sort_radix_uint8",sources=["radix_wrap.c"])
setup(ext_modules=[e])

==radix.i==
%module sort_radix_uint8
%{
#define SWIG_FILE_WITH_INIT
void sort_radix_uint8(unsigned char * ar, int len){
    int hits[256];
    int i, j;
    for(i=0;i<256;i++) hits[i]=0;
    for(i=0;i<len;i++) hits[ar[i]]++;
    i=0; j=0;
    while(i<256){
        while(hits[i]>0){
            /* shortcut for uint8 */
            ar[j++] = (unsigned char) i;
            hits[i]--;
        }
        i++;
    }
}
%}
%init %{
import_array();
%}
%include "numpy.i"
%apply (unsigned char* INPLACE_ARRAY1, int DIM1) { (unsigned char * ar, int len)}
void sort_radix_uint8(unsigned char * ar, int len);

== test.py ==
import numpy,sort_radix_uint8
from time import clock

def clearcache():
    t = numpy.random.random(1024*1024)
    t=t*t

def test(i):
    t = numpy.random.random(i)
    a1 = (t*255).astype(numpy.uint8)
    a2 = (t*255).astype(numpy.uint8)
    clearcache()
    tick=clock()
    a1.sort()
    t1 = clock()-tick
    clearcache()
    tick = clock()
    sort_radix_uint8.sort_radix_uint8(a2)
    t2 = clock() - tick
    assert (a1 == a2).all()
    clearcache()
    tick=clock()
    a1.sort() # already sorted...
    t3 = clock()-tick
    return t1,t2,t3

for j in range(25):
    r = test(pow(2,j))
    print numpy.argmin(r), j,pow(2,j),r

From charlesr.harris at gmail.com Tue Jun 19 16:29:05 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 19 Jun 2007 14:29:05 -0600 Subject: [Numpy-discussion] Radix sort? In-Reply-To: <46781BA4.1010507@esrf.fr> References: <46781BA4.1010507@esrf.fr> Message-ID: On 6/19/07, Jon Wright wrote: > > Dear numpy experts, > > I see from the docs that there seem to be 3 sorting algorithms for array > data (quicksort, mergesort and heapsort). After hearing a rumour about > radix sorts and floats I google'd and now I'm wondering about a radix > sort for numpy (and Numeric) scalars? See: > > http://www.stereopsis.com/radix.html > http://en.wikipedia.org/wiki/Radix_sort > > The algorithm is apparently a trick where no comparisons are used. A > shockingly bad benchmark of a swig wrapped test function below suggests > it really is quicker than the array.sort() numpy method for uint8. At > least with >256 element uint8 test arrays (numpy1.0.3, mingw32, winXP), > for me, today; of course ymmv. > > With large enough arrays of all of the integer numeric types and also > ieee reals, appropriate versions of the radix sort might be able to: > "... kick major ass in the speed department." > [http://www.cs.berkeley.edu/~jrs/61b/lec/36] > > Have these sorts already been implemented in scipy? Can someone share > some expertise in this area? There is a problem about the sorting not > being done in-place (eg: better for argsort than sort). I see there is a > lexsort, but it is not 100% obvious to me how to use that to sort > scalars, especially ieee floats. If numpy is a library for handling big > chunks of numbers with knowable binary representations, I guess there > might be some general interest in having this radix sort compiled for > the platforms where it works? > > Thanks for any opinions, > > Jon > --- Straight radix sort might be an interesting option for some things. However, its performance can depend on whether the input data is random or not and it takes up more space than merge sort. Other potential drawbacks arise from the bit twiddling needed for signed numbers and floats, the former solved by converting to offset binary numbers (complement the sign bit), and the latter in the way your links indicate, but both leading to a proliferation of special cases.
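In numpy those two key transformations might look roughly like this (a sketch, untested, 32 bit keys only, and NaNs not treated specially):

import numpy

def uint_key_from_int32(a):
    # offset binary: complementing the sign bit makes int32 values
    # compare correctly as unsigned radix keys
    return a.view(numpy.uint32) ^ numpy.uint32(0x80000000)

def uint_key_from_float32(a):
    # the trick from the stereopsis page: flip all bits of negative
    # floats, and only the sign bit of non-negative ones, so the
    # resulting uints order the same way as the original floats
    b = a.view(numpy.uint32)
    mask = numpy.where((b >> 31) != 0,
                       numpy.uint32(0xFFFFFFFF), numpy.uint32(0x80000000))
    return b ^ mask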
Maintaining byte order and byte addressing portability between cpu architectures might also require masking and shifting that will add computational expense and may lead to more special cases for extended precision floats and so on. That said, I would be curious to see how it works out if you want to give it a try.

==setup.py==
> from distutils.core import setup, Extension
> e = Extension("_sort_radix_uint8",sources=["radix_wrap.c"])
> setup(ext_modules=[e])
>
> ==radix.i==
> %module sort_radix_uint8
> %{
> #define SWIG_FILE_WITH_INIT
> void sort_radix_uint8(unsigned char * ar, int len){
>    int hits[256];
>    int i, j;
>    for(i=0;i<256;i++) hits[i]=0;
>    for(i=0;i<len;i++) hits[ar[i]]++;
>    i=0; j=0;
>    while(i<256){
>       while(hits[i]>0){
>          /* shortcut for uint8 */
>          ar[j++] = (unsigned char) i;
>          hits[i]--;
>       }
>       i++;
>    }
> }
> %}
> %init %{
> import_array();
> %}
> %include "numpy.i"
> %apply (unsigned char* INPLACE_ARRAY1, int DIM1) { (unsigned char * ar,
> int len)}
> void sort_radix_uint8(unsigned char * ar, int len);

You can't use this method for longer keys, you will need to copy over the whole number. Sedgewick's Algorithms in C++ has a discussion and implementation of the radix sort. And, of course, Knuth also. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Tue Jun 19 19:25:22 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 19 Jun 2007 17:25:22 -0600 Subject: [Numpy-discussion] Bugfix for numpy.info bug Message-ID:

Bug
===

In [8]: N.info(N.ones(3))
class:  ndarray
shape:  (3,)
strides:  (8,)
itemsize:  8
aligned:  True
contiguous:  True
fortran:  True
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)

/home/fperez/research/code/mwadap-stable/mwadap/test/ in ()

/home/fperez/usr/opt/lib/python2.5/site-packages/numpy/lib/utils.py in info(object, maxwidth, output, toplevel)
    298     elif isinstance(object, ndarray):
    299         import numpy.numarray as nn
--> 300         nn.info(object, output=output, numpy=1)
    301     elif isinstance(object, str):
    302         if _namedict is None:

/home/fperez/usr/opt/lib/python2.5/site-packages/numpy/numarray/functions.py in info(obj, output, numpy)
    377         extra = ""
    378         tic = ""
--> 379     print >> output, "data pointer: %s%s" % (hex(obj.ctypes._as_parameter_), extra)
    380     print >> output, "byteorder: ",
    381     endian = obj.dtype.byteorder

TypeError: hex() argument can't be converted to hex

In [9]: debug
> /home/fperez/usr/opt/lib/python2.5/site-packages/numpy/numarray/functions.py(379)info()
    378         tic = ""
--> 379     print >> output, "data pointer: %s%s" % (hex(obj.ctypes._as_parameter_), extra)
    380     print >> output, "byteorder: ",

Fix
===

planck[numpy]> svn diff
Index: numpy/numarray/functions.py
===================================================================
--- numpy/numarray/functions.py (revision 3874)
+++ numpy/numarray/functions.py (working copy)
@@ -376,7 +376,7 @@
     else:
         extra = ""
         tic = ""
-    print >> output, "data pointer: %s%s" % (hex(obj.ctypes._as_parameter_), extra)
+    print >> output, "data pointer: %s%s" % (hex(obj.ctypes._as_parameter_.value), extra)
     print >> output, "byteorder: ",
     endian = obj.dtype.byteorder
     if endian in ['|','=']:

Question
========

any objection if I commit this? Since I don't really touch the codebase often, I'd rather ask the real core people. I also don't know if it's really the right thing to do, I just tabbed into the object and picked what seemed to be the most reasonable answer. It's trivial, but I'd rather double check.
Cheers,

f

From robert.kern at gmail.com  Tue Jun 19 19:53:52 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 19 Jun 2007 18:53:52 -0500
Subject: [Numpy-discussion] Bugfix for numpy.info bug
In-Reply-To:
References:
Message-ID: <46786C90.3030701@gmail.com>

Fernando Perez wrote:
> Question
> ========
>
> any objection if I commit this? Since I don't really touch the
> codebase often, I'd rather ask the real core people. I also don't
> know if it's really the right thing to do, I just tabbed into the
> object and picked what seemed to be the most reasonable answer. It's
> trivial, but I'd rather double check.

Go for it.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
  -- Umberto Eco

From fperez.net at gmail.com  Tue Jun 19 20:30:40 2007
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 19 Jun 2007 18:30:40 -0600
Subject: [Numpy-discussion] Bugfix for numpy.info bug
In-Reply-To: <46786C90.3030701@gmail.com>
References: <46786C90.3030701@gmail.com>
Message-ID:

On 6/19/07, Robert Kern wrote:
> Go for it.

Done, thanks.

Cheers,

f

From torgil.svensson at gmail.com  Wed Jun 20 04:35:49 2007
From: torgil.svensson at gmail.com (Torgil Svensson)
Date: Wed, 20 Jun 2007 10:35:49 +0200
Subject: [Numpy-discussion] annoying numpy string to float conversion behaviour
Message-ID:

Hi

Is there a reason for numpy.float not to convert its own string representation correctly?

Python 2.5.1 (r251:54863, Apr 18 2007, 08:51:08) [MSC v.1310 32 bit (Intel)] on win32
>>> import numpy
>>> numpy.__version__
'1.0.3'
>>> numpy.float("1.0")
1.0
>>> numpy.nan
-1.#IND
>>> numpy.float("-1.#IND")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
    numpy.float("-1.#IND")
ValueError: invalid literal for float(): -1.#IND
>>>

Also, nan and -nan are represented differently for different float to string conversion methods. I guess the added zeros are a bug somewhere.

>>> str(nan)
'-1.#IND'
>>> "%f" % nan
'-1.#IND00'
>>> str(-nan)
'1.#QNAN'
>>> "%f" % -nan
'1.#QNAN0'

This is a problem when floats are stored in text-files that are later read to be numerically processed. For now I use the following to convert the numbers.

special_numbers=dict([('-1.#INF',-inf),('1.#INF',inf),
                      ('-1.#IND',nan),('-1.#IND00',nan),
                      ('1.#QNAN',-nan),('1.#QNAN0',-nan)])
def string_to_number(x):
    if x in special_numbers:
        return special_numbers[x]
    return float(x) if ("." in x) or ("e" in x) else int(x)

Is there a simpler way that I missed?

Best Regards,

//Torgil

From mforbes at physics.ubc.ca  Wed Jun 20 04:38:56 2007
From: mforbes at physics.ubc.ca (Michael McNeil Forbes)
Date: Wed, 20 Jun 2007 01:38:56 -0700
Subject: [Numpy-discussion] Suppressing "nesting" (recursion, descent) in array construction.
References:
Message-ID: <980D0DC5-4D71-4AD2-8847-F5F758585343@physics.ubc.ca>

Hi,

I have a list of tuples that I am using as keys and I would like to sort this along with some other arrays using argsort. How can I do this? I would like to do something like:

# These are constructed using lists because they accumulate using append()
data = [1.0, 3.0]
keys = [('a',1),('b',2)]

# Convert to arrays for indexing
data = array(data)
keys = array(keys)  # <-- Converts to a 2d array rather than 1d array of tuples.
inds = argsort(data)
data[:] = data[inds]
keys[:] = keys[inds]

It seems there should be some way of specifying to the array constructor not to 'descend' (perhaps by specifying the desired dimensions of the final array or something) but I cannot find a nice way around this.

Any suggestions?

Thanks,
Michael.

From matthieu.brucher at gmail.com  Wed Jun 20 04:45:14 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Wed, 20 Jun 2007 10:45:14 +0200
Subject: [Numpy-discussion] annoying numpy string to float conversion behaviour
In-Reply-To:
References:
Message-ID:

Hi,

This was discussed some time ago (I started that thread because I had exactly the same problem). numpy is not responsible for this, Python is. Python uses the C standard library, and in Microsoft's C runtime NaN and Inf can be displayed, but not read back from a string, so this is the behaviour you see here. Wait for Python 3k...

Matthieu

2007/6/20, Torgil Svensson :
>
> Hi
>
> Is there a reason for numpy.float not to convert its own string
> representation correctly?
>
> Python 2.5.1 (r251:54863, Apr 18 2007, 08:51:08) [MSC v.1310 32 bit
> (Intel)] on win32
> >>> import numpy
> >>> numpy.__version__
> '1.0.3'
> >>> numpy.float("1.0")
> 1.0
> >>> numpy.nan
> -1.#IND
> >>> numpy.float("-1.#IND")
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>     numpy.float("-1.#IND")
> ValueError: invalid literal for float(): -1.#IND
> >>>
>
> Also, nan and -nan are represented differently for different float to
> string conversion methods. I guess the added zeros are a bug
> somewhere.
> >>> str(nan)
> '-1.#IND'
> >>> "%f" % nan
> '-1.#IND00'
> >>> str(-nan)
> '1.#QNAN'
> >>> "%f" % -nan
> '1.#QNAN0'
>
> This is a problem when floats are stored in text-files that are later
> read to be numerically processed. For now I use the following to
> convert the numbers.
>
> special_numbers=dict([('-1.#INF',-inf),('1.#INF',inf),
>                       ('-1.#IND',nan),('-1.#IND00',nan),
>                       ('1.#QNAN',-nan),('1.#QNAN0',-nan)])
> def string_to_number(x):
>     if x in special_numbers:
>         return special_numbers[x]
>     return float(x) if ("." in x) or ("e" in x) else int(x)
>
> Is there a simpler way that I missed?
>
> Best Regards,
>
> //Torgil
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From wright at esrf.fr  Wed Jun 20 05:58:02 2007
From: wright at esrf.fr (Jon Wright)
Date: Wed, 20 Jun 2007 11:58:02 +0200
Subject: [Numpy-discussion] Radix sort?
In-Reply-To:
References:
Message-ID: <4678FA2A.2030104@esrf.fr>

> "Charles R Harris" wrote:
>
> Straight radix sort might be an interesting option for some things.
> However, its performance can depend on whether the input data is random or
> not and it takes up more space than merge sort. Other potential drawbacks
> arise from the bit twiddling needed for signed numbers and floats, the
> former solved by converting to offset binary numbers (complement the sign
> bit), and the latter in the way your links indicate, but both leading to a
> proliferation of special cases. Maintaining byte order and byte addressing
> portability between cpu architectures might also require masking and
> shifting that will add computational expense and may lead to more special
> cases for extended precision floats and so on. That said, I would be curious
> to see how it works out if you want to give it a try.
I agree completely about the proliferation of special cases, and the mess that will make. This radix sort is bad for tiny arrays, but good for big random ones (there is no insertion sort either?). An intelligent algorithm chooser is needed, something like an "atlas"/"fftw" but for sorting, which has been invented already it seems. Platform and datatype and even the data themselves seem to be important, eg:

http://www.spiral.net/software/sorting.html
http://www.cgo.org/cgo2004/papers/09_cgo04.pdf

Seems like a significant amount of work - and for the numpy case it should work with strided/sliced arrays without copying. Is there a list somewhere of the numpy numeric types, together with their binary representations on all of the numpy supported platforms? I'm guessing integers are almost always:
[signed/unsigned] [8|16|32|64 bits] [big|little endian]
... and that many popular platforms only use ieee754 floats and doubles, which might be big or little endian. Is there an important case I miss, such as decimal hardware?

The experimental observation is that the code from Michael Herf appears to win for float32 for random arrays >1024 elements or sorted arrays >2M elements, compared to any of the 3 algorithms already in numpy (ymmv). The methods could also have a positive impact on the numpy.histogram function for the smaller data types, and also lead to other order statistics, like median, trimeans and n-th largest, with a performance improvement.

Since sorting is one of the most studied problems in all of computer science, I am reluctant to start writing a new library! Does anyone know of some good libraries we could try out?

Best,

Jon

From faltet at carabos.com  Wed Jun 20 07:57:42 2007
From: faltet at carabos.com (Francesc Altet)
Date: Wed, 20 Jun 2007 13:57:42 +0200
Subject: [Numpy-discussion] Suppressing "nesting" (recursion, descent) in array construction.
In-Reply-To: <980D0DC5-4D71-4AD2-8847-F5F758585343@physics.ubc.ca>
References: <980D0DC5-4D71-4AD2-8847-F5F758585343@physics.ubc.ca>
Message-ID: <1182340662.2709.11.camel@carabos.com>

El dc 20 de 06 del 2007 a les 01:38 -0700, en/na Michael McNeil Forbes va escriure:
> Hi,
>
> I have a list of tuples that I am using as keys and I would like to
> sort this along with some other arrays using argsort. How can I do
> this? I would like to do something like:
>
> # These are constructed using lists because they accumulate using
> append()
> data = [1.0, 3.0]
> keys = [('a',1),('b',2)]
>
> # Convert to arrays for indexing
> data = array(data)
> keys = array(keys)  # <-- Converts to a 2d array rather than 1d array
> of tuples.
>
> inds = argsort(data)
> data[:] = data[inds]
> keys[:] = keys[inds]
>
> It seems there should be some way of specifying to the array
> constructor not to 'descend' (perhaps by specifying the desired
> dimensions of the final array or something) but I cannot find a nice
> way around this.

Here is a possible approach using recarrays:

In [54]:data = [3.0, 1.0]
In [55]:keys = [('a',1),('b',2)]
In [56]:tmp=numpy.array(keys, dtype="S1,i4")
In [57]:dest=numpy.empty(shape=len(keys), dtype="S1,i4,f8")
In [58]:dest['f0'] = tmp['f0']
In [59]:dest['f1'] = tmp['f1']
In [60]:dest['f2'] = data
In [61]:dest
Out[61]:
array([('a', 1, 3.0), ('b', 2, 1.0)],
      dtype=[('f0', '|S1'), ('f1', '<i4'), ('f2', '<f8')])
In [62]:dest.sort(order='f2')
In [63]:dest
Out[63]:
array([('b', 2, 1.0), ('a', 1, 3.0)],
      dtype=[('f0', '|S1'), ('f1', '<i4'), ('f2', '<f8')])

If you want to retrieve the keys and data from the dest recarray afterwards, that's easy:

In [111]:data2=dest['f2'].tolist()
In [112]:keys2=dest.getfield('S1,i4').tolist()
In [113]:data2
Out[113]:[1.0, 3.0]
In [114]:keys2
Out[114]:[('b', 2), ('a', 1)]

Cheers,

-- 
Francesc Altet | Be careful about using the following code --
Carabos Coop. V. | I've only proven that it works,
www.carabos.com | I haven't tested it. -- Donald Knuth

From: charlesr.harris at gmail.com (Charles R Harris)
Subject: [Numpy-discussion] Radix sort?
In-Reply-To: <4678FA2A.2030104@esrf.fr>
References: <4678FA2A.2030104@esrf.fr>
Message-ID:

On 6/20/07, Jon Wright wrote:
>
> > "Charles R Harris" wrote:
> >
> > Straight radix sort might be an interesting option for some things.
> > However, its performance can depend on whether the input data is random or
> > not and it takes up more space than merge sort. Other potential drawbacks
> > arise from the bit twiddling needed for signed numbers and floats, the
> > former solved by converting to offset binary numbers (complement the sign
> > bit), and the latter in the way your links indicate, but both leading to a
> > proliferation of special cases. Maintaining byte order and byte addressing
> > portability between cpu architectures might also require masking and
> > shifting that will add computational expense and may lead to more special
> > cases for extended precision floats and so on. That said, I would be curious
> > to see how it works out if you want to give it a try.
>
> I agree completely about the proliferation of special cases, and the mess
> that will make. This radix sort is bad for tiny arrays, but good for big
> random ones (there is no insertion sort either?). An intelligent

Quicksort and mergesort go over to insertion sort when the size of the partitions falls below 15 and 20 elements respectively. But you are right, there is no insertion sort per se. There is no shell sort either, and for small arrays that could be effective, although call and array creation overhead is probably the dominant factor there.

> algorithm chooser is needed, something like an "atlas"/"fftw" but for
> sorting, which has been invented already it seems. Platform and datatype
> and even the data themselves seem to be important, eg:
>
> http://www.spiral.net/software/sorting.html
> http://www.cgo.org/cgo2004/papers/09_cgo04.pdf
>
> Seems like a significant amount of work - and for the numpy case it
> should work with strided/sliced arrays without copying. Is there a list

We *do* copy noncontiguous data back and forth at the moment. Fixing that should not be too difficult and might speed up the sorting of arrays on axes other than -1, although cache effects might dominate. Having the data in cache speeds things up enormously, maybe 5x. That is where the recent FFTW stuff and ATLAS make their gains. You can find the current low level sorting code in numpy/numpy/core/src/_sortmodule.c.src, but you will want to look higher up the hierarchy of calls to find where the data is set up before sorting.

> somewhere of the numpy numeric types, together with their binary
> representations on all of the numpy supported platforms? I'm guessing
> integers are almost always:
> [signed/unsigned] [8|16|32|64 bits] [big|little endian]
> ... and that many popular platforms only use ieee754 floats and doubles,
> which might be big or little endian. Is there an important case I miss,
> such as decimal hardware?

I think you are safe assuming binary/twos-complement hardware and ieee754 for the moment. There are also extended precision 80 bit floats that turn up as 80/96/128 bits on different platforms because of address alignment considerations. PPC doesn't (didn't?) have ieee extended precision but has a unique implementation using two doubles, so you could probably just ignore that case and raise an error. There are also 16bit floats and quad precision floats, but I wouldn't worry about those yet. Perhaps it would be best to start with the (unsigned) integers and see how much the radix sort buys you. For image data that might be enough, and images tend to contain a lot of data, although only the least significant bits are truly random.
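For illustration, a numpy-level sketch of the same counting idea for bytes (the helper name is made up, and unlike the C snippet earlier in the thread it returns a sorted copy rather than sorting in place):

import numpy as np

def counting_sort_uint8(a):
    # Histogram the 256 possible byte values, then write each value back
    # out in ascending order -- no comparisons needed.
    counts = np.bincount(a, minlength=256)
    return np.repeat(np.arange(256, dtype=np.uint8), counts)

a = np.random.randint(0, 256, size=10000).astype(np.uint8)
assert (counting_sort_uint8(a) == np.sort(a)).all()

For wider integer types the same pass would be applied one byte at a time from least to most significant digit, which is where the endianness bookkeeping mentioned above comes in.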
> The experimental observation is that the code from Michael Herf appears
> to win for float32 for random arrays >1024 elements or sorted arrays >2M
> elements, compared to any of the 3 algorithms already in numpy (ymmv). The
> methods could also have a positive impact on the numpy.histogram
> function for the smaller data types, and also lead to other order
> statistics, like median, trimeans and n-th largest, with a performance
> improvement.

Special fast algorithms for means and n-th largest could be useful depending on how often people use them. Of the current sorting routines, I think quicksort is most commonly used, followed by mergesort. I don't know of anyone who uses heapsort. Heapsort restricted to just building a heap might have some use for getting the n largest items in big arrays. That would buy you about 50% in execution time, making it a little bit better than a complete quicksort.

> Since sorting is one of the most studied problems in all of computer
> science, I am reluctant to start writing a new library! Does anyone know
> of some good libraries we could try out?

Apart from the standards that we have, sorting tends to be adapted to particular circumstances. Another thing to think about is the sorting of really big data sets, maybe memmapped arrays where disk access will be a dominant feature. I don't think any of the current algorithms are really suitable for that case. Maybe one of the folks using 12 GB data arrays can comment? Such an algorithm might also speed the sorting of smaller arrays due to cache effects.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From charlesr.harris at gmail.com  Wed Jun 20 12:00:49 2007
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Wed, 20 Jun 2007 10:00:49 -0600
Subject: [Numpy-discussion] Suppressing "nesting" (recursion, descent) in array construction.
In-Reply-To: <980D0DC5-4D71-4AD2-8847-F5F758585343@physics.ubc.ca>
References: <980D0DC5-4D71-4AD2-8847-F5F758585343@physics.ubc.ca>
Message-ID:

On 6/20/07, Michael McNeil Forbes wrote:
>
> Hi,
>
> I have a list of tuples that I am using as keys and I would like to
> sort this along with some other arrays using argsort. How can I do
> this? I would like to do something like:

You might want the keys in an object array, otherwise you end up with strings for all the values. Why are they keys? Do you want to sort on them also? Anyway, if you use take(keys, inds, axis=0) you will get an array with the rows containing the keys rearranged as I think you want.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From oliphant.travis at ieee.org  Wed Jun 20 14:23:43 2007
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Wed, 20 Jun 2007 12:23:43 -0600
Subject: [Numpy-discussion] Is this an indexing bug?
In-Reply-To: <20070619112852.GJ20362@mentat.za.net>
References: <1181739407.629903.108000@i38g2000prf.googlegroups.com> <4673F889.1020902@gmx.net> <4677ADCD.2020606@gmx.net> <4677AC7E.5050704@molden.no> <4677B159.9030900@molden.no> <20070619112852.GJ20362@mentat.za.net>
Message-ID: <467970AF.2070503@ieee.org>

Stefan van der Walt wrote:
> On Tue, Jun 19, 2007 at 12:35:05PM +0200, Sturla Molden wrote:
>
>> On 6/19/2007 12:14 PM, Sturla Molden wrote:
>>
>>> h[0,:,numpy.arange(14)] is a case of "advanced indexing". You can also
>>> see that
>>>
>>> >>> h[0,:,[0,1,2,3,4,5,6,7,8,9,10,11,12,13]].shape
>>> (14, 4)
>>>
>> Another way to explain this is that numpy.arange(14) and
>> [0,1,2,3,4,5,6,7,8,9,10,11,12,13] is a sequence (i.e. iterator). So
>> when NumPy iterates the sequence, the iterator yields a single integer,
>> let's call it I. Using this integer as an index to h, gives a = h[0,:,I]
>> which has shape=(4,). This gives us a sequence of arrays of length 4. In
>>
> If you follow this analogy,
>
> x = N.arange(100).reshape((10,10))
> x[:,N.arange(5)].shape
>
> should be (5, 10), while in reality it is (10, 5).

No, in this case, there is no ambiguity regarding where to put the sub-space, so it is put in the "expected" position. It could be argued that when a single integer is used in one of the indexing dimensions then there is also no ambiguity --- but the indexing code does not check for that special case.

There is no bug here as far as I can tell. It is just perhaps somewhat unexpected behavior of a general rule about how "indirect" or "advanced" indexing is handled. You can always do h[0][:,arange(14)] to get the result you seem to want.

-Travis

> Cheers
> Stéfan
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion

From oliphant.travis at ieee.org  Wed Jun 20 14:38:17 2007
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Wed, 20 Jun 2007 12:38:17 -0600
Subject: [Numpy-discussion] Is this an indexing bug?
In-Reply-To: <4677AC7E.5050704@molden.no>
References: <1181739407.629903.108000@i38g2000prf.googlegroups.com> <4673F889.1020902@gmx.net> <4677ADCD.2020606@gmx.net> <4677AC7E.5050704@molden.no>
Message-ID: <46797419.9080802@ieee.org>

Sturla Molden wrote:
> On 6/19/2007 12:19 PM, Sven Schreiber wrote:
>
>> To be more specific, I would expect shape==(4,14).
>
> >>> h = numpy.zeros((1,4,14))
> >>> h[0,:,numpy.arange(14)].shape
> (14, 4)
> >>> h[0,:,:].shape
> (4, 14)
>
> h[0,:,numpy.arange(14)] is a case of "advanced indexing". You can also
> see that
>
> >>> h[0,:,[0,1,2,3,4,5,6,7,8,9,10,11,12,13]].shape
> (14, 4)
>
> Citing from Travis' book, page 83:
>
> "Example 2: Now let X.shape be (10,20,30,40,50) and suppose ind1 and
> ind2 are broadcastable to the shape (2,3,4). Then X[:,ind1,ind2] has
> shape (10,2,3,4,40,50) because the (20,30)-shaped subspace from X has
> been replaced with the (2,3,4) subspace from the indices. However,
> X[:,ind1,:,ind2,:] has shape (2,3,4,10,30,50) because there is no
> unambiguous place to drop in the indexing subspace, thus it is tacked-on
> to the beginning. It is always possible to use .transpose() to move the
> subspace anywhere desired. This example cannot be replicated using take."
>
> So I think this strange behaviour is actually correct.

Yes, Sturla is right. Now, obviously, "advanced" indexing was not designed with this particular case in mind. But, it is the expected outcome given the rule. Let's look at the application of the rule to this particular case.

h is a (1,4,14) array. Now, ind1 is 0 and ind2 is [0,1,...,13]. The rules for "advanced" indexing apply because ind2 is a list. Thus, ind1 is broadcast to ind2 which means ind1 acts as if it were [0,0,...,0]. So, the indexing implies an extraction of a (14,)-shaped array from the (1,14)-shaped sub-space. Now, where should this (14,)-shaped array be attached in the result?
The rule states that if ind1 and ind2 are next to each other then it will replace the (1,14)-shaped portion of the array. In this case, however, they are not right next to each other. Thus, there is an ambiguity regarding where to place the (14,)-shaped array. The rule states that when this kind of ambiguity arises (notice there is no special-case checking to see if ind1 or ind2 comes from a scalar), the resulting sub-space is placed at the beginning. Thus, the (1,4,14)-shaped array becomes a (14,4)-shaped array on selection using h[ind1,:,ind2].

This behavior follows the rule precisely. Admittedly it is a bit unexpected in this instance, but it does follow a specific rule that can be explained once you understand it, and generalizes to all kinds of crazy situations where it is more difficult to see what the behavior "should" be.

One may complain that h[ind1][:,ind2] != h[ind1,:,ind2] but that is generally true when using slices or lists for indexing.

-Travis

From oliphant.travis at ieee.org  Wed Jun 20 14:43:35 2007
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Wed, 20 Jun 2007 12:43:35 -0600
Subject: [Numpy-discussion] Is this an indexing bug?
In-Reply-To: <4677C8F9.9030308@molden.no>
References: <1181739407.629903.108000@i38g2000prf.googlegroups.com> <4673F889.1020902@gmx.net> <4677ADCD.2020606@gmx.net> <4677AC7E.5050704@molden.no> <4677B159.9030900@molden.no> <20070619112852.GJ20362@mentat.za.net> <4677C68F.7070409@molden.no> <4677C8F9.9030308@molden.no>
Message-ID: <46797557.4090507@ieee.org>

Sturla Molden wrote:
> >>> x = numpy.arange(100).reshape((1,10,10))
>
> >>> x[0,:,numpy.arange(5)].shape
> (5, 10)
>
> >>> x[:,:,numpy.arange(5)].shape
> (1, 10, 5)
>
> It looks like a bug that needs to be squashed.

These are both correct. See my previous posts about the rule.

The first case is exactly the example we saw before: we start with a (1,10,10)-shaped array and replace the first and last-dimension (1,10)-shaped array with a (5,)-shaped array. Not having a clear place to put the extracted (5,)-shaped subspace, it is tacked on to the front.

In the second example, the last-dimension (10,)-shaped sub-space is replaced with a (5,)-shaped sub-space. There is no ambiguity in this case and the result is a (1,10,5)-shaped array.

There is no bug here. Perhaps unexpected behavior with "advanced indexing" combined with single-integer indexing in separated dimensions, but no bug. The result does follow an understandable and generalizable rule.

In addition, you can get what you seem to want very easily using x[0][:,numpy.arange(5)]

-Travis

From svetosch at gmx.net  Wed Jun 20 16:30:12 2007
From: svetosch at gmx.net (Sven Schreiber)
Date: Wed, 20 Jun 2007 21:30:12 +0100
Subject: [Numpy-discussion] Is this an indexing bug?
In-Reply-To: <46797557.4090507@ieee.org>
References: <1181739407.629903.108000@i38g2000prf.googlegroups.com> <4673F889.1020902@gmx.net> <4677ADCD.2020606@gmx.net> <4677AC7E.5050704@molden.no> <4677B159.9030900@molden.no> <20070619112852.GJ20362@mentat.za.net> <4677C68F.7070409@molden.no> <4677C8F9.9030308@molden.no> <46797557.4090507@ieee.org>
Message-ID: <46798E54.2040004@gmx.net>

Travis Oliphant schrieb:
> Sturla Molden wrote:
>> >>> x = numpy.arange(100).reshape((1,10,10))
>>
>> >>> x[0,:,numpy.arange(5)].shape
>> (5, 10)
>>
>> >>> x[:,:,numpy.arange(5)].shape
>> (1, 10, 5)
>>
>> It looks like a bug that needs to be squashed.
>>
> These are both correct. See my previous posts about the rule.
>
> The first case is exactly the example we saw before: we start with a
> (1,10,10)-shaped array and replace the first and last-dimension
> (1,10)-shaped array with a (5,)-shaped array. Not having a clear place
> to put the extracted (5,)-shaped subspace, it is tacked on to the front.
>
> In the second example, the last-dimension (10,)-shaped sub-space is
> replaced with a (5,)-shaped sub-space. There is no ambiguity in this
> case and the result is a (1,10,5)-shaped array.
>
> There is no bug here. Perhaps unexpected behavior with "advanced
> indexing" combined with single-integer indexing in separated dimensions,
> but no bug. The result does follow an understandable and generalizable
> rule.

Thanks for the explanation. Maybe some of these examples could be added to the relevant section of the book. Although I must say I'm glad that I currently don't need to use this stuff, I'm not sure I would get it right :-)

cheers,
sven

From sturla at molden.no  Wed Jun 20 15:35:19 2007
From: sturla at molden.no (Sturla Molden)
Date: Wed, 20 Jun 2007 21:35:19 +0200 (CEST)
Subject: [Numpy-discussion] Is this an indexing bug?
In-Reply-To: <46797557.4090507@ieee.org>
References: <1181739407.629903.108000@i38g2000prf.googlegroups.com> <4673F889.1020902@gmx.net> <4677ADCD.2020606@gmx.net> <4677AC7E.5050704@molden.no> <4677B159.9030900@molden.no> <20070619112852.GJ20362@mentat.za.net> <4677C68F.7070409@molden.no> <4677C8F9.9030308@molden.no> <46797557.4090507@ieee.org>
Message-ID: <2195.89.8.15.221.1182368119.squirrel@webmail.uio.no>

> These are both correct. See my previous posts about the rule.
>
> The first case is exactly the example we saw before: we start with a
> (1,10,10)-shaped array and replace the first and last-dimension
> (1,10)-shaped array with a (5,)-shaped array. Not having a clear place
> to put the extracted (5,)-shaped subspace, it is tacked on to the front.
>
> In the second example, the last-dimension (10,)-shaped sub-space is
> replaced with a (5,)-shaped sub-space. There is no ambiguity in this
> case and the result is a (1,10,5)-shaped array.
>
> There is no bug here. Perhaps unexpected behavior with "advanced
> indexing" combined with single-integer indexing in separated dimensions,
> but no bug. The result does follow an understandable and generalizable
> rule.

Travis,

I agree that there is no bug here, as the software follows the specified behaviour. But it may be debated whether the specified behaviour is sensible or not. I think the source of the confusion is the different behaviour of 'advanced indexing' in NumPy and Matlab. This is what Matlab does:

>> x = zeros(1,10,10);
>> size( x(1,:,[1 2 3 4 5]) )
ans =
     1    10     5
>> size( x(:,:,[1 2 3 4 5]) )
ans =
     1    10     5

It might be worth pointing this out in the NumPy documentation (on the SciPy web site and in your book), so users don't expect similar behaviour of 'advanced indexing' in NumPy and Matlab.

Regards,
Sturla Molden

From mforbes at physics.ubc.ca  Wed Jun 20 15:43:27 2007
From: mforbes at physics.ubc.ca (Michael McNeil Forbes)
Date: Wed, 20 Jun 2007 12:43:27 -0700
Subject: [Numpy-discussion] Suppressing "nesting" (recursion, descent) in array construction.
In-Reply-To: <1182340662.2709.11.camel@carabos.com>
References: <980D0DC5-4D71-4AD2-8847-F5F758585343@physics.ubc.ca> <1182340662.2709.11.camel@carabos.com>
Message-ID: <605BFAE4-2FF2-4C6D-BA35-9DACAA8B4CA4@physics.ubc.ca>

Hi,

That is a little more complicated than I want, but it shows me the solution: Construct the array of the desired shape first, then fill it.
data = [1.0, 3.0]
keys = [('a',1),('b',2)]

# Convert to arrays for indexing
data_array = array(data)
key_array = empty(len(keys), dtype=tuple)
key_array[:] = keys[:]

inds = argsort(data)
data_array[:] = data_array[inds]
key_array[:] = key_array[inds]

Thanks!
Michael.

On 20 Jun 2007, at 4:57 AM, Francesc Altet wrote:
> El dc 20 de 06 del 2007 a les 01:38 -0700, en/na Michael McNeil Forbes
> va escriure:
>> Hi,
>>
>> I have a list of tuples that I am using as keys and I would like to
>> sort this along with some other arrays using argsort. How can I do
>> this? I would like to do something like:
>>
>> # These are constructed using lists because they accumulate using
>> append()
>> data = [1.0, 3.0]
>> keys = [('a',1),('b',2)]
>>
>> # Convert to arrays for indexing
>> data = array(data)
>> keys = array(keys)  # <-- Converts to a 2d array rather than 1d array
>> of tuples.
>>
>> inds = argsort(data)
>> data[:] = data[inds]
>> keys[:] = keys[inds]
>>
>> It seems there should be some way of specifying to the array
>> constructor not to 'descend' (perhaps by specifying the desired
>> dimensions of the final array or something) but I cannot find a nice
>> way around this.
>
> Here is a possible approach using recarrays:
>
> In [54]:data = [3.0, 1.0]
> In [55]:keys = [('a',1),('b',2)]
> In [56]:tmp=numpy.array(keys, dtype="S1,i4")
> In [57]:dest=numpy.empty(shape=len(keys), dtype="S1,i4,f8")
> In [58]:dest['f0'] = tmp['f0']
> In [59]:dest['f1'] = tmp['f1']
> In [60]:dest['f2'] = data
> In [61]:dest
> Out[61]:
> array([('a', 1, 3.0), ('b', 2, 1.0)],
>       dtype=[('f0', '|S1'), ('f1', '<i4'), ('f2', '<f8')])
> In [62]:dest.sort(order='f2')
> In [63]:dest
> Out[63]:
> array([('b', 2, 1.0), ('a', 1, 3.0)],
>       dtype=[('f0', '|S1'), ('f1', '<i4'), ('f2', '<f8')])
>
> If you want to retrieve the keys and data from the dest recarray
> afterwards, that's easy:
>
> In [111]:data2=dest['f2'].tolist()
> In [112]:keys2=dest.getfield('S1,i4').tolist()
> In [113]:data2
> Out[113]:[1.0, 3.0]
> In [114]:keys2
> Out[114]:[('b', 2), ('a', 1)]
>
> Cheers,
>
> --
> Francesc Altet | Be careful about using the following code --
> Carabos Coop. V. | I've only proven that it works,
> www.carabos.com | I haven't tested it. -- Donald Knuth

From mforbes at physics.ubc.ca  Wed Jun 20 15:47:43 2007
From: mforbes at physics.ubc.ca (Michael McNeil Forbes)
Date: Wed, 20 Jun 2007 12:47:43 -0700
Subject: [Numpy-discussion] Suppressing "nesting" (recursion, descent) in array construction.
In-Reply-To:
References: <980D0DC5-4D71-4AD2-8847-F5F758585343@physics.ubc.ca>
Message-ID: <1418339D-2A1B-4F26-AB1D-B3A5DC350F04@physics.ubc.ca>

Using take or array or similar operations on the initial list descends, ignoring the tuples and converting the list to a multiple-dimension array:

>>> take(keys,[1,0],axis=0)
array([['b', '2'],
       ['a', '1']], dtype='|S4')

It is sorted as I want, but I can no longer use them as keys. The problem can be solved by creating an empty array first, then copying.

Thanks,
Michael.

> On 6/20/07, Michael McNeil Forbes wrote:
> Hi,
>
> I have a list of tuples that I am using as keys and I would like to
> sort this along with some other arrays using argsort. How can I do
> this? I would like to do something like:
>
> You might want the keys in an object array, otherwise you end up
> with strings for all the values. Why are they keys? Do you want to
> sort on them also? Anyway, if you use take(keys, inds, axis=0) you
> will get an array with the rows containing the keys rearranged as I
> think you want.
>
> Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From faltet at carabos.com  Thu Jun 21 03:43:52 2007
From: faltet at carabos.com (Francesc Altet)
Date: Thu, 21 Jun 2007 09:43:52 +0200
Subject: [Numpy-discussion] Suppressing "nesting" (recursion, descent) in array construction.
In-Reply-To: <605BFAE4-2FF2-4C6D-BA35-9DACAA8B4CA4@physics.ubc.ca>
References: <980D0DC5-4D71-4AD2-8847-F5F758585343@physics.ubc.ca> <1182340662.2709.11.camel@carabos.com> <605BFAE4-2FF2-4C6D-BA35-9DACAA8B4CA4@physics.ubc.ca>
Message-ID: <1182411832.2676.2.camel@carabos.com>

El dc 20 de 06 del 2007 a les 12:43 -0700, en/na Michael McNeil Forbes va escriure:
> Hi,
>
> That is a little more complicated than I want, but it shows me the
> solution: Construct the array of the desired shape first, then fill it.
>
> data = [1.0, 3.0]
> keys = [('a',1),('b',2)]
>
> # Convert to arrays for indexing
> data_array = array(data)
> key_array = empty(len(keys),dtype=tuple)
> key_array[:] = keys[:]

the latter two statements can also be written as:

key_array = array(keys, dtype=tuple)

> inds = argsort(data)
> data_array[:] = data_array[inds]
> key_array[:] = key_array[inds]

Yeah, much simpler than my first approach.

Cheers,

-- 
Francesc Altet | Be careful about using the following code --
Carabos Coop. V. | I've only proven that it works,
www.carabos.com | I haven't tested it. -- Donald Knuth

From mforbes at physics.ubc.ca  Thu Jun 21 09:24:30 2007
From: mforbes at physics.ubc.ca (Michael McNeil Forbes)
Date: Thu, 21 Jun 2007 06:24:30 -0700
Subject: [Numpy-discussion] Suppressing "nesting" (recursion, descent) in array construction.
In-Reply-To: <1182411832.2676.2.camel@carabos.com>
References: <980D0DC5-4D71-4AD2-8847-F5F758585343@physics.ubc.ca> <1182340662.2709.11.camel@carabos.com> <605BFAE4-2FF2-4C6D-BA35-9DACAA8B4CA4@physics.ubc.ca> <1182411832.2676.2.camel@carabos.com>
Message-ID: <340483C7-114F-42A5-804F-D075DEBBC66E@physics.ubc.ca>

>> key_array = empty(len(keys),dtype=tuple)
>> key_array[:] = keys[:]
>
> the latter two statements can also be written as:
>
> key_array = array(keys, dtype=tuple)

These are not equivalent:

>>> keys = [('a',1),('b',2)]
>>> key_array = array(keys, dtype=tuple)
>>> key_array
array([[a, 1],
       [b, 2]], dtype=object)

>>> key_array = empty(len(keys),dtype=tuple)
>>> key_array[:] = keys[:]
>>> key_array
array([('a', 1), ('b', 2)], dtype=object)

Thanks,
Michael.

From faltet at carabos.com  Thu Jun 21 11:07:16 2007
From: faltet at carabos.com (Francesc Altet)
Date: Thu, 21 Jun 2007 17:07:16 +0200
Subject: [Numpy-discussion] Suppressing "nesting" (recursion, descent) in array construction.
In-Reply-To: <340483C7-114F-42A5-804F-D075DEBBC66E@physics.ubc.ca>
References: <980D0DC5-4D71-4AD2-8847-F5F758585343@physics.ubc.ca> <1182340662.2709.11.camel@carabos.com> <605BFAE4-2FF2-4C6D-BA35-9DACAA8B4CA4@physics.ubc.ca> <1182411832.2676.2.camel@carabos.com> <340483C7-114F-42A5-804F-D075DEBBC66E@physics.ubc.ca>
Message-ID: <1182438436.2676.23.camel@carabos.com>

El dj 21 de 06 del 2007 a les 06:24 -0700, en/na Michael McNeil Forbes va escriure:
>>> key_array = empty(len(keys),dtype=tuple)
>>> key_array[:] = keys[:]
>>
>> the latter two statements can also be written as:
>>
>> key_array = array(keys, dtype=tuple)
>
> These are not equivalent:
>
> >>> keys = [('a',1),('b',2)]
> >>> key_array = array(keys, dtype=tuple)
> >>> key_array
> array([[a, 1],
>        [b, 2]], dtype=object)
>
> >>> key_array = empty(len(keys),dtype=tuple)
> >>> key_array[:] = keys[:]
> >>> key_array
> array([('a', 1), ('b', 2)], dtype=object)

Oops. You are right.
I think I was fooled by the 'dtype=tuple' argument, which is in fact equivalent to 'dtype=object'.

-- 
Francesc Altet | Be careful about using the following code --
Carabos Coop. V. | I've only proven that it works,
www.carabos.com | I haven't tested it. -- Donald Knuth

From cookedm at physics.mcmaster.ca  Thu Jun 21 15:44:11 2007
From: cookedm at physics.mcmaster.ca (David M. Cooke)
Date: Thu, 21 Jun 2007 15:44:11 -0400
Subject: [Numpy-discussion] annoying numpy string to float conversion behaviour
In-Reply-To:
References:
Message-ID:

On Jun 20, 2007, at 04:35 , Torgil Svensson wrote:

> Hi
>
> Is there a reason for numpy.float not to convert its own string
> representation correctly?

numpy.float is the Python float type, so there's nothing we can do. I am working on adding NaN and Inf support for numpy dtypes, though, so that, for instance, numpy.float64('-1.#IND') would work as expected. I'll put it higher on my priority list :-)

> Python 2.5.1 (r251:54863, Apr 18 2007, 08:51:08) [MSC v.1310 32 bit
> (Intel)] on win32
> >>> import numpy
> >>> numpy.__version__
> '1.0.3'
> >>> numpy.float("1.0")
> 1.0
> >>> numpy.nan
> -1.#IND
> >>> numpy.float("-1.#IND")
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>     numpy.float("-1.#IND")
> ValueError: invalid literal for float(): -1.#IND
> >>>
>
> Also, nan and -nan are represented differently for different float to
> string conversion methods. I guess the added zeros are a bug
> somewhere.
> >>> str(nan)
> '-1.#IND'
> >>> "%f" % nan
> '-1.#IND00'
> >>> str(-nan)
> '1.#QNAN'
> >>> "%f" % -nan
> '1.#QNAN0'
>
> This is a problem when floats are stored in text-files that are later
> read to be numerically processed. For now I use the following to
> convert the numbers.
>
> special_numbers=dict([('-1.#INF',-inf),('1.#INF',inf),
>                       ('-1.#IND',nan),('-1.#IND00',nan),
>                       ('1.#QNAN',-nan),('1.#QNAN0',-nan)])
> def string_to_number(x):
>     if x in special_numbers:
>         return special_numbers[x]
>     return float(x) if ("." in x) or ("e" in x) else int(x)
>
> Is there a simpler way that I missed?
>
> Best Regards,
>
> //Torgil

-- 
|>|\/|<
/------------------------------------------------------------------\
|David M. Cooke              http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca
-------------- next part --------------
A non-text attachment was scrubbed...
Name: PGP.sig
Type: application/pgp-signature
Size: 186 bytes
Desc: This is a digitally signed message part
URL: 

From torgil.svensson at gmail.com  Thu Jun 21 17:45:07 2007
From: torgil.svensson at gmail.com (Torgil Svensson)
Date: Thu, 21 Jun 2007 23:45:07 +0200
Subject: [Numpy-discussion] annoying numpy string to float conversion behaviour
In-Reply-To:
References:
Message-ID:

Do you have any specific insights into Python 3k regarding this? I assume 3k is the next millennium. Maybe they can accept patches before that.

//Torgil

On 6/20/07, Matthieu Brucher wrote:
> Hi,
>
> This was discussed some time ago (I started that thread because I had
> exactly the same problem). numpy is not responsible for this, Python is.
> Python uses the C standard library, and in Microsoft's C runtime NaN and
> Inf can be displayed, but not read back from a string, so this is the
> behaviour you see here.
> Wait for Python 3k...
>
> Matthieu
>
> 2007/6/20, Torgil Svensson :
> >
> > Hi
> >
> > Is there a reason for numpy.float not to convert its own string
> > representation correctly?
> > Python 2.5.1 (r251:54863, Apr 18 2007, 08:51:08) [MSC v.1310 32 bit
> > (Intel)] on win32
> > >>> import numpy
> > >>> numpy.__version__
> > '1.0.3'
> > >>> numpy.float("1.0")
> > 1.0
> > >>> numpy.nan
> > -1.#IND
> > >>> numpy.float("-1.#IND")
> > Traceback (most recent call last):
> >   File "<stdin>", line 1, in <module>
> >     numpy.float("-1.#IND")
> > ValueError: invalid literal for float(): -1.#IND
> > >>>
> >
> > Also, nan and -nan are represented differently for different float to
> > string conversion methods. I guess the added zeros are a bug
> > somewhere.
> > >>> str(nan)
> > '-1.#IND'
> > >>> "%f" % nan
> > '-1.#IND00'
> > >>> str(-nan)
> > '1.#QNAN'
> > >>> "%f" % -nan
> > '1.#QNAN0'
> >
> > This is a problem when floats are stored in text-files that are later
> > read to be numerically processed. For now I use the following to
> > convert the numbers.
> >
> > special_numbers=dict([('-1.#INF',-inf),('1.#INF',inf),
> >                       ('-1.#IND',nan),('-1.#IND00',nan),
> >                       ('1.#QNAN',-nan),('1.#QNAN0',-nan)])
> > def string_to_number(x):
> >     if x in special_numbers:
> >         return special_numbers[x]
> >     return float(x) if ("." in x) or ("e" in x) else int(x)
> >
> > Is there a simpler way that I missed?
> >
> > Best Regards,
> >
> > //Torgil
> > _______________________________________________
> > Numpy-discussion mailing list
> > Numpy-discussion at scipy.org
> > http://projects.scipy.org/mailman/listinfo/numpy-discussion
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion

From torgil.svensson at gmail.com  Thu Jun 21 17:48:38 2007
From: torgil.svensson at gmail.com (Torgil Svensson)
Date: Thu, 21 Jun 2007 23:48:38 +0200
Subject: [Numpy-discussion] annoying numpy string to float conversion behaviour
In-Reply-To:
References:
Message-ID:

On 6/21/07, David M. Cooke wrote:
> On Jun 20, 2007, at 04:35 , Torgil Svensson wrote:
>
> > Hi
> >
> > Is there a reason for numpy.float not to convert its own string
> > representation correctly?
>
> numpy.float is the Python float type, so there's nothing we can do. I
> am working on adding NaN and Inf support for numpy dtypes, though, so
> that, for instance, numpy.float64('-1.#IND') would work as expected.
> I'll put it higher on my priority list :-)

Great news. Thanks!

//Torgil

> > Python 2.5.1 (r251:54863, Apr 18 2007, 08:51:08) [MSC v.1310 32 bit
> > (Intel)] on win32
> > >>> import numpy
> > >>> numpy.__version__
> > '1.0.3'
> > >>> numpy.float("1.0")
> > 1.0
> > >>> numpy.nan
> > -1.#IND
> > >>> numpy.float("-1.#IND")
> > Traceback (most recent call last):
> >   File "<stdin>", line 1, in <module>
> >     numpy.float("-1.#IND")
> > ValueError: invalid literal for float(): -1.#IND
> > >>>
> >
> > Also, nan and -nan are represented differently for different float to
> > string conversion methods. I guess the added zeros are a bug
> > somewhere.
> > >>> str(nan)
> > '-1.#IND'
> > >>> "%f" % nan
> > '-1.#IND00'
> > >>> str(-nan)
> > '1.#QNAN'
> > >>> "%f" % -nan
> > '1.#QNAN0'
> >
> > This is a problem when floats are stored in text-files that are later
> > read to be numerically processed. For now I use the following to
> > convert the numbers.
> >
> > special_numbers=dict([('-1.#INF',-inf),('1.#INF',inf),
> >                       ('-1.#IND',nan),('-1.#IND00',nan),
> >                       ('1.#QNAN',-nan),('1.#QNAN0',-nan)])
> > def string_to_number(x):
> >     if x in special_numbers:
> >         return special_numbers[x]
> >     return float(x) if ("." in x) or ("e" in x) else int(x)
> >
> > Is there a simpler way that I missed?
> >
> > Best Regards,
> >
> > //Torgil
> > _______________________________________________
> > Numpy-discussion mailing list
> > Numpy-discussion at scipy.org
> > http://projects.scipy.org/mailman/listinfo/numpy-discussion
>
> --
> |>|\/|<
> /------------------------------------------------------------------\
> |David M. Cooke              http://arbutus.physics.mcmaster.ca/dmc/
> |cookedm at physics.mcmaster.ca

From robert.kern at gmail.com  Thu Jun 21 18:47:50 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 21 Jun 2007 17:47:50 -0500
Subject: [Numpy-discussion] annoying numpy string to float conversion behaviour
In-Reply-To:
References:
Message-ID: <467B0016.6010509@gmail.com>

Torgil Svensson wrote:
> Do you have any specific insights into Python 3k regarding this? I assume 3k
> is the next millennium. Maybe they can accept patches before that.

Work on Python 3.0 is going on now. An alpha should be out this year. The deadline for proposing major changes requiring a PEP has passed, but this particular issue isn't big enough to be a major concern for 3.0. It has been discussed for Python 2.6, even. Making float types parse/emit standard string representations for NaNs and infs could probably go in if you were to provide an implementation and work out all of the bugs and cross-platform issues.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
  -- Umberto Eco

From python at koepsell.de  Fri Jun 22 04:19:07 2007
From: python at koepsell.de (Kilian Koepsell)
Date: Fri, 22 Jun 2007 01:19:07 -0700
Subject: [Numpy-discussion] handling of inf
Message-ID: <2AE83995-EEC0-4ADC-9249-EAF515F52360@koepsell.de>

Hi,

I was wondering if the numpy function 'isinf' should return True for complex infinity. I encountered the following behavior that could be considered a bug:

>>> import numpy as N
>>> N.isinf(1j*N.inf)
True
>>> 1j/(N.array(1)-1)
(nan+nanj)
>>> N.isinf(1j/(N.array(1)-1))
False
>>> 1j/(N.array([1])-1)
array([ nan +nanj])
>>> N.isinf(1j/(N.array([1])-1))
array([False], dtype=bool)

Thanks,
Kilian

-- 
Kilian Koepsell
Redwood Center for Theoretical Neuroscience
Helen Wills Neuroscience Institute, UC Berkeley
132 Barker Hall, #3190, Berkeley, CA 94720-3190

From klemm at phys.ethz.ch  Fri Jun 22 09:29:56 2007
From: klemm at phys.ethz.ch (Hanno Klemm)
Date: Fri, 22 Jun 2007 15:29:56 +0200
Subject: [Numpy-discussion] effectively computing variograms with numpy
Message-ID:

Hi,

I have an array which represents regularly spaced spatial data. I now would like to compute the (semi-)variogram, i.e.

gamma(h) = 1/N(h) \sum_{i,j\in N(h)} (z_i - z_j)**2,

where h is the (approximate) spatial difference between the measurements z_i, and z_j, and N(h) is the number of measurements with distance h.

However, I only want to calculate the thing along the rows and columns. The naive approach involves two for loops and a lot of searching, which becomes painfully slow on large data sets. Are there better implementations around in numpy/scipy or does anyone have a good idea of how to do that more efficiently? I looked around a bit but couldn't find anything.
Hanno

From tim.hochberg at ieee.org  Fri Jun 22 10:51:20 2007
From: tim.hochberg at ieee.org (Timothy Hochberg)
Date: Fri, 22 Jun 2007 07:51:20 -0700
Subject: [Numpy-discussion] effectively computing variograms with numpy
In-Reply-To:
References:
Message-ID:

On 6/22/07, Hanno Klemm wrote:
>
> Hi,
>
> I have an array which represents regularly spaced spatial data. I now
> would like to compute the (semi-)variogram, i.e.
>
> gamma(h) = 1/N(h) \sum_{i,j\in N(h)} (z_i - z_j)**2,
>
> where h is the (approximate) spatial difference between the
> measurements z_i, and z_j, and N(h) is the number of measurements with
> distance h.
>
> However, I only want to calculate the thing along the rows and
> columns. The naive approach involves two for loops and a lot of
> searching, which becomes painfully slow on large data sets. Are there
> better implementations around in numpy/scipy or does anyone have a
> good idea of how to do that more efficiently? I looked around a bit but
> couldn't find anything.

Can you send the naive code as well. It's often easier to see what's going on with code in addition to the equations.

Regards.

-tim

-- 
.  __
.   |-\
.
.  tim.hochberg at ieee.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From klemm at phys.ethz.ch  Fri Jun 22 11:18:12 2007
From: klemm at phys.ethz.ch (Hanno Klemm)
Date: Fri, 22 Jun 2007 17:18:12 +0200
Subject: [Numpy-discussion] effectively computing variograms with numpy
In-Reply-To:
References:
Message-ID:

Tim,

this is the best I could come up with until now:

import numpy as N

def naive_variogram(data, binsize=100., stepsize=5.):
    """calculates variograms along the rows and columns of the given
    array which is supposed to contain equally spaced data with
    stepsize stepsize"""

    # how many elements fit in one bin?
    binlength = int(binsize/stepsize)

    # bins in x- and y-direction (+1 for the possible
    # elements larger than int(binsize/stepsize)):
    x_bins = (data.shape[1])/binlength+1
    y_bins = (data.shape[0])/binlength+1

    # arrays to store the results in
    x_result = N.zeros(x_bins, dtype=float)
    y_result = N.zeros(y_bins, dtype=float)

    # arrays to get the denominators right
    x_denominators = N.zeros(x_bins, dtype=float)
    y_denominators = N.zeros(x_bins, dtype=float)

    # what is the last index?
    xlast = data.shape[1]
    ylast = data.shape[0]

    for i in range(data.shape[0]):
        datasquared = (data - data[:,i])**2
        # number of bins to fill until the end of the array:
        numbins = 1 + (xlast - i)/binlength
        for j in range(numbins):
            x_result[j] += \
                datasquared[:,i+1+j*binlength:i+1+(j+1)*binlength].sum()
            x_denominators[j] += \
                datasquared[:,i+1+j*binlength:i+1+(j+1)*binlength].size
        try:
            # Is there a rest?
            x_result[numbins] += datasquared[:,i+1+numbins*binlength:].sum()
            x_denominators[numbins] += datasquared[:,i+1+numbins*binlength:].size
        except IndexError:
            pass

    x_result /= x_denominators

    for i in range(data.shape[1]):
        datasquared = (data - data[i])**2
        # number of bins to fill until the end of the array:
        numbins = 1 + (ylast - i)/binlength
        # Fill the bins
        for j in range(numbins):
            y_result[j] += datasquared[i+1+j*binlength:i+1+(j+1)*binlength].sum()
            y_denominators[j] += datasquared[i+1+j*binlength:i+1+(j+1)*binlength].size
        try:
            # Is there a rest?
            y_result[numbins] += datasquared[:,i+1+numbins*binlength:].sum()
            y_denominators[numbins] += datasquared[:,i+1+numbins*binlength:].size
        except IndexError:
            pass

    y_result /= y_denominators

    return x_result, y_result

Thanks,
Hanno


Timothy Hochberg said:

> On 6/22/07, Hanno Klemm wrote:
> >
> > Hi,
> >
> > I have an array which represents regularly spaced spatial data. I now
> > would like to compute the (semi-)variogram, i.e.
> >
> > gamma(h) = 1/N(h) \sum_{i,j\in N(h)} (z_i - z_j)**2,
> >
> > where h is the (approximate) spatial difference between the
> > measurements z_i, and z_j, and N(h) is the number of measurements with
> > distance h.
> >
> > However, I only want to calculate the thing along the rows and
> > columns. The naive approach involves two for loops and a lot of
> > searching, which becomes painfully slow on large data sets. Are there
> > better implementations around in numpy/scipy or does anyone have a
> > good idea of how to do that more efficiently? I looked around a bit but
> > couldn't find anything.
>
> Can you send the naive code as well. It's often easier to see what's
> going on with code in addition to the equations.
>
> Regards.
>
> -tim
-- 
Hanno Klemm
klemm at phys.ethz.ch

From tim.hochberg at ieee.org  Fri Jun 22 14:02:57 2007
From: tim.hochberg at ieee.org (Timothy Hochberg)
Date: Fri, 22 Jun 2007 11:02:57 -0700
Subject: [Numpy-discussion] effectively computing variograms with numpy
In-Reply-To:
References:
Message-ID:

OK, generally in code like this I leave the outer loops alone and try to vectorize just the inner loop. I have some ideas in this direction, but first, there seem to be some problems with the code as well. The code looks like it is written to take non-square 'data' arrays. However,

    for i in range(data.shape[0]):
        datasquared = (data - data[:,i])**2

This is looping over shape[0], but indexing on axis -1, which doesn't work for non-square arrays. One suggestion is to make a function that computes the variogram along a given axis and then call it twice, instead of computing the two directions independently. Can you try the following code and see if this correctly implements a variogram? I don't have time to check that it really implements a variogram, but I'm hoping it's close:

def variogram(data, binsize, axis=-1):
    # move the axis of interest to the end, then bin squared differences
    data = data.swapaxes(-1, axis)
    n = data.shape[-1]
    resultsize = int(N.ceil(n / float(binsize)))
    result = N.zeros([resultsize], data.dtype)
    for i in range(resultsize):
        j0 = max(i*binsize, 1)
        j1 = min(j0+binsize, n)
        denominator = 0
        for j in range(j0, j1):
            d2 = (data[...,j:] - data[...,:-j])**2
            result[i] += d2.sum()
            denominator += N.prod(d2.shape)
        result[i] /= denominator
    return result

On 6/22/07, Hanno Klemm wrote:
> Tim,
>
> this is the best I could come up with until now:
>
> import numpy as N
>
> def naive_variogram(data, binsize=100., stepsize=5.):
>     """calculates variograms along the rows and columns of the given
>     array which is supposed to contain equally spaced data with
>     stepsize stepsize"""
>
>     # how many elements fit in one bin?
>     binlength = int(binsize/stepsize)
>
>     # bins in x- and y-direction (+1 for the possible
>     # elements larger than int(binsize/stepsize)):
>     x_bins = (data.shape[1])/binlength+1
>     y_bins = (data.shape[0])/binlength+1
>
>     # arrays to store the results in
>     x_result = N.zeros(x_bins, dtype=float)
>     y_result = N.zeros(y_bins, dtype=float)
>
>     # arrays to get the denominators right
>     x_denominators = N.zeros(x_bins, dtype=float)
>     y_denominators = N.zeros(x_bins, dtype=float)
>
>     # what is the last index?
>     xlast = data.shape[1]
>     ylast = data.shape[0]
>
>     for i in range(data.shape[0]):
>         datasquared = (data - data[:,i])**2
>         # number of bins to fill until the end of the array:
>         numbins = 1 + (xlast - i)/binlength
>         for j in range(numbins):
>             x_result[j] += \
>                 datasquared[:,i+1+j*binlength:i+1+(j+1)*binlength].sum()
>             x_denominators[j] += \
>                 datasquared[:,i+1+j*binlength:i+1+(j+1)*binlength].size
>         try:
>             # Is there a rest?
>             x_result[numbins] += datasquared[:,i+1+numbins*binlength:].sum()
>             x_denominators[numbins] += datasquared[:,i+1+numbins*binlength:].size
>         except IndexError:
>             pass
>
>     x_result /= x_denominators
>
>     for i in range(data.shape[1]):
>         datasquared = (data - data[i])**2
>         # number of bins to fill until the end of the array:
>         numbins = 1 + (ylast - i)/binlength
>         # Fill the bins
>         for j in range(numbins):
>             y_result[j] += datasquared[i+1+j*binlength:i+1+(j+1)*binlength].sum()
>             y_denominators[j] += datasquared[i+1+j*binlength:i+1+(j+1)*binlength].size
>         try:
>             # Is there a rest?
>             y_result[numbins] += datasquared[:,i+1+numbins*binlength:].sum()
>             y_denominators[numbins] += datasquared[:,i+1+numbins*binlength:].size
>         except IndexError:
>             pass
>
>     y_result /= y_denominators
>
>     return x_result, y_result
>
> Thanks,
> Hanno
From lawtrevor at gmail.com  Fri Jun 22 19:55:33 2007
From: lawtrevor at gmail.com (Trevor Law)
Date: Fri, 22 Jun 2007 16:55:33 -0700
Subject: [Numpy-discussion] alter_code1 and "from Numeric import *"
Message-ID: <61e67ee50706221655r9fbfe2bi3916fa6caefbcd0e@mail.gmail.com>

Hello,

I have just tried using alter_code1 on a code tree, and it appears to
ignore "from Numeric import *" statements (it does not change them to
"from numpy.oldnumeric import *"). I looked at alter_code1.py and it says
that the script should warn about importing *. Although the script may not
be behaving absolutely correctly, I am more concerned about whether or not
using "from numpy.oldnumeric import *" is likely to result in undetected
(non-syntactic) errors. Any feedback would be welcome.

Thank you for your time,
Trevor Law
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From basilisk96 at gmail.com  Sat Jun 23 01:08:58 2007
From: basilisk96 at gmail.com (Andriy Basilisk)
Date: Sat, 23 Jun 2007 01:08:58 -0400
Subject: [Numpy-discussion] How to create a boolean sub-array from a larger string array?
Message-ID:

Hello all,

My challenge is this:
I'm working on an application that parses numerical data from a text
report using regular expressions, and then places the results in NumPy
matrices for processing. The data contain integers, floats, and boolean
values. The boolean values are represented in the text file by either an
empty string '' or a star '*'. The regex parser creates a sequence of
nested lists that is readily converted to an MxN string-type matrix. Then
the necessary rows of that matrix are sliced to create the necessary new
sub-matrices.

Here is a simplified sample of my solution so far:

import numpy as _N

data = [['1', '5.30', '', '3.44', '*'],
        ['2', '-4.12', '*', '-1.24', ''],
        ['3', '0.45', '', '3.22', '*']]
mdat = _N.mat(data).T                   # mdat.shape is now (5,3)
ids = mdat[0,].astype(_N.int)           # this works for str->int
noms = mdat[(1,3),].astype(_N.float64)  # same idea also works for str->float64

## The following technique would be nice, but it causes a
## ValueError: invalid literal for int() with base 10: ''
outs = mdat[(2,4),].astype(_N.bool)

## Instead, I have to convert the strings to '0' or '1'
## explicitly, then cast them to a bool matrix:
for i, b in enumerate(mdat[(2,4),].T):
    mdat[2, i] = 1 if mdat[2, i] else 0
    mdat[4, i] = 1 if mdat[4, i] else 0
outs = mdat[(2,4),].astype(_N.bool)

I was expecting the above to behave similarly to the Python bool()
function on strings:

>>> bool(''), bool('*')
(False, True)

but it doesn't work that way.

Can anyone enlighten me as to why slices of my string matrix cannot be
cast to boolean matrices? I'd rather not resort to the 'for' loop if there
is a smarter way to do this. If an intermediate numpy.array is required
instead of the numpy.matrix shown here, that is acceptable; I am using the
matrix class in this case because the application thrives on it. I'm using
Python 2.5 and NumPy 1.0.1 on WinXP.
Any help and useful comments will be appreciated,
-Basilisk96

From robert.kern at gmail.com  Sat Jun 23 02:14:23 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 23 Jun 2007 01:14:23 -0500
Subject: [Numpy-discussion] How to create a boolean sub-array from a larger string array?
In-Reply-To:
References:
Message-ID: <467CBA3F.1040409@gmail.com>

Andriy Basilisk wrote:
> [...]
> I was expecting the above to behave similarly to the Python bool()
> function on strings:
> >>> bool(''), bool('*')
> (False, True)
> but it doesn't work that way.
>
> Can anyone enlighten me as to why slices of my string matrix cannot be
> cast to boolean matrices?

It's kind of a toss-up as to what's needed in general. I suspect that for
the majority of cases, one deals with strings of '0' and '1' instead of
empty strings and non-empty strings.

You can always use something like

    mdat[[2,4]] == '*'

to get the boolean array you want. This scheme can work with any string
representation of True and False.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From basilisk96 at gmail.com  Sat Jun 23 15:21:56 2007
From: basilisk96 at gmail.com (Andriy Basilisk)
Date: Sat, 23 Jun 2007 15:21:56 -0400
Subject: [Numpy-discussion] How to create a boolean sub-array from a larger string array?
Message-ID:

> You can always use something like
>
>     mdat[[2,4]] == '*'
>
> to get the boolean array you want. This scheme can work with any string
> representation of True and False.

Dang!
I keep forgetting that a conditional test can return an array like that -
thanks for the reminder, this will clean up my ugly code from before :)

Back to the drawing board,
-Basilisk96
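The comparison idiom works because an elementwise string test returns a
boolean matrix directly. A minimal sketch, reusing the sample data from the
original post (the printed output is indicative):

    import numpy as _N

    data = [['1', '5.30', '', '3.44', '*'],
            ['2', '-4.12', '*', '-1.24', ''],
            ['3', '0.45', '', '3.22', '*']]
    mdat = _N.mat(data).T

    # elementwise comparison against the flag character; no astype() needed
    outs = mdat[(2,4),] == '*'
    print outs
    # [[False  True False]
    #  [ True False  True]]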
From ollinger at wisc.edu  Sat Jun 23 15:35:35 2007
From: ollinger at wisc.edu (John Ollinger)
Date: Sat, 23 Jun 2007 19:35:35 +0000 (UTC)
Subject: [Numpy-discussion] Unhandled floating point exception running test in numpy-1.0.3 and svn 3875
Message-ID:

I have just been updating our version of Python, numpy and scipy and have
run into a floating point exception that crashes Python when I test the
release.

I am running gcc 3.3.1 on SuSE Linux 2.4.21-144-smp4G. The error first
occurred with numpy-1.0.3. I downloaded svn 3875 when I then read the
scipy web page and installed the latest subversion. The test command I am
using is

    python -c 'import numpy; numpy.test(level=1,verbosity==2)'

and the crash occurs during the matvec test. This test uses rand to
generate 10x8 and 8x1 matrices. The test that multiplies the first matrix
by its transpose works; the second test, which multiplies the first matrix
by the second, causes the floating point exception. The code does not
crash if the matrices are created with "zero()", but it crashes for any
random matrix generated by rand. It works fine with all of my applications
(ones that only multiply 4x4 and 4x1 matrices).

Does anyone know if this is a known problem? I can't find any mention of
it on the web. It is cold comfort to know that my applications will fail
catastrophically rather than by being inaccurate.

Thanks.

John

From stefan at sun.ac.za  Sat Jun 23 16:54:50 2007
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Sat, 23 Jun 2007 22:54:50 +0200
Subject: [Numpy-discussion] Unhandled floating point exception running test in numpy-1.0.3 and svn 3875
In-Reply-To:
References:
Message-ID: <20070623205450.GB2578@mentat.za.net>

Hi John

On Sat, Jun 23, 2007 at 07:35:35PM +0000, John Ollinger wrote:
>
> I have just been updating our version of Python, numpy and scipy and
> have run into a floating point exception that crashes Python when I
> test the release.
>
> I am running gcc 3.3.1 on SuSE Linux 2.4.21-144-smp4G. The error first
> occurred with numpy-1.0.3. I downloaded svn 3875 when I then read the
> scipy web page and installed the latest subversion. The test command I
> am using is
>
>     python -c 'import numpy; numpy.test(level=1,verbosity==2)'
>
> and the crash occurs during the matvec test. This test uses rand to
> generate 10x8 and 8x1

It may be worth checking whether the new version of numpy is picked
up. You can do that using

    import numpy as N
    print N.__version__

We have a build slave with a very similar setup to yours (see
http://buildbot.scipy.org) and everything seems to be fine.

Regards
Stéfan

From rex at nosyntax.com  Sat Jun 23 19:06:51 2007
From: rex at nosyntax.com (rex)
Date: Sat, 23 Jun 2007 16:06:51 -0700
Subject: [Numpy-discussion] Unhandled floating point exception running test in numpy-1.0.3 and svn 3875
In-Reply-To: <20070623205450.GB2578@mentat.za.net>
References: <20070623205450.GB2578@mentat.za.net>
Message-ID: <20070623230650.GU4853@x2.nosyntax.com>

Stefan van der Walt [2007-06-23 15:06]:
>
> On Sat, Jun 23, 2007 at 07:35:35PM +0000, John Ollinger wrote:
> >
> > I have just been updating our version of Python, numpy and scipy and
> > have run into a floating point exception that crashes Python when I
> > test the release.
> >
> > I am running gcc 3.3.1 on SuSE Linux 2.4.21-144-smp4G. The error first
> > occurred with numpy-1.0.3.
> > I downloaded svn 3875 when I then read the scipy web page and
> > installed the latest subversion. The test command I am using is
> >
> >     python -c 'import numpy; numpy.test(level=1,verbosity==2)'
> >
> > and occurs during the matvec test. This test uses rand to generate
> > 10x8 and 8x1
>
> It may be worth checking whether the new version of numpy is picked
> up. You can do that using
>
>     import numpy as N
>     print N.__version__
>
> We have a build slave with a very similar setup to yours (see
> http://buildbot.scipy.org) and everything seems to be fine.

It's somewhat different:

SUSE 10.2
Core 2 Duo 32-bit
Kernel 2.6.18.2-34-default
gcc version 4.1.2 20061115 (prerelease) (SUSE Linux)
Python 2.5 (r25:51908, Nov 27 2006)

print N.__version__
1.0.4.dev3868

python -c 'import numpy; numpy.test(level=1,verbosity=2)'
[...]
Ran 590 tests in 0.473s

OK

-rex

From charlesr.harris at gmail.com  Sun Jun 24 03:54:51 2007
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 24 Jun 2007 01:54:51 -0600
Subject: [Numpy-discussion] Unhandled floating point exception running test in numpy-1.0.3 and svn 3875
In-Reply-To: <20070623230650.GU4853@x2.nosyntax.com>
References: <20070623205450.GB2578@mentat.za.net> <20070623230650.GU4853@x2.nosyntax.com>
Message-ID:

On 6/23/07, rex <rex at nosyntax.com> wrote:
>
> Stefan van der Walt [2007-06-23 15:06]:
> >
> > On Sat, Jun 23, 2007 at 07:35:35PM +0000, John Ollinger wrote:
> > >
> > > I have just been updating our version of Python, numpy and scipy and
> > > have run into a floating point exception that crashes Python when I
> > > test the release.

What do you mean by crash? Is anything printed? Do older versions of numpy
still work?

> > > I am running gcc 3.3.1 on SuSE Linux 2.4.21-144-smp4G. The error
> > > first occurred with numpy-1.0.3. I downloaded svn 3875 when I then
> > > read the scipy web page and installed the latest subversion. The
> > > test command I am using is
> > >
> > >     python -c 'import numpy; numpy.test(level=1,verbosity==2)'
> > >
> > > and occurs during the matvec test. This test uses rand to generate
> > > 10x8 and 8x1
> >
> > It may be worth checking whether the new version of numpy is picked
> > up. You can do that using
> >
> >     import numpy as N
> >     print N.__version__
> >
> > We have a build slave with a very similar setup to yours (see
> > http://buildbot.scipy.org) and everything seems to be fine.
>
> It's somewhat different:
> SUSE 10.2
> Core 2 Duo 32-bit
> Kernel 2.6.18.2-34-default
> gcc version 4.1.2 20061115 (prerelease) (SUSE Linux)
> Python 2.5 (r25:51908, Nov 27 2006)
> print N.__version__
> 1.0.4.dev3868
> python -c 'import numpy; numpy.test(level=1,verbosity=2)'
> [...]
> Ran 590 tests in 0.473s

Do you use Atlas? If so, did you compile it yourself or did you use a
package? There is a bug in some older 64 bit Atlas packages running on
newer Intel hardware that generates illegal instruction exceptions, and I
am wondering if you may have found a new 32 bit bug. One way to check this
is to multiply two big matrices together. There are many paths through
Atlas, so the known bug is not encountered in all matrix multiplications,
and perhaps not for all floating values either.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
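A sketch of the check Chuck suggests -- pushing a large matrix product
through the BLAS gemm path; on a broken ATLAS build, this is the kind of
operation that trips an illegal instruction or floating point exception
(the sizes here are arbitrary):

    import numpy as N

    a = N.random.rand(1000, 1000)
    b = N.random.rand(1000, 1000)
    c = N.dot(a, b)             # goes through ATLAS when numpy links it
    print c.shape, c.sum()      # completing cleanly is the point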
From david at ar.media.kyoto-u.ac.jp  Sun Jun 24 07:05:46 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Sun, 24 Jun 2007 20:05:46 +0900
Subject: [Numpy-discussion] [ANN] New numpy, scipy and atlas rpms for FC 5, 6 and 7 and openSUSE (with 64 bits arch support)
Message-ID: <467E500A.20800@ar.media.kyoto-u.ac.jp>

Hi there,

After quite some pain, I finally managed to build LAPACK + ATLAS rpms
useful for numpy and scipy. Read the following if you use Fedora Core or
openSUSE and are tired of unsuccessful attempts to install numpy, scipy,
BLAS, LAPACK or ATLAS.

Instructions are given here:

    http://www.scipy.org/Installing_SciPy/Linux (ashigabou repository)

Basically:
- Fedora Core 5, 6 and 7 and openSUSE 10.2 are supported (x86, and
  x86_64 for FC 7 and openSUSE).
- binary rpms for numpy, scipy and blas/lapack dependencies.
- a source rpm for atlas, for a really easy, three-command build of ATLAS
  (should work for both x86 and x86_64).

numpy and scipy are the latest releases, including some backported changes
to make them work on 64 bits. Atlas is the latest development version,
with a trivial patch to build shared blas and lapack which can be used as
drop-in replacements for netlib blas and lapack.

I would like to hear people's complaints. If people want other
distributions supported by the openSUSE build system (such as Mandriva),
I would like to hear that too.

cheers,

David

From rex at nosyntax.com  Sun Jun 24 11:55:27 2007
From: rex at nosyntax.com (rex)
Date: Sun, 24 Jun 2007 08:55:27 -0700
Subject: [Numpy-discussion] Unhandled floating point exception running test in numpy-1.0.3 and svn 3875
In-Reply-To:
References: <20070623205450.GB2578@mentat.za.net> <20070623230650.GU4853@x2.nosyntax.com>
Message-ID: <20070624155527.GC4585@x2.nosyntax.com>

Charles R Harris [2007-06-24 06:22]:
>
> On 6/23/07, rex wrote:
> > Stefan van der Walt [2007-06-23 15:06]:
> >> On Sat, Jun 23, 2007 at 07:35:35PM +0000, John Ollinger wrote:
> >>> I have just been updating our version of Python, numpy and
> >>> scipy and have run into a floating point exception that crashes
> >>> Python when I test the release.
>
> What do you mean by crash? Is anything printed? Do older versions of
> numpy still work?

John needs to respond to this.

> >>> I am running gcc 3.3.1 on SuSE Linux 2.4.21-144-smp4G. The error
> >>> first occurred with numpy-1.0.3. I downloaded svn 3875 when I then
> >>> read the scipy web page and installed the latest subversion. The
> >>> test command I am using is
> >>>     python -c 'import numpy; numpy.test(level=1,verbosity==2)'
> >>> and occurs during the matvec test. This test uses rand to generate
> >>> 10x8 and 8x1
> >>
> >> It may be worth checking whether the new version of numpy is picked
> >> up. You can do that using
> >>
> >>     import numpy as N
> >>     print N.__version__
> >>
> >> We have a build slave with a very similar setup to yours (see
> >> http://buildbot.scipy.org) and everything seems to be fine.
> >
> > It's somewhat different:
> > SUSE 10.2
> > Core 2 Duo 32-bit
> > Kernel 2.6.18.2-34-default
> > gcc version 4.1.2 20061115 (prerelease) (SUSE Linux)
> > Python 2.5 (r25:51908, Nov 27 2006)
> > print N.__version__
> > 1.0.4.dev3868
> > python -c 'import numpy; numpy.test(level=1,verbosity=2)'
> > [...]
> > Ran 590 tests in 0.473s
>
> Do you use Atlas? If so, did you compile it yourself or did you use a
> package? There is a bug in some older 64 bit Atlas packages running on
> newer Intel hardware that generates illegal instruction exceptions, and
> I am wondering if you may have found a new 32 bit bug.
> One way to check this is to multiply two big matrices together. There
> are many paths through Atlas, so the known bug is not encountered in all
> matrix multiplications, and perhaps not for all floating values either.

The above system is running the http://buildbot.scipy.org/ buildbot with
no errors. It doesn't appear to build ATLAS. It's John's older system
(gcc 3.3.1, kernel 2.4.21-144-smp4G) -- AFAIK, 2.4.21 was used in SUSE
9.0. Current version is 10.2 -- that is throwing an error, not mine.

-rex

From charlesr.harris at gmail.com  Sun Jun 24 12:09:04 2007
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 24 Jun 2007 10:09:04 -0600
Subject: [Numpy-discussion] Unhandled floating point exception running test in numpy-1.0.3 and svn 3875
In-Reply-To: <20070624155527.GC4585@x2.nosyntax.com>
References: <20070623205450.GB2578@mentat.za.net> <20070623230650.GU4853@x2.nosyntax.com> <20070624155527.GC4585@x2.nosyntax.com>
Message-ID:

On 6/24/07, rex <rex at nosyntax.com> wrote:
>
> [...]
>
> The above system is running the http://buildbot.scipy.org/ buildbot with
> no errors. It doesn't appear to build ATLAS. It's John's older system
> (gcc 3.3.1, kernel 2.4.21-144-smp4G) -- AFAIK, 2.4.21 was used in SUSE
> 9.0. Current version is 10.2 -- that is throwing an error, not mine.

Sorry, my mistake. And I misread 2.4.21 as 2.6.21, a very recent kernel.
Just shows how fast the years pass by.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From 1960_j at operamail.com  Sun Jun 24 12:58:33 2007
From: 1960_j at operamail.com (John Pruce)
Date: Sun, 24 Jun 2007 17:58:33 +0100
Subject: [Numpy-discussion] Help installing numpy 1.0.2 on LINUX
Message-ID: <20070624165833.E620524789@ws5-3.us4.outblaze.com>

I am trying to install numpy 1.0.2 on a Mandrake Linux system, kernel
2.6.17mdk. I have followed the install instructions in numpy. I am
including a small portion of the output of the "python setup.py install"
command:

Running from numpy source directory.
F2PY Version 2_3649
blas_opt_info:
blas_mkl_info:
  libraries mkl,vml,guide not found in /usr/local/lib/atlas
  NOT AVAILABLE

atlas_blas_threads_info:
Setting PTATLAS=ATLAS
  libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib/atlas
  NOT AVAILABLE

atlas_blas_info:
  FOUND:
    libraries = ['f77blas', 'cblas', 'atlas']
    library_dirs = ['/usr/local/lib/atlas']
    language = c

Could not locate executable g77
Could not locate executable f95
customize GnuFCompiler
customize GnuFCompiler
customize GnuFCompiler using config
compiling '_configtest.c':

/* This file is generated from numpy_distutils/system_info.py */
void ATL_buildinfo(void);
int main(void) {
  ATL_buildinfo();
  return 0;
}

C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall
-Wstrict-prototypes -fPIC

compile options: '-c'
gcc: _configtest.c
gcc -pthread _configtest.o -L/usr/local/lib/atlas -lf77blas -lcblas
-latlas -o _configtest
ATLAS version 3.6.0 built by root on Fri Jun 15 18:59:04 EDT 2007:
   UNAME    : Linux localhost 2.6.17-5mdv #1 SMP Wed Sep 13 14:32:31 EDT 2006 i686 Celeron (Mendocino) GNU/Linux
   INSTFLG  :
   MMDEF    :
   ARCHDEF  :
   F2CDEFS  : -DAdd__ -DStringSunStyle
   CACHEEDGE: 131072
   F77      : /usr/local//gcc2.95.3/bin/g77, version GNU Fortran 0.5.25 20010315 (release)
   F77FLAGS : -fomit-frame-pointer -O
   CC       : /usr/local/gcc2.95.3/bin/gcc, version 2.95.3
   CC FLAGS : -fomit-frame-pointer -O3 -funroll-all-loops
   MCC      : /usr/local/gcc2.95.3/bin/gcc, version 2.95.3
   MCCFLAGS : -fomit-frame-pointer -O
success!
removing: _configtest.c _configtest.o _configtest
  FOUND:
    libraries = ['f77blas', 'cblas', 'atlas']
    library_dirs = ['/usr/local/lib/atlas']
    language = c
    define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')]

lapack_opt_info:
lapack_mkl_info:
mkl_info:
  libraries mkl,vml,guide not found in /usr/local/lib/atlas
  NOT AVAILABLE

  NOT AVAILABLE

atlas_threads_info:
Setting PTATLAS=ATLAS
  libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib/atlas
  libraries lapack_atlas not found in /usr/local/lib/atlas
numpy.distutils.system_info.atlas_threads_info
  NOT AVAILABLE

atlas_info:
  libraries lapack_atlas not found in /usr/local/lib/atlas
numpy.distutils.system_info.atlas_info
  FOUND:
    libraries = ['lapack', 'f77blas', 'cblas', 'atlas']
    library_dirs = ['/usr/local/lib/atlas']
    language = f77

customize GnuFCompiler
customize GnuFCompiler
customize GnuFCompiler using config
compiling '_configtest.c':

/* This file is generated from numpy_distutils/system_info.py */
void ATL_buildinfo(void);
int main(void) {
  ATL_buildinfo();
  return 0;
}

C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall
-Wstrict-prototypes -fPIC

compile options: '-c'
gcc: _configtest.c
gcc -pthread _configtest.o -L/usr/local/lib/atlas -llapack -lf77blas
-lcblas -latlas -o _configtest
ATLAS version 3.6.0 built by root on Fri Jun 15 18:59:04 EDT 2007:
   UNAME    : Linux localhost 2.6.17-5mdv #1 SMP Wed Sep 13 14:32:31 EDT 2006 i686 Celeron (Mendocino) GNU/Linux
   INSTFLG  :
   MMDEF    :
   ARCHDEF  :
   F2CDEFS  : -DAdd__ -DStringSunStyle
   CACHEEDGE: 131072
   F77      : /usr/local//gcc2.95.3/bin/g77, version GNU Fortran 0.5.25 20010315 (release)
   F77FLAGS : -fomit-frame-pointer -O
   CC       : /usr/local/gcc2.95.3/bin/gcc, version 2.95.3
   CC FLAGS : -fomit-frame-pointer -O3 -funroll-all-loops
   MCC      : /usr/local/gcc2.95.3/bin/gcc, version 2.95.3
   MCCFLAGS : -fomit-frame-pointer -O
success!

removing: _configtest.c _configtest.o _configtest
  FOUND:
    libraries = ['lapack', 'f77blas', 'cblas', 'atlas']
    library_dirs = ['/usr/local/lib/atlas']
    language = f77
    define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')]

running install
running build
running config_fc
running build_src
building py_modules sources
creating build/src.linux-i686-2.5
creating build/src.linux-i686-2.5/numpy
creating build/src.linux-i686-2.5/numpy/distutils
building extension "numpy.core.multiarray" sources
creating build/src.linux-i686-2.5/numpy/core
Generating build/src.linux-i686-2.5/numpy/core/config.h
customize GnuFCompiler
customize GnuFCompiler
customize GnuFCompiler using config
C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall
-Wstrict-prototypes -fPIC

When I try to run numpy.test(level=1) I get:

>>> import numpy
>>> numpy.test(level=1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'test'
>>>

Thank you for your help or suggestions.
John

--
_______________________________________________
Surf the Web in a faster, safer and easier way:
Download Opera 9 at http://www.opera.com

Powered by Outblaze

From stefan at sun.ac.za  Sun Jun 24 13:48:14 2007
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Sun, 24 Jun 2007 19:48:14 +0200
Subject: [Numpy-discussion] Help installing numpy 1.0.2 on LINUX
In-Reply-To: <20070624165833.E620524789@ws5-3.us4.outblaze.com>
References: <20070624165833.E620524789@ws5-3.us4.outblaze.com>
Message-ID: <20070624174813.GE2578@mentat.za.net>

On Sun, Jun 24, 2007 at 05:58:33PM +0100, John Pruce wrote:
> When I try to run numpy.test(level=1) I get:
>
> >>> import numpy
> >>> numpy.test(level=1)
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> AttributeError: 'module' object has no attribute 'test'
> >>>
>
> Thank you for your help or suggestions.

Are you running from the numpy source directory? If so, change out of
it and try again.

Stéfan
_______________________________________________
Numpy-discussion mailing list
Numpy-discussion at scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion

From 1960_j at operamail.com  Sun Jun 24 13:45:58 2007
From: 1960_j at operamail.com (John Pruce)
Date: Sun, 24 Jun 2007 18:45:58 +0100
Subject: [Numpy-discussion] Help installing numpy 1.0.2 on LINUX
Message-ID: <20070624174558.5556B24789@ws5-3.us4.outblaze.com>

> ----- Original Message -----
> From: "Stefan van der Walt"
> To: numpy-discussion at scipy.org
> Subject: Re: [Numpy-discussion] Help installing numpy 1.0.2 on LINUX
> Date: Sun, 24 Jun 2007 19:48:14 +0200
>
> Are you running from the numpy source directory? If so, change out of
> it and try again.
>
> Stéfan

Thank you. That was the problem.

John

--
_______________________________________________
Surf the Web in a faster, safer and easier way:
Download Opera 9 at http://www.opera.com

Powered by Outblaze
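The pitfall Stefan describes is easy to check for. A quick sketch (the
paths in the comments are only illustrative of a typical install):

    import numpy
    print numpy.__file__     # should point into site-packages,
                             # not into the source checkout
    print numpy.__version__
    numpy.test(level=1)      # fails with the AttributeError above when
                             # run from inside an unbuilt source tree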
From ollinger at wisc.edu  Sun Jun 24 16:05:52 2007
From: ollinger at wisc.edu (John Ollinger)
Date: Sun, 24 Jun 2007 15:05:52 -0500
Subject: [Numpy-discussion] Unhandled floating point exception running test in numpy-1.0.3 and svn 3875
In-Reply-To:
References: <20070623205450.GB2578@mentat.za.net> <20070623230650.GU4853@x2.nosyntax.com> <20070624155527.GC4585@x2.nosyntax.com>
Message-ID: <6.2.1.2.2.20070624150441.22022f20@pop.gmail.com>

An HTML attachment was scrubbed...
URL:

From rex at nosyntax.com  Sun Jun 24 17:05:12 2007
From: rex at nosyntax.com (rex)
Date: Sun, 24 Jun 2007 14:05:12 -0700
Subject: [Numpy-discussion] Unhandled floating point exception running test in numpy-1.0.3 and svn 3875
In-Reply-To: <6.2.1.2.2.20070624150441.22022f20@pop.gmail.com>
References: <20070623205450.GB2578@mentat.za.net> <20070623230650.GU4853@x2.nosyntax.com> <20070624155527.GC4585@x2.nosyntax.com> <6.2.1.2.2.20070624150441.22022f20@pop.gmail.com>
Message-ID: <20070624210512.GA4694@x2.nosyntax.com>

John Ollinger [2007-06-24 13:13]:
>
> >>> I am running gcc 3.3.1 on SuSE Linux 2.4.21-144-smp4G. The error
> >>> first occurred with numpy-1.0.3. I downloaded svn 3875 when I then
> >>> read the scipy web page and installed the latest subversion. The
> >>> test command I am using is
> >>>     python -c 'import numpy; numpy.test(level=1,verbosity==2)'
> >>> and occurs during the matvec test. This test uses rand to generate
> >>> 10x8 and 8x1
>
> You hit the nail on the head. It is apparently an Atlas problem. I
> originally built numpy with optimized Atlas libraries. When I deleted
> these from the site.cfg file the problem went away.
>
> I apologize for not including the error message in my initial post.
> Here it is, in case anyone cares:
>
> check_matvec (numpy.core.tests.test_numeric.test_dot) Floating point exception

I recently built (using David Cournapeau's garnumpy) numpy and scipy using
ATLAS 3.7.33, gfortran, gcc 4.1.2, SUSE 10.2 (x86, kernel
2.6.18.2-34-default), and Python 2.5:

python
Python 2.5 (r25:51908, Nov 27 2006, 19:14:46)
[GCC 4.1.2 20061115 (prerelease) (SUSE Linux)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> import scipy
>>> print numpy.__version__
1.0.1
>>> print scipy.__version__
0.5.2
>>> numpy.test(level=1,verbosity=2)
[...]
Ran 526 tests in 0.432s

OK

>>> scipy.test(level=1,verbosity=2)
[...]
Ran 1596 tests in 2.942s

OK

There doesn't appear to be a problem with recent versions of the software.
In particular, ATLAS 3.7.33 does not cause an error. Is there some reason
for you to use such old software (gcc 3.3.1 and kernel 2.4.21)? What
platform are you building for?

-rex

From archsheep at yahoo.com.br  Thu Jun 21 15:50:19 2007
From: archsheep at yahoo.com.br (Alex Torquato S. Carneiro)
Date: Thu, 21 Jun 2007 12:50:19 -0700 (PDT)
Subject: [Numpy-discussion] PCA - Principal Component Analysis
Message-ID: <349462.11063.qm@web63413.mail.re1.yahoo.com>

I'm doing some projects in Python (system GNU/Linux - Ubuntu 7.0) about
image processing. I need an implementation of PCA, preferably from a
library available via apt-get.

Thanks.
Alex.

____________________________________________________________________________________
Novo Yahoo! Cadê? - Experimente uma nova busca.
http://yahoo.com.br/oqueeuganhocomisso
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From traveller3141 at gmail.com  Thu Jun 21 23:49:43 2007
From: traveller3141 at gmail.com (traveller3141)
Date: Thu, 21 Jun 2007 21:49:43 -0600
Subject: [Numpy-discussion] qr decomposition with column pivoting/qr decomposition with householder reflections
Message-ID: <9b99c00b0706212049q306b764al172df195a7287a99@mail.gmail.com>

I'm in the process of trying to convert some Matlab code into Python.
There's a statement of the form:

    [q,r,e] = qr(A)

which performs a qr-decomposition of A, but then also returns a
'permutation' matrix. The purpose of this is to ensure that the values
along r's diagonal are decreasing. I believe this technique is called "qr
decomposition with column pivoting" or (equivalently) "qr decomposition
with householder reflections".

I have not been able to find an implementation of this within numpy. Does
one exist? Or should I come to truly understand this algorithm (probably a
good idea regardless) and implement it myself?

Thanks,
Steven
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From charlesr.harris at gmail.com  Sun Jun 24 19:27:48 2007
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 24 Jun 2007 17:27:48 -0600
Subject: [Numpy-discussion] qr decomposition with column pivoting/qr decomposition with householder reflections
In-Reply-To: <9b99c00b0706212049q306b764al172df195a7287a99@mail.gmail.com>
References: <9b99c00b0706212049q306b764al172df195a7287a99@mail.gmail.com>
Message-ID:

On 6/21/07, traveller3141 <traveller3141 at gmail.com> wrote:
>
> I'm in the process of trying to convert some Matlab code into Python.
> There's a statement of the form:
>
>     [q,r,e] = qr(A)
>
> which performs a qr-decomposition of A, but then also returns a
> 'permutation' matrix. The purpose of this is to ensure that the values
> along r's diagonal are decreasing. I believe this technique is called
> "qr decomposition with column pivoting" or (equivalently) "qr
> decomposition with householder reflections".

There is a qr version in numpy, numpy.linalg.qr, but it only returns the
factors q and r. The underlying lapack routines are {dz}geqrf and
{dz}orgqr, the latter converting the product of Householder reflections
into the orthogonal matrix q. Column pivoting is not used in {dz}geqrf,
but it *is* used in {dz}geqpf. The versions with column pivoting are
probably more accurate and also allow fixing certain columns at the front
of the array, a useful thing in some cases, so I don't know why we chose
the first rather than the second. I suspect the decision was made in
Numeric long ago and the simplest function was chosen. The column pivoting
version isn't in scipy either, and it probably should be. If you need it,
it shouldn't be hard to add.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
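Since neither numpy nor scipy exposed {dz}geqpf at the time, a rough
textbook sketch of column-pivoted Householder QR can be written in pure
numpy. This is an unoptimized illustration of the algorithm, not the
LAPACK routine, and the function name is invented here:

    import numpy as N

    def qr_pivot(A):
        """Householder QR with column pivoting: A[:, piv] = Q * R, with
        the magnitudes of R's diagonal non-increasing, in the spirit of
        Matlab's [q,r,e] = qr(A)."""
        A = N.array(A, dtype=float)
        m, n = A.shape
        Q = N.eye(m)
        piv = N.arange(n)
        for k in range(min(m, n)):
            # pivot: move the remaining column of largest norm to slot k
            norms = N.sqrt((A[k:, k:]**2).sum(axis=0))
            j = k + norms.argmax()
            if j != k:
                A[:, [k, j]] = A[:, [j, k]]
                piv[[k, j]] = piv[[j, k]]
            # Householder reflection that zeroes A[k+1:, k]
            x = A[k:, k].copy()
            s = 1.0
            if x[0] < 0:
                s = -1.0
            v = x.copy()
            v[0] += s * N.sqrt((x**2).sum())
            vnorm2 = (v**2).sum()
            if vnorm2 > 0:
                A[k:, :] -= N.outer(2.0*v/vnorm2, N.dot(v, A[k:, :]))
                Q[:, k:] -= N.outer(N.dot(Q[:, k:], v), 2.0*v/vnorm2)
        return Q, N.triu(A), piv

    A = N.random.rand(6, 4)
    q, r, piv = qr_pivot(A)
    print N.allclose(N.dot(q, r), A[:, piv])   # -> True
    print N.abs(r.diagonal())                  # magnitudes non-increasing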
From rhc28 at cornell.edu  Sun Jun 24 20:04:52 2007
From: rhc28 at cornell.edu (Rob Clewley)
Date: Sun, 24 Jun 2007 20:04:52 -0400
Subject: [Numpy-discussion] PCA - Principal Component Analysis
In-Reply-To: <349462.11063.qm@web63413.mail.re1.yahoo.com>
References: <349462.11063.qm@web63413.mail.re1.yahoo.com>
Message-ID:

IMO the Modular toolkit for Data Processing (MDP) has a fairly good
and straightforward PCA implementation, among other good tools:

    mdp-toolkit.sourceforge.net/

I have no idea what apt-get is, though, so I don't know if this will
be helpful or not!

-Rob

On 21/06/07, Alex Torquato S. Carneiro <archsheep at yahoo.com.br> wrote:
>
> I'm doing some projects in Python (system GNU/Linux - Ubuntu 7.0) about
> image processing. I need an implementation of PCA, preferably from a
> library available via apt-get.
>
> Thanks.
> Alex.

From charlesr.harris at gmail.com  Sun Jun 24 20:43:42 2007
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 24 Jun 2007 18:43:42 -0600
Subject: [Numpy-discussion] PCA - Principal Component Analysis
In-Reply-To:
References: <349462.11063.qm@web63413.mail.re1.yahoo.com>
Message-ID:

On 6/24/07, Rob Clewley <rhc28 at cornell.edu> wrote:
>
> IMO the Modular toolkit for Data Processing (MDP) has a fairly good
> and straightforward PCA implementation, among other good tools:
> mdp-toolkit.sourceforge.net/
>
> I have no idea what apt-get is, though, so I don't know if this will
> be helpful or not!

Apt-get fetches and installs packages for Debian and Debian-based Linux
distributions like Ubuntu.
Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From matthieu.brucher at gmail.com  Mon Jun 25 02:08:13 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Mon, 25 Jun 2007 08:08:13 +0200
Subject: [Numpy-discussion] PCA - Principal Component Analysis
In-Reply-To: <349462.11063.qm@web63413.mail.re1.yahoo.com>
References: <349462.11063.qm@web63413.mail.re1.yahoo.com>
Message-ID:

Hi,

You have everything you need for PCA in numpy.linalg.

Matthieu
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From wbaxter at gmail.com  Mon Jun 25 03:24:12 2007
From: wbaxter at gmail.com (Bill Baxter)
Date: Mon, 25 Jun 2007 16:24:12 +0900
Subject: [Numpy-discussion] PCA - Principal Component Analysis
In-Reply-To:
References: <349462.11063.qm@web63413.mail.re1.yahoo.com>
Message-ID:

Except, last I checked, numpy.linalg doesn't have an efficient method for
retrieving only a few PCA components. So yes, you can do PCA with it, but
it will be *really* slow on most of the types of problems that PCA is
usually used for. You need something like an ARPACK wrapper, which I think
they have in the scipy.sandbox.

--bb
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From giorgio.luciano at chimica.unige.it  Mon Jun 25 03:48:46 2007
From: giorgio.luciano at chimica.unige.it (Giorgio Luciano)
Date: Mon, 25 Jun 2007 09:48:46 +0200
Subject: [Numpy-discussion] PCA - Principal Component Analysis
In-Reply-To:
References: <349462.11063.qm@web63413.mail.re1.yahoo.com>
Message-ID: <467F735E.6020208@chimica.unige.it>

I've done a version of the NIPALS algorithm in Python (it's a translation
of a Matlab routine from Brereton, made by Riccardo Leardi). Far from
perfect (my translation, I mean) but working (I've tested it on my
datasets and it ran OK). You can download it at www.chemometrics.it.

Hope it will help, and if you are interested in co-op drop me a line ;)

Giorgio
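For small problems, Matthieu's remark can be made concrete with an
SVD-based sketch. The function below is an illustration (its name and
interface are invented here); for the large, few-components problems Bill
mentions, an iterative ARPACK-style solver remains the right tool:

    import numpy as N

    def pca(data, k):
        """Top-k principal axes, scores and variances for 'data' with
        observations in rows."""
        centered = data - data.mean(axis=0)
        u, s, vt = N.linalg.svd(centered, full_matrices=False)
        axes = vt[:k]                        # principal directions
        scores = N.dot(centered, axes.T)     # data projected onto them
        var = (s**2 / (len(data) - 1))[:k]   # variance along each axis
        return axes, scores, var

    x = N.random.randn(200, 6)               # toy data
    axes, scores, var = pca(x, 2)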
From klemm at phys.ethz.ch  Mon Jun 25 04:10:10 2007
From: klemm at phys.ethz.ch (Hanno Klemm)
Date: Mon, 25 Jun 2007 10:10:10 +0200
Subject: [Numpy-discussion] effectively computing variograms with numpy
In-Reply-To:
References:
Message-ID:

Tim,

Thank you very much, the code does what it's expected to do.
Unfortunately the thing is still pretty slow on large data sets. I will
probably now look for ways to calculate the variogram from some random
samples of my data.

Thanks for the observation regarding the square array, that would have
bitten me later.

Again, thanks,
Hanno

Timothy Hochberg <tim.hochberg at ieee.org> said:

> OK, generally in code like this I leave the outer loops alone and try to
> vectorize just the inner loop. I have some ideas in this direction, but
> first, there seem to be some problems with the code as well. The code
> looks like it is written to take non-square 'data' arrays. However,
>
>     for i in range(data.shape[0]):
>         datasquared = (data - data[:,i])**2
>
> This is looping over shape[0], but indexing on axis -1, which doesn't
> work for non-square arrays. One suggestion is to make a function that
> computes the variogram along a given axis and then call it twice,
> instead of computing the two directions independently. Can you try the
> following code and see if this correctly implements a variogram? I don't
> have time to check that it really implements a variogram, but I'm hoping
> it's close:
>
> def variogram(data, binsize, axis=-1):
>     data = data.swapaxes(-1, axis)
>     n = data.shape[-1]
>     resultsize = int(N.ceil(n / float(binsize)))
>     result = N.zeros([resultsize], data.dtype)
>     for i in range(resultsize):
>         j0 = max(i*binsize, 1)
>         j1 = min(j0+binsize, n)
>         denominator = 0
>         for j in range(j0, j1):
>             d2 = (data[...,j:] - data[...,:-j])**2
>             result[i] += d2.sum()
>             denominator += N.prod(d2.shape)
>         result[i] /= denominator
>     return result
>
> [...]

--
Hanno Klemm
klemm at phys.ethz.ch
From haase at msg.ucsf.edu  Mon Jun 25 04:33:08 2007
From: haase at msg.ucsf.edu (Sebastian Haase)
Date: Mon, 25 Jun 2007 10:33:08 +0200
Subject: [Numpy-discussion] arr.dtype.byteorder == '=' --- is this "good code"
Message-ID:

Hi,

Suppose I'm on a little-endian system. Could I have a little-endian numpy
array arr where arr.dtype.byteorder would actually be "<" instead of "=" !?

There are two kinds of systems: little-endian and big-endian. But there
are three possible byteorder values: "<", ">" and "=".

I assume that if arr.dtype.byteorder is "=" then, even on a little-endian
system, the comparison arr.dtype.byteorder == "<" still fails !? Or are
the == and != operators overloaded !?

Thanks,
Sebastian Haase
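Empirically (a sketch, as run on a little-endian box): the comparison
operators are not overloaded, so '=' does not compare equal to '<' and
code has to handle the native marker explicitly. numpy.little_endian is a
module-level bool describing the host:

    import numpy

    a = numpy.zeros(3, dtype=numpy.float64)
    print a.dtype.byteorder    # typically '=' (native), not '<'

    # '==' is a plain string comparison; one spelling that copes with
    # '=', '|' and the explicit markers:
    is_le = (a.dtype.byteorder == '<' or
             (a.dtype.byteorder in '=|' and numpy.little_endian))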
From jl at dmi.dk  Mon Jun 25 05:12:01 2007
From: jl at dmi.dk (Jesper Larsen)
Date: Mon, 25 Jun 2007 11:12:01 +0200
Subject: [Numpy-discussion] Setting element to masked in a masked array previously containing no masked values
Message-ID: <200706251112.02057.jl@dmi.dk>

Hi numpy users,

I have a masked array. I am looping over the elements of this array and
sometimes want to set a value to missing. Normally this can be done by:

    myarray.mask[i] = True

However, the mask attribute is not indexable when there are no existing
missing values in the array (it is simply False). In this case I therefore
get an error message:

    myarray.mask[i] = True
    TypeError: object does not support item assignment

Is the best way to solve the problem to do something like this:

    mask = ma.getmaskarray(myarray)
    for i in range(n):
        if blahblah:
            mask[i] = True
    myarray = ma.array(myarray, copy=False, mask=mask)

or is there a more elegant solution?

Does anyone, by the way, have any pointers to documentation of the masked
array features of numpy? I know that it is treated in the numarray manual,
but it seems like there are some important syntax differences that make
this manual of little use in that regard.

- Jesper

From pgmdevlist at gmail.com  Mon Jun 25 09:37:15 2007
From: pgmdevlist at gmail.com (Pierre GM)
Date: Mon, 25 Jun 2007 09:37:15 -0400
Subject: [Numpy-discussion] Setting element to masked in a masked array previously containing no masked values
In-Reply-To: <200706251112.02057.jl@dmi.dk>
References: <200706251112.02057.jl@dmi.dk>
Message-ID: <200706250937.15356.pgmdevlist@gmail.com>

On Monday 25 June 2007 05:12:01 Jesper Larsen wrote:
> Hi numpy users,
>
> I have a masked array. I am looping over the elements of this array and
> sometimes want to set a value to missing. Normally this can be done by:
>
>     myarray.mask[i] = True

Mmh. Experience shows that directly accessing the mask can lead to bad
surprises. To mask a series of values in an array, the easiest (and
recommended) method is

    myarray[i] = masked

where 'i' can be whatever object is used for indexing (an integer, a
sequence, a slice...).

> Does anyone, by the way, have any pointers to documentation of the
> masked array features of numpy?

I can't really point you to any documentation. The differences of syntax
should be minimal. We could, however, start a wiki page.

A side issue is the kind of implementation of masked arrays you want.
There are currently two: one directly accessible through numpy.core.ma,
another available in the sandbox of the scipy svn site, as maskedarray.
This latter considers MaskedArray as a subclass of ndarray, which makes it
easier to define subclasses. Moreover, it gives access to soft/hard masks,
masked records, more stats functions, and, thanks to Eric Firing, can be
used directly with matplotlib... I'd be quite happy if you could give it a
try and send me your feedback.
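A minimal sketch of the idiom Pierre recommends, using the numpy.core.ma
implementation (the printed output is indicative):

    import numpy.core.ma as ma

    x = ma.array([1., 2., 3., 4.])   # no masked values yet
    x[2] = ma.masked                 # the mask is created/grown as needed
    print x                          # roughly: [1.0 2.0 -- 4.0]
    print ma.getmask(x)              # [False False True False]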
Besides, it fixes some issues, so I can only advise you to try it when you
see fit...

From tim.hochberg at ieee.org Mon Jun 25 10:59:06 2007
From: tim.hochberg at ieee.org (Timothy Hochberg)
Date: Mon, 25 Jun 2007 07:59:06 -0700
Subject: [Numpy-discussion] effectively computing variograms with numpy
In-Reply-To: References: Message-ID:

On 6/25/07, Hanno Klemm wrote:
>
> Tim,
>
> Thank you very much, the code does what it's expected to do.
> Unfortunately the thing is still pretty slow on large data sets.

This does seem like the kind of thing that there should be a faster way to
compute, particularly since you are binning the results up. One approach
would be to just bin the data before doing the computation; however, that
loses a lot of accuracy. It does seem like there should be some moment-like
approach that would allow you to bin the data before you do the
computation, computing the first and second moments, or something similar,
and then computing the results from the binned moments. I don't know that
that would work -- it's just a vague hunch. I don't know if I'll have time
to try it out, but I thought I would mention it.

> I will probably now look for ways to calculate the variogram from some
> random samples of my data.
> Thanks for the observation regarding the square array, that would have
> bitten me later.

You're welcome, I hope this helps some.

Regards,

--
.  __
.   |-\
.
.  tim.hochberg at ieee.org

[SNIP]

From giorgio at gilestro.tk Mon Jun 25 13:25:04 2007
From: giorgio at gilestro.tk (Giorgio F. Gilestro)
Date: Mon, 25 Jun 2007 12:25:04 -0500
Subject: [Numpy-discussion] average of array containing NaN
Message-ID: <212906f40706251025p13aae7e2g274497a86b4ca4a7@mail.gmail.com>

I find myself in a situation where an array may contain not-Numbers that I
set as NaN. Yet, whatever operation I do on that array (average, sum...)
will treat the NaN as infinite values rather than ignoring them as I'd
like it to do.

Am I missing something? Is this a bug or a feature? :-)

From tim.hochberg at ieee.org Mon Jun 25 13:38:13 2007
From: tim.hochberg at ieee.org (Timothy Hochberg)
Date: Mon, 25 Jun 2007 10:38:13 -0700
Subject: [Numpy-discussion] average of array containing NaN
In-Reply-To: <212906f40706251025p13aae7e2g274497a86b4ca4a7@mail.gmail.com>
References: <212906f40706251025p13aae7e2g274497a86b4ca4a7@mail.gmail.com>
Message-ID:

On 6/25/07, Giorgio F. Gilestro wrote:
>
> I find myself in a situation where an array may contain not-Numbers
> that I set as NaN.
> Yet, whatever operation I do on that array (average, sum...) will
> treat the NaN as infinite values rather than ignoring them as I'd
> like it to do. Am I missing something? Is this a bug or a feature? :-)

Neither. The best behaviour would probably be to throw an exception, but
the extra checking that would require might well slow down other stuff.

Try looking at the following functions, they should let you do what you
want:

'nanargmax', 'nanargmin', 'nanmax', 'nanmin', 'nansum'

> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion

--
.  __
.   |-\
.
.  tim.hochberg at ieee.org

From giorgio at gilestro.tk Mon Jun 25 14:15:20 2007
From: giorgio at gilestro.tk (Giorgio F.
Gilestro) Date: Mon, 25 Jun 2007 13:15:20 -0500 Subject: [Numpy-discussion] average of array containing NaN In-Reply-To: References: <212906f40706251025p13aae7e2g274497a86b4ca4a7@mail.gmail.com> Message-ID: <212906f40706251115j4c5edc6wf4323f5abb8c334a@mail.gmail.com> Thanks. Actually those I care the most are average and std. Is there a way to know the number of NaN in an array? On 6/25/07, Timothy Hochberg wrote: > > > > On 6/25/07, Giorgio F. Gilestro wrote: > > I find myself in a situation where an array may contain not-Numbers > > that I set as NaN. > > Yet, whatever operation I do on that array( average, sum...) will > > threat the NaN as infinite values rather then ignoring them as I'd > > like it'd do. > > > Am I missing something? Is this a bug or a feature? :-) > > Neither. The best behaviour would probably be to throw an exception, but the > extra checking that would require might well slow down other stuff. > > Try looking at the following functions, they should let you do what you > want: > > 'nanargmax', 'nanargmin', 'nanmax', 'nanmin', 'nansum' > > > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at scipy.org > > > http://projects.scipy.org/mailman/listinfo/numpy-discussion > > > > > > -- > . __ > . |-\ > . > . tim.hochberg at ieee.org > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > > From robert.kern at gmail.com Mon Jun 25 14:25:50 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 25 Jun 2007 13:25:50 -0500 Subject: [Numpy-discussion] average of array containing NaN In-Reply-To: <212906f40706251115j4c5edc6wf4323f5abb8c334a@mail.gmail.com> References: <212906f40706251025p13aae7e2g274497a86b4ca4a7@mail.gmail.com> <212906f40706251115j4c5edc6wf4323f5abb8c334a@mail.gmail.com> Message-ID: <468008AE.1020600@gmail.com> Giorgio F. Gilestro wrote: > Thanks. > Actually those I care the most are average and std. > Is there a way to know the number of NaN in an array? isnan(a).sum() -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From strawman at astraw.com Mon Jun 25 14:25:58 2007 From: strawman at astraw.com (Andrew Straw) Date: Mon, 25 Jun 2007 11:25:58 -0700 Subject: [Numpy-discussion] average of array containing NaN In-Reply-To: <212906f40706251025p13aae7e2g274497a86b4ca4a7@mail.gmail.com> References: <212906f40706251025p13aae7e2g274497a86b4ca4a7@mail.gmail.com> Message-ID: <468008B6.5050509@astraw.com> Giorgio F. Gilestro wrote: > I find myself in a situation where an array may contain not-Numbers > that I set as NaN. > Yet, whatever operation I do on that array( average, sum...) will > threat the NaN as infinite values rather then ignoring them as I'd > like it'd do. > > Am I missing something? Is this a bug or a feature? :-) You may be interested in masked arrays. 
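To make the suggestions in this thread concrete, here is a minimal sketch
(illustrative only; it assumes the numpy.core.ma module and the nan-aware
functions mentioned above, and the exact printed output may differ between
numpy versions):

import numpy
import numpy.core.ma as ma

x = numpy.array([1.0, numpy.nan, 3.0])
print numpy.isnan(x).sum()                    # 1: how many NaNs there are
print numpy.nansum(x)                         # 4.0: sum with NaNs skipped
xm = ma.masked_array(x, mask=numpy.isnan(x))  # mask out the NaNs
print ma.average(xm)                          # 2.0: masked entry ignored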
From pgmdevlist at gmail.com Mon Jun 25 14:28:50 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Mon, 25 Jun 2007 14:28:50 -0400 Subject: [Numpy-discussion] average of array containing NaN In-Reply-To: <212906f40706251115j4c5edc6wf4323f5abb8c334a@mail.gmail.com> References: <212906f40706251025p13aae7e2g274497a86b4ca4a7@mail.gmail.com> <212906f40706251115j4c5edc6wf4323f5abb8c334a@mail.gmail.com> Message-ID: <200706251428.50764.pgmdevlist@gmail.com> On Monday 25 June 2007 14:15:20 Giorgio F. Gilestro wrote: > Thanks. > Actually those I care the most are average and std. > Is there a way to know the number of NaN in an array? Giorgio, You could use: numpy.isnan(x).sum() But once again masked arrays were designed to handle this kind of situation seamlessly. Just create a masked_array masked_array(x, mask=isnan(x)) and use the regular functions/methods on the masked array. From torgil.svensson at gmail.com Mon Jun 25 14:33:17 2007 From: torgil.svensson at gmail.com (Torgil Svensson) Date: Mon, 25 Jun 2007 20:33:17 +0200 Subject: [Numpy-discussion] annoying numpy string to float conversion behaviour In-Reply-To: <467B0016.6010509@gmail.com> References: <467B0016.6010509@gmail.com> Message-ID: On 6/22/07, Robert Kern wrote: > Making float types parse/emit standard string > representations for NaNs and infs could probably go in if you were to provide an > implementation and work out all of the bugs and cross-platform issues. The float types already emit string-representation of nan's and inf's but doesn't know how to parse them back. This parsing step should be trivial to implement. I cannot see any cross-platform issues with this. If the floats aren't binary compatible across platforms we'll have to face these issues regardless of the string representation (I think they are, except for endianess). If cross-platform issues includes string representation from other sources than python 3.0, things get trickier. I think that python should handle it's own string representation, others could always be handled with sub-classing. At a minimum "float(str(nan))==nan" should evaluate as True. //Torgil From robert.kern at gmail.com Mon Jun 25 14:46:55 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 25 Jun 2007 13:46:55 -0500 Subject: [Numpy-discussion] annoying numpy string to float conversion behaviour In-Reply-To: References: <467B0016.6010509@gmail.com> Message-ID: <46800D9F.6090505@gmail.com> Torgil Svensson wrote: > On 6/22/07, Robert Kern wrote: >> Making float types parse/emit standard string >> representations for NaNs and infs could probably go in if you were to provide an >> implementation and work out all of the bugs and cross-platform issues. > > The float types already emit string-representation of nan's and inf's > but doesn't know how to parse them back. This parsing step should be > trivial to implement. > > I cannot see any cross-platform issues with this. Well, the string representation that is (currently) emitted is not cross-platform, so you will have to add that back to the list. > If the floats aren't > binary compatible across platforms we'll have to face these issues > regardless of the string representation (I think they are, except for > endianess). NaNs and infs are IEEE-754 concepts. Python does run on non-IEEE-754 platforms, and I don't think that python-dev will want to entirely exclude them. You will have to do *something* about those platforms. 
Possibly, they just won't support NaNs and infs at all, but you'd have to make sure that the bit pattern that is a NaN on IEEE-754 systems won't be misinterpreted as a NaN on the non-IEEE-754 systems. > If cross-platform issues includes string representation from other > sources than python 3.0, things get trickier. I think that python > should handle it's own string representation, others could always be > handled with sub-classing. At a minimum "float(str(nan))==nan" should > evaluate as True. Then go for it. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From focke at slac.stanford.edu Mon Jun 25 15:26:31 2007 From: focke at slac.stanford.edu (Warren Focke) Date: Mon, 25 Jun 2007 12:26:31 -0700 (PDT) Subject: [Numpy-discussion] annoying numpy string to float conversion behaviour In-Reply-To: References: <467B0016.6010509@gmail.com> Message-ID: On Mon, 25 Jun 2007, Torgil Svensson wrote: > handled with sub-classing. At a minimum "float(str(nan))==nan" should > evaluate as True. False. No NaN should ever compare equal to anything, even itself. But if the system is 754-compliant, it won't. "isnan(float(str(nan))) == True" would be nice, though. w From giorgio at gilestro.tk Mon Jun 25 15:36:16 2007 From: giorgio at gilestro.tk (Giorgio F. Gilestro) Date: Mon, 25 Jun 2007 14:36:16 -0500 Subject: [Numpy-discussion] average of array containing NaN In-Reply-To: <200706251428.50764.pgmdevlist@gmail.com> References: <212906f40706251025p13aae7e2g274497a86b4ca4a7@mail.gmail.com> <212906f40706251115j4c5edc6wf4323f5abb8c334a@mail.gmail.com> <200706251428.50764.pgmdevlist@gmail.com> Message-ID: <212906f40706251236h1e23b088ldf063a23d2f59e4d@mail.gmail.com> Masked array seems definitely to be the way to go, thanks a lot. I must say that this entire issue doesn't make much sense to me: my understanding is the a NaN is different from an INF, therefore one would assume that really there is no reason why a not-number should not be ignored by default by all the array manipulating functions. On 6/25/07, Pierre GM wrote: > On Monday 25 June 2007 14:15:20 Giorgio F. Gilestro wrote: > > Thanks. > > Actually those I care the most are average and std. > > Is there a way to know the number of NaN in an array? > > Giorgio, > You could use: > numpy.isnan(x).sum() > > But once again > > masked arrays were designed to handle this kind of situation seamlessly. Just > create a masked_array > masked_array(x, mask=isnan(x)) > and use the regular functions/methods on the masked array. > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > From robert.kern at gmail.com Mon Jun 25 15:41:29 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 25 Jun 2007 14:41:29 -0500 Subject: [Numpy-discussion] average of array containing NaN In-Reply-To: <212906f40706251236h1e23b088ldf063a23d2f59e4d@mail.gmail.com> References: <212906f40706251025p13aae7e2g274497a86b4ca4a7@mail.gmail.com> <212906f40706251115j4c5edc6wf4323f5abb8c334a@mail.gmail.com> <200706251428.50764.pgmdevlist@gmail.com> <212906f40706251236h1e23b088ldf063a23d2f59e4d@mail.gmail.com> Message-ID: <46801A69.8090203@gmail.com> Giorgio F. Gilestro wrote: > Masked array seems definitely to be the way to go, thanks a lot. 
> > I must say that this entire issue doesn't make much sense to me: my > understanding is the a NaN is different from an INF, therefore one > would assume that really there is no reason why a not-number should > not be ignored by default by all the array manipulating functions. Because NaNs might also signal an error. Generally speaking, operations that involve NaNs should return NaNs. Operations that do something else with NaNs, like ignoring them, are special. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From charlesr.harris at gmail.com Mon Jun 25 15:42:13 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 25 Jun 2007 13:42:13 -0600 Subject: [Numpy-discussion] average of array containing NaN In-Reply-To: <212906f40706251236h1e23b088ldf063a23d2f59e4d@mail.gmail.com> References: <212906f40706251025p13aae7e2g274497a86b4ca4a7@mail.gmail.com> <212906f40706251115j4c5edc6wf4323f5abb8c334a@mail.gmail.com> <200706251428.50764.pgmdevlist@gmail.com> <212906f40706251236h1e23b088ldf063a23d2f59e4d@mail.gmail.com> Message-ID: On 6/25/07, Giorgio F. Gilestro wrote: > > Masked array seems definitely to be the way to go, thanks a lot. > > I must say that this entire issue doesn't make much sense to me: my > understanding is the a NaN is different from an INF, therefore one > would assume that really there is no reason why a not-number should > not be ignored by default by all the array manipulating functions. Strictly speaking, it should be propagated, i.e., the sum and average should be NaNs also. I don't know why that didn't happen. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From giorgio at gilestro.tk Mon Jun 25 15:55:12 2007 From: giorgio at gilestro.tk (Giorgio F. Gilestro) Date: Mon, 25 Jun 2007 14:55:12 -0500 Subject: [Numpy-discussion] average of array containing NaN In-Reply-To: References: <212906f40706251025p13aae7e2g274497a86b4ca4a7@mail.gmail.com> <212906f40706251115j4c5edc6wf4323f5abb8c334a@mail.gmail.com> <200706251428.50764.pgmdevlist@gmail.com> <212906f40706251236h1e23b088ldf063a23d2f59e4d@mail.gmail.com> Message-ID: <212906f40706251255p3a869f33o1a49935a50aa4aba@mail.gmail.com> BTW, I found nanmean and nanstd in scipy.stats.stats will be good for my case too. On 6/25/07, Charles R Harris wrote: > > > On 6/25/07, Giorgio F. Gilestro wrote: > > Masked array seems definitely to be the way to go, thanks a lot. > > > > I must say that this entire issue doesn't make much sense to me: my > > understanding is the a NaN is different from an INF, therefore one > > would assume that really there is no reason why a not-number should > > not be ignored by default by all the array manipulating functions. > > Strictly speaking, it should be propagated, i.e., the sum and average should > be NaNs also. I don't know why that didn't happen. 
> > Chuck
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion

From torgil.svensson at gmail.com Mon Jun 25 17:07:39 2007
From: torgil.svensson at gmail.com (Torgil Svensson)
Date: Mon, 25 Jun 2007 23:07:39 +0200
Subject: [Numpy-discussion] annoying numpy string to float conversion behaviour
In-Reply-To: <46800D9F.6090505@gmail.com>
References: <467B0016.6010509@gmail.com> <46800D9F.6090505@gmail.com>
Message-ID:

On 6/25/07, Robert Kern wrote:
> NaNs and infs are IEEE-754 concepts. Python does run on non-IEEE-754 platforms,
> and I don't think that python-dev will want to entirely exclude them. You will
> have to do *something* about those platforms. Possibly, they just won't support
> NaNs and infs at all, but you'd have to make sure that the bit pattern that is a
> NaN on IEEE-754 systems won't be misinterpreted as a NaN on the non-IEEE-754
> systems.

Sounds like some clever #ifdefs are needed here. How does isnan() deal
with this?

//Torgil

From klemm at phys.ethz.ch Mon Jun 25 17:09:02 2007
From: klemm at phys.ethz.ch (Hanno Klemm)
Date: Mon, 25 Jun 2007 23:09:02 +0200
Subject: [Numpy-discussion] effectively computing variograms with numpy
In-Reply-To: References: Message-ID: <7f5e8254640d81c36c4284a2379a204c@phys.ethz.ch>

I will try and dig a bit more in the literature, maybe I'll find something.

Hanno

On Jun 25, 2007, at 4:59 PM, Timothy Hochberg wrote:
>
> On 6/25/07, Hanno Klemm <klemm at phys.ethz.ch> wrote:
>> Tim,
>>
>> Thank you very much, the code does what it's expected to do.
>> Unfortunately the thing is still pretty slow on large data sets.
>
> This does seem like the kind of thing that there should be a faster
> way to compute, particularly since you are binning the results up. One
> approach would be to just bin the data before doing the computation,
> however, that loses a lot of accuracy. It does seem like there should
> be some moment-like approach that would allow you to bin the data
> before you do the computation, computing the first and second moments,
> or something similar and then computing the results from the binned
> moments. I don't know that that would work -- it's just a vague hunch.
> I don't know if I'll have time to try it out, but I thought I would
> mention it.
>
> You're welcome, I hope this helps some.
>
> Regards,
>
> --
> .  __
> .   |-\
> .
> .  tim.hochberg at ieee.org
>
> [SNIP]
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion

--
Hanno Klemm
hanno.klemm at xs4all.nl
http://www.mth.kcl.ac.uk/~klemm

From torgil.svensson at gmail.com Mon Jun 25 17:09:47 2007
From: torgil.svensson at gmail.com (Torgil Svensson)
Date: Mon, 25 Jun 2007 23:09:47 +0200
Subject: [Numpy-discussion] annoying numpy string to float conversion behaviour
In-Reply-To: References: <467B0016.6010509@gmail.com>
Message-ID:

On 6/25/07, Warren Focke wrote:
> False. No NaN should ever compare equal to anything, even itself. But if
> the system is 754-compliant, it won't.
>
> "isnan(float(str(nan))) == True" would be nice, though.

Good point. Does this also hold true for quiet NaNs?
//Torgil

From robert.kern at gmail.com Mon Jun 25 17:10:46 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 25 Jun 2007 16:10:46 -0500
Subject: [Numpy-discussion] annoying numpy string to float conversion behaviour
In-Reply-To: References: <467B0016.6010509@gmail.com> <46800D9F.6090505@gmail.com>
Message-ID: <46802F56.5040802@gmail.com>

Torgil Svensson wrote:
> On 6/25/07, Robert Kern wrote:
>> NaNs and infs are IEEE-754 concepts. Python does run on non-IEEE-754 platforms,
>> and I don't think that python-dev will want to entirely exclude them. You will
>> have to do *something* about those platforms. Possibly, they just won't support
>> NaNs and infs at all, but you'd have to make sure that the bit pattern that is a
>> NaN on IEEE-754 systems won't be misinterpreted as a NaN on the non-IEEE-754
>> systems.
>
> Sounds like some clever #ifdefs are needed here. How does isnan()
> deal with this?

It doesn't. numpy does require IEEE-754.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From robert.kern at gmail.com Mon Jun 25 17:11:17 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 25 Jun 2007 16:11:17 -0500
Subject: [Numpy-discussion] annoying numpy string to float conversion behaviour
In-Reply-To: References: <467B0016.6010509@gmail.com>
Message-ID: <46802F75.80708@gmail.com>

Torgil Svensson wrote:
> On 6/25/07, Warren Focke wrote:
>> False. No NaN should ever compare equal to anything, even itself. But if
>> the system is 754-compliant, it won't.
>>
>> "isnan(float(str(nan))) == True" would be nice, though.
>
> Good point. Does this also hold true for quiet NaNs?

Yes.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From torgil.svensson at gmail.com Mon Jun 25 18:39:02 2007
From: torgil.svensson at gmail.com (Torgil Svensson)
Date: Tue, 26 Jun 2007 00:39:02 +0200
Subject: [Numpy-discussion] annoying numpy string to float conversion behaviour
In-Reply-To: References: <467B0016.6010509@gmail.com> <46800D9F.6090505@gmail.com>
Message-ID:

On 6/25/07, Torgil Svensson wrote:
> On 6/25/07, Robert Kern wrote:
> > NaNs and infs are IEEE-754 concepts. Python does run on non-IEEE-754 platforms,
> > and I don't think that python-dev will want to entirely exclude them. You will
> > have to do *something* about those platforms. Possibly, they just won't support
> > NaNs and infs at all, but you'd have to make sure that the bit pattern that is a
> > NaN on IEEE-754 systems won't be misinterpreted as a NaN on the non-IEEE-754
> > systems.
>
> Sounds like some clever #ifdefs are needed here.

I took a quick glance at the Python code. On the positive side, there are
ways to detect if a platform is IEEE-754, and special values error out if
they are unpacked on non-IEEE-754 platforms (we can do the same for
strings). So far it looks straightforward. The problem is what strings to
expect; the string generation uses OS-specific routines (probably the
C library, haven't looked). I think Python should be consistent regarding
this across platforms, but I don't know if different C libraries generate
different strings for special numbers. Anyone?
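A quick, platform-dependent probe of the round trip in question (an
illustrative sketch only; the exact string produced for a NaN depends on
the C library, as discussed above):

from numpy import nan, isnan

s = str(nan)    # 'nan' on Linux; something like '1.#QNAN' on some Windows builds
try:
    print "parses back; isnan ->", isnan(float(s))
except ValueError:
    print "float() cannot parse %r on this platform" % s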
If it's true, this might go political (should the string representation
follow the system or be unified?) and the implementation should either be
in OS-common code or OS-specific code.

//Torgil

From pgmdevlist at gmail.com Mon Jun 25 19:27:53 2007
From: pgmdevlist at gmail.com (Pierre GM)
Date: Mon, 25 Jun 2007 19:27:53 -0400
Subject: [Numpy-discussion] Familiar w/ reduce ?
Message-ID: <200706251927.55123.pgmdevlist@gmail.com>

All,
I have a (m_1+1, m_2+1) matrix of 0 and 1, and I need to compute a second
matrix U by recurrence. A double loop such as

#............................
U = empty((m_1+1, m_2+1))
U[:,0] = C[:,0]
U[0,:] = C[0,:]
for i in range(1, m_1+1):
    for j in range(1, m_2+1):
        U[i,j] = C[i,j] * (U[i,j-1] + U[i-1,j])
#............................

does the trick, but I suspect it is not the simplest, most elegant
solution. Would anybody have any idea?
Thanks a lot in advance, and all my apologies for being a tad lazy...
(PS: FYI, it's for a reimplementation of the Kolmogorov-Smirnov test
statistic, where ties occur)
P.

From charlesr.harris at gmail.com Mon Jun 25 20:35:25 2007
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Mon, 25 Jun 2007 18:35:25 -0600
Subject: [Numpy-discussion] Unwanted attachments on the SciPy home page.
Message-ID:

Hi Robert,

Some porn has been uploaded to the attachments page at scipy.org

Chuck

From robert.kern at gmail.com Mon Jun 25 20:53:32 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 25 Jun 2007 19:53:32 -0500
Subject: [Numpy-discussion] Unwanted attachments on the SciPy home page.
In-Reply-To: References: Message-ID: <4680638C.5040104@gmail.com>

Charles R Harris wrote:
> Hi Robert,
>
> Some porn has been uploaded to the attachments page at scipy.org

Removed. Thank you.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From strawman at astraw.com Mon Jun 25 22:51:39 2007
From: strawman at astraw.com (Andrew Straw)
Date: Mon, 25 Jun 2007 19:51:39 -0700
Subject: [Numpy-discussion] annoying numpy string to float conversion behaviour
In-Reply-To: References: <467B0016.6010509@gmail.com> <46800D9F.6090505@gmail.com>
Message-ID: <46807F3B.4010008@astraw.com>

Torgil Svensson wrote:
> OS-specific routines (probably the C library, haven't looked). I
> think Python should be consistent regarding this across platforms but
> I don't know if different C libraries generate different strings for
> special numbers. Anyone?

Windows and Linux certainly generate different strings for special numbers
from current Python, and I guess the origin is the libc on those
platforms. But, as Python is moving away from the libc for file IO in
Python 3K, perhaps string representation of floats would be considered,
too. (In fact for all I know, perhaps it has already been considered.)
Maybe you should email the python-3k-dev list?

-Andrew

From svetosch at gmx.net Tue Jun 26 04:52:24 2007
From: svetosch at gmx.net (Sven Schreiber)
Date: Tue, 26 Jun 2007 09:52:24 +0100
Subject: [Numpy-discussion] Familiar w/ reduce ?
In-Reply-To: <200706251927.55123.pgmdevlist@gmail.com>
References: <200706251927.55123.pgmdevlist@gmail.com>
Message-ID: <4680D3C8.2070104@gmx.net>

Pierre GM schrieb:
> All,
> I have a (m_1+1, m_2+1) matrix of 0 and 1, and I need to compute a second
> matrix U by recurrence.
> A double loop such as
>
> #............................
> U = empty((m_1+1, m_2+1))
> U[:,0] = C[:,0]
> U[0,:] = C[0,:]
> for i in range(1, m_1+1):
>     for j in range(1, m_2+1):
>         U[i,j] = C[i,j] * (U[i,j-1] + U[i-1,j])
> #............................
>
> does the trick, but I suspect it is not the simplest, most elegant
> solution. Would anybody have any idea?
> Thanks a lot in advance, and all my apologies for being a tad lazy...

Going from two to one loop at least seems relatively easy; what about
something like (untested, especially all the parentheses...):

for i in range(1, m_1+1):
    U[i,1:] = C[i,1:] * vstack((U[i,:-1], U[i-1,1:])).sum(axis=0)

Actually, now that I see the structure more clearly, what about

tempU1 = U[:-1,1:]  # upper right block (w/o last row and first col)
tempU2 = U[1:,:-1]  # lower left (w/o first row and last col)
U[1:,1:] = C[1:,1:] * (tempU1 + tempU2)  # elementwise

> (PS: FYI, it's for a reimplementation of the Kolmogorov-Smirnov test
> statistic, where ties occur)

Glad that there are other applied statisticians (maybe even an
econometrician?) around working with plain old 2d matrices... Would you
mind posting the resulting code?

cheers,
sven

From dpinte at itae.be Tue Jun 26 05:06:26 2007
From: dpinte at itae.be (Didrik Pinte)
Date: Tue, 26 Jun 2007 11:06:26 +0200
Subject: [Numpy-discussion] effectively computing variograms with numpy
In-Reply-To: <7f5e8254640d81c36c4284a2379a204c@phys.ethz.ch>
References: <7f5e8254640d81c36c4284a2379a204c@phys.ethz.ch>
Message-ID: <1182848786.12557.19.camel@ddp.simpson>

On Mon, 2007-06-25 at 23:09 +0200, Hanno Klemm wrote:
> I will try and dig a bit more in the literature, maybe I'll find something.
>
> Hanno

I don't know if it can help. We started a project to convert BMELib (a
matlab library) into Python. It's still bound with Numeric, but it should
be pretty straightforward to convert it to numpy. The translation was not
finished, but the variogram methods worked pretty well and the benchmarks
between the Python and Matlab versions were very interesting.

If you want to investigate it, see http://bmelibpy.sourceforge.net
(variograms are in the statlib file).

Didrik

From klemm at phys.ethz.ch Tue Jun 26 07:19:15 2007
From: klemm at phys.ethz.ch (Hanno Klemm)
Date: Tue, 26 Jun 2007 13:19:15 +0200
Subject: [Numpy-discussion] effectively computing variograms with numpy
In-Reply-To: <1182848786.12557.19.camel@ddp.simpson>
References: <7f5e8254640d81c36c4284a2379a204c@phys.ethz.ch>
Message-ID:

Didrik,

thanks, I'll definitely have a look at this.

Hanno

Didrik Pinte said:
> On Mon, 2007-06-25 at 23:09 +0200, Hanno Klemm wrote:
> > I will try and dig a bit more in the literature, maybe I'll find something.
> >
> > Hanno
>
> I don't know if it can help. We started a project to convert BMELib (a
> matlab library) into Python. It's still bound with Numeric, but it should
> be pretty straightforward to convert it to numpy. The translation was not
> finished, but the variogram methods worked pretty well and the benchmarks
> between the Python and Matlab versions were very interesting.
> If you want to investigate it, see http://bmelibpy.sourceforge.net
> (variograms are in the statlib file).
>
> Didrik

--
Hanno Klemm
klemm at phys.ethz.ch

From torgil.svensson at gmail.com Tue Jun 26 11:20:51 2007
From: torgil.svensson at gmail.com (Torgil Svensson)
Date: Tue, 26 Jun 2007 17:20:51 +0200
Subject: [Numpy-discussion] annoying numpy string to float conversion behaviour
In-Reply-To: <46807F3B.4010008@astraw.com>
References: <467B0016.6010509@gmail.com> <46800D9F.6090505@gmail.com> <46807F3B.4010008@astraw.com>
Message-ID:

On 6/26/07, Andrew Straw wrote:
> But, as Python is moving away from the libc for file IO in
> Python 3K, perhaps string representation of floats would be considered,
> too. (In fact for all I know, perhaps it has already been considered.)
> Maybe you should email the python-3k-dev list?

Good idea. I found this mailing thread on python-dev:
http://mail.python.org/pipermail/python-dev/2007-June/073625.html

There's also one interesting bug related to this:

1732212: repr of 'nan' floats not parseable
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1732212&group_id=5470

This seems to indicate that float('nan') works on some platforms but that
str(nan) isn't always parseable back. Is this true on Linux? Could anyone
confirm this? What about float('inf') and repr(inf) on Linux?

This may also mean that I have an easy long-term way out => move to Linux
and follow up the resolution of this bug. Windows is all trouble anyway in
several areas and may not be worth the extra effort.

//Torgil

From strawman at astraw.com Tue Jun 26 12:56:01 2007
From: strawman at astraw.com (Andrew Straw)
Date: Tue, 26 Jun 2007 09:56:01 -0700
Subject: [Numpy-discussion] annoying numpy string to float conversion behaviour
In-Reply-To: References: <467B0016.6010509@gmail.com> <46800D9F.6090505@gmail.com> <46807F3B.4010008@astraw.com>
Message-ID: <46814521.7060305@astraw.com>

Torgil Svensson wrote:
> This seems to indicate that float('nan') works on some platforms but that
> str(nan) isn't always parseable back. Is this true on Linux? Could anyone
> confirm this?
What > > about float('inf') and repr(inf) on Linux? > > On Ubuntu Feisty (amd64) Linux (but this behavior has been the same for > at least the 6 years I can remember.): > > $ python > Python 2.5.1 (r251:54863, May 2 2007, 16:27:44) > [GCC 4.1.2 (Ubuntu 4.1.2-0ubuntu4)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> float('nan') > nan > >>> float('inf') > inf > >>> import numpy > >>> repr(numpy.inf) > 'inf' > >>> repr(numpy.nan) > 'nan' I should have guessed this and tried it earlier, but the odds on Windows in these cases are too low to give something in return. I think I put this in the bag of annoyances/problems that will go away with Windows and just live with it in the meantime. Thanks for this report and all other good feedback from the list! //Torgil From tom.denniston at alum.dartmouth.org Tue Jun 26 15:32:13 2007 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Tue, 26 Jun 2007 14:32:13 -0500 Subject: [Numpy-discussion] bug in lexsort with two different dtypes? Message-ID: In [1]: intArr1 = numpy.array([ 0, 1, 2,-2,-1, 5,-5,-5]) In [2]: intArr2 = numpy.array([1,1,1,2,2,2,3,4]) In [3]: charArr = numpy.array(['a','a','a','b','b','b','c','d']) Here I sort two int arrays. As expected intArr2 dominates intArr1 but the items with the same intArr2 values are sorted forwards according to intArr1 In [6]: numpy.lexsort((intArr1, intArr2)) Out[6]: array([0, 1, 2, 3, 4, 5, 6, 7]) This, however, looks like a bug to me. Here I sort an int array and a str array. As expected charArray dominates intArr1 but the items with the same charArray values are sorted *backwards* according to intArr1 In [5]: numpy.lexsort((intArr1, charArr)) Out[5]: array([2, 1, 0, 5, 4, 3, 6, 7]) Is this a bug or am I missing something? --Tom From charlesr.harris at gmail.com Tue Jun 26 20:47:53 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 26 Jun 2007 18:47:53 -0600 Subject: [Numpy-discussion] bug in lexsort with two different dtypes? In-Reply-To: References: Message-ID: On 6/26/07, Tom Denniston wrote: > > In [1]: intArr1 = numpy.array([ 0, 1, 2,-2,-1, 5,-5,-5]) > In [2]: intArr2 = numpy.array([1,1,1,2,2,2,3,4]) > In [3]: charArr = numpy.array(['a','a','a','b','b','b','c','d']) > > Here I sort two int arrays. As expected intArr2 dominates intArr1 but > the items with the same intArr2 values are sorted forwards according > to intArr1 > In [6]: numpy.lexsort((intArr1, intArr2)) > Out[6]: array([0, 1, 2, 3, 4, 5, 6, 7]) > > This, however, looks like a bug to me. Here I sort an int array and > a str array. As expected charArray dominates intArr1 but the items > with the same charArray values are sorted *backwards* according to > intArr1 > In [5]: numpy.lexsort((intArr1, charArr)) > Out[5]: array([2, 1, 0, 5, 4, 3, 6, 7]) > > Is this a bug or am I missing something? Looks like a bug. In [12]: numpy.argsort([charArr], kind='m') Out[12]: array([[2, 1, 0, 5, 4, 3, 6, 7]]) In [13]: numpy.argsort([intArr2], kind='m') Out[13]: array([[0, 1, 2, 3, 4, 5, 6, 7]]) Both of these are stable sorts, and since the elements are in order should return [[0, 1, 2, 3, 4, 5, 6, 7]]. Actually, I think they should return [0, 1, 2, 3, 4, 5, 6, 7], I'm not sure why the returned array is 2D and I suspect that is a bug also. As to why the string array sorts incorrectly, I am not sure. It could be that the sort isn't stable, there could be a stride error, or the comparison is returning wrong values. My bet is on the first being the case. Please file a ticket on this. 
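For reference, a minimal sketch of the lexsort convention at issue, with
made-up values: the last array in the sequence is the primary key, and
earlier arrays break ties (illustrative only):

import numpy
primary   = numpy.array([1, 1, 2, 2])
secondary = numpy.array([3, 1, 2, 0])
# sorts by 'primary' first, then by 'secondary' within ties
print numpy.lexsort((secondary, primary))   # [1 0 3 2]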
Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Tue Jun 26 20:53:18 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 26 Jun 2007 18:53:18 -0600 Subject: [Numpy-discussion] bug in lexsort with two different dtypes? In-Reply-To: References: Message-ID: On 6/26/07, Charles R Harris wrote: > > > > On 6/26/07, Tom Denniston wrote: > > > > In [1]: intArr1 = numpy.array([ 0, 1, 2,-2,-1, 5,-5,-5]) > > In [2]: intArr2 = numpy.array([1,1,1,2,2,2,3,4]) > > In [3]: charArr = numpy.array(['a','a','a','b','b','b','c','d']) > > > > Here I sort two int arrays. As expected intArr2 dominates intArr1 but > > the items with the same intArr2 values are sorted forwards according > > to intArr1 > > In [6]: numpy.lexsort((intArr1, intArr2)) > > Out[6]: array([0, 1, 2, 3, 4, 5, 6, 7]) > > > > This, however, looks like a bug to me. Here I sort an int array and > > a str array. As expected charArray dominates intArr1 but the items > > with the same charArray values are sorted *backwards* according to > > intArr1 > > In [5]: numpy.lexsort((intArr1, charArr)) > > Out[5]: array([2, 1, 0, 5, 4, 3, 6, 7]) > > > > Is this a bug or am I missing something? > > > Looks like a bug. > > In [12]: numpy.argsort([charArr], kind='m') > Out[12]: array([[2, 1, 0, 5, 4, 3, 6, 7]]) > > In [13]: numpy.argsort([intArr2], kind='m') > Out[13]: array([[0, 1, 2, 3, 4, 5, 6, 7]]) > > Both of these are stable sorts, and since the elements are in order should > return [[0, 1, 2, 3, 4, 5, 6, 7]]. Actually, I think they should return [0, > 1, 2, 3, 4, 5, 6, 7], I'm not sure why the returned array is 2D and I > suspect that is a bug also. As to why the string array sorts incorrectly, I > am not sure. It could be that the sort isn't stable, there could be a stride > error, or the comparison is returning wrong values. My bet is on the first > being the case. > Nevermind the 2D thingee, that was pilot error in changing lexsort to argsort, charArr should not be in a list: In [25]: numpy.argsort(charArr, kind='m', axis=0) Out[25]: array([2, 1, 0, 5, 4, 3, 6, 7]) Works just fine. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom.denniston at alum.dartmouth.org Thu Jun 28 12:03:33 2007 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Thu, 28 Jun 2007 11:03:33 -0500 Subject: [Numpy-discussion] segfault caused by incorrect Py_DECREF in ufunc Message-ID: Below is the code around line 900 for ufuncobject.c (http://svn.scipy.org/svn/numpy/trunk/numpy/core/src/ufuncobject.c) There is a decref labeled with ">>>" below that is incorrect. As per the python documentation (http://docs.python.org/api/dictObjects.html): #PyObject* PyDict_GetItem( PyObject *p, PyObject *key) # #Return value: Borrowed reference. #Return the object from dictionary p which has a key key. Return NULL if the key #key is not present, but without setting an exception. PyDict_GetItem returns a borrowed reference. Therefore this code does not own the contents to which the obj pointer points and should not decref on it. Simply removing the Py_DECREF(obj) line gets rid of the segfault. I was wondering if someone could confirm that my interpretation is correct and remove the line. I don't have access to the svn or know how to change it. Most people do not see this problem because it only affects user defined types. 
--Tom

if (userdef > 0) {
    PyObject *key, *obj;
    int ret;
    obj = NULL;
    key = PyInt_FromLong((long) userdef);
    if (key == NULL) return -1;
    obj = PyDict_GetItem(self->userloops, key);
    Py_DECREF(key);
    if (obj == NULL) {
        PyErr_SetString(PyExc_TypeError,
                        "user-defined type used in ufunc" \
                        " with no registered loops");
        return -1;
    }
    /* extract the correct function data and argtypes */
    ret = _find_matching_userloop(obj, arg_types, scalars,
                                  function, data,
                                  self->nargs, self->nin);
>>> Py_DECREF(obj);
    return ret;
}

From ollinger at gmail.com Thu Jun 28 16:09:32 2007
From: ollinger at gmail.com (John Ollinger)
Date: Thu, 28 Jun 2007 20:09:32 +0000 (UTC)
Subject: [Numpy-discussion] Unhandled floating point exception running test in numpy-1.0.3 and svn 3875
References: <20070623205450.GB2578@mentat.za.net> <20070623230650.GU4853@x2.nosyntax.com> <20070624155527.GC4585@x2.nosyntax.com> <6.2.1.2.2.20070624150441.22022f20@pop.gmail.com> <20070624210512.GA4694@x2.nosyntax.com>
Message-ID:

rex at nosyntax.com writes:

> There doesn't appear to be a problem with recent versions of the
> software. In particular, ATLAS 3.7.33 does not cause an error.
>
> Is there some reason for you to use such old software? (gcc 3.3.1 &
> kernel 2.4.21)? What platform are you building for?
>
> -rex

I think I have resolved this problem. First off, I am building on such an
old system because the applications I write are used at a number of labs
across the campus, and most of these labs are in nontechnical departments
with minimal Linux support. As a result, they usually run the version that
was current when the machine was purchased. If I want people to use my
code, I have to supply them with a tar file that contains everything they
need, including the dependencies for numpy, scipy, wxPython, vtk and itk.
I try to make a general build on the oldest version I have access to in
case I miss something. The motherboard on my desktop died last week, so I
was forced to use the older system for a couple of weeks, which is what
prompted me to update numpy from Numeric.

The floating exception is definitely not caused by the numpy or scipy
builds. The same builds run correctly on one of our newer systems (2.6.9).
I rebuilt everything on my desktop, including gcc (the new box on my desk
is now running 2.6.28 with gcc 4.1, so I had to build gcc 3.6 anyway). The
new build has the floating point exception, but in a different, later test
(three tests after the matvec test). Then I rebuilt a new version of gcc
(3.6 rather than 3.3) and built numpy again. The floating point exception
still occurred, but this time at the cdouble test, the third from last.

The fact that the build runs fine with nonoptimized lapack libraries made
me wonder about the threading support. I found an article at
http://www.ibm.com/developerworks/eserver/library/es-033104.html which
said that SUSE 2.4.21 used a backported version of the new threading
package in version 2.6. The exceptions always occur on complex operations,
so it isn't a stretch to assume that threads are in play when they occur.

John

p.s. My desktop is now running selinux, which is denying access to the
numpy shared libraries. The error message is "cannot restore segment prot
after reloc". The numpy setup script should probably set the context for
the libraries. I am going to post this on a separate thread since other
people will probably be encountering it.
For those googling this, the command is "chcon -t texrel_shlib_t "

From gzhu at peak6.com Thu Jun 28 16:24:03 2007
From: gzhu at peak6.com (Geoffrey Zhu)
Date: Thu, 28 Jun 2007 15:24:03 -0500
Subject: [Numpy-discussion] How is NumPy implemented?
References: <20070623205450.GB2578@mentat.za.net> <20070623230650.GU4853@x2.nosyntax.com> <20070624155527.GC4585@x2.nosyntax.com> <6.2.1.2.2.20070624150441.22022f20@pop.gmail.com> <20070624210512.GA4694@x2.nosyntax.com>
Message-ID: <99F81FFD0EA54E4DA8D4F1BFE272F34105099A56@ppi-mail1.chicago.peak6.net>

Hi All,

I am curious how numpy is implemented. Syntax such as x[10::-2] is
completely foreign to Python. How does numpy get Python to support it?

Thanks,
Geoffrey

PS. Ignore the disclaimer. The mail server automatically inserts that.

_______________________________________________________

The information in this email or in any file attached hereto is intended
only for the personal and confidential use of the individual or entity to
which it is addressed and may contain information that is proprietary and
confidential. If you are not the intended recipient of this message you
are hereby notified that any review, dissemination, distribution or
copying of this message is strictly prohibited. This communication is for
information purposes only and should not be regarded as an offer to sell
or as a solicitation of an offer to buy any financial product. Email
transmission cannot be guaranteed to be secure or error-free. P6070214

From eike.welk at gmx.net Thu Jun 28 16:27:35 2007
From: eike.welk at gmx.net (Eike Welk)
Date: Thu, 28 Jun 2007 22:27:35 +0200
Subject: [Numpy-discussion] [ANN] New numpy, scipy and atlas rpms for FC 5, 6 and 7 and openSUSE (with 64 bits arch support)
In-Reply-To: <467E500A.20800@ar.media.kyoto-u.ac.jp>
References: <467E500A.20800@ar.media.kyoto-u.ac.jp>
Message-ID: <200706282227.36237.eike.welk@gmx.net>

On Sunday 24 June 2007 13:05, David Cournapeau wrote:
> Hi there,
>
> After quite some pain, I finally managed to build a LAPACK +
> ATLAS rpm useful for numpy and scipy. Read the following if you
------- snip --------------------------------------------------
> and lapack. I would like to hear people's complaints. If people want
> other distributions supported by the opensuse build system (such as
> mandriva), I would like to hear it too.

I tried your repository
(http://software.opensuse.org/download/home:/ashigabou/openSUSE_10.2/)
with two machines running openSUSE 10.2:
1. AMD Athlon desktop
2. Pentium-M laptop.

The repository works with Yast (installation program). The prebuilt
packages work on both machines. They especially work with matplotlib from
the http://repos.opensuse.org/science/ repository. (I didn't try timers
and testers.)

Building Atlas succeeds on the Pentium-M (2). On the Athlon (1) the sanity
checks fail.

The resulting Atlas RPM is missing two links:
/usr/lib/atlas/sse2/libblas.so.3 -> /usr/lib/atlas/sse2/libblas.so.3.0
/usr/lib/atlas/sse2/liblapack.so.3 -> /usr/lib/atlas/sse2/liblapack.so.3.0

When I try your one-line examples, I see a nine-times speedup with Atlas.

Thank you for your efforts! You provide an easy way to install Atlas on
Suse for the first time.

Regards,
Eike.

PS.: I still think you should contribute to the
http://repos.opensuse.org/science/ repository. Then this quite
comprehensive repository would get decent Blas and Atlas too.
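The kind of one-line timing check referred to above looks roughly like
this (an illustrative sketch only; the actual examples from the
announcement are not reproduced here, and absolute timings depend heavily
on the machine and the BLAS in use):

import numpy, time
a = numpy.random.rand(1000, 1000)
t = time.time()
numpy.dot(a, a)                 # BLAS-backed matrix product
print "dot took %.3f s" % (time.time() - t)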
From matthieu.brucher at gmail.com Thu Jun 28 16:32:14 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Thu, 28 Jun 2007 22:32:14 +0200
Subject: [Numpy-discussion] How is NumPy implemented?
In-Reply-To: <99F81FFD0EA54E4DA8D4F1BFE272F34105099A56@ppi-mail1.chicago.peak6.net>
References: <20070623205450.GB2578@mentat.za.net> <20070623230650.GU4853@x2.nosyntax.com> <20070624155527.GC4585@x2.nosyntax.com> <6.2.1.2.2.20070624150441.22022f20@pop.gmail.com> <20070624210512.GA4694@x2.nosyntax.com> <99F81FFD0EA54E4DA8D4F1BFE272F34105099A56@ppi-mail1.chicago.peak6.net>
Message-ID:

2007/6/28, Geoffrey Zhu :
>
> Hi All,
>
> I am curious how numpy is implemented. Syntax such as x[10::-2] is
> completely foreign to Python.

This syntax is supported by Python; it is a slice, and it is the same
syntax as for lists.

Matthieu

From fperez.net at gmail.com Thu Jun 28 16:33:21 2007
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 28 Jun 2007 14:33:21 -0600
Subject: [Numpy-discussion] How is NumPy implemented?
In-Reply-To: <99F81FFD0EA54E4DA8D4F1BFE272F34105099A56@ppi-mail1.chicago.peak6.net>
References: <20070623205450.GB2578@mentat.za.net> <20070623230650.GU4853@x2.nosyntax.com> <20070624155527.GC4585@x2.nosyntax.com> <6.2.1.2.2.20070624150441.22022f20@pop.gmail.com> <20070624210512.GA4694@x2.nosyntax.com> <99F81FFD0EA54E4DA8D4F1BFE272F34105099A56@ppi-mail1.chicago.peak6.net>
Message-ID:

On 6/28/07, Geoffrey Zhu wrote:
> Hi All,
>
> I am curious how numpy is implemented. Syntax such as x[10::-2] is
> completely foreign to Python. How does numpy get Python to support it?

Completely foreign to Python?

In [1]: x="Geoffrey Zhu"

In [2]: x[10::-2]
Out[2]: 'h efoG'

Extended slicing has been part of the language for many years now.

Cheers,

f

From tom.denniston at alum.dartmouth.org Thu Jun 28 16:33:21 2007
From: tom.denniston at alum.dartmouth.org (Tom Denniston)
Date: Thu, 28 Jun 2007 15:33:21 -0500
Subject: [Numpy-discussion] How is NumPy implemented?
In-Reply-To: <99F81FFD0EA54E4DA8D4F1BFE272F34105099A56@ppi-mail1.chicago.peak6.net>
References: <20070623205450.GB2578@mentat.za.net> <20070623230650.GU4853@x2.nosyntax.com> <20070624155527.GC4585@x2.nosyntax.com> <6.2.1.2.2.20070624150441.22022f20@pop.gmail.com> <20070624210512.GA4694@x2.nosyntax.com> <99F81FFD0EA54E4DA8D4F1BFE272F34105099A56@ppi-mail1.chicago.peak6.net>
Message-ID:

That is normal Python syntax. It works with lists. What is slightly
unusual is the multi-dimensional slicing, as in arr[:,10:20]. However,
this is governed by the way Python translates bracket [] index calls to
the __getitem__ and __getslice__ methods. You can try it out yourself in
ipython or your favorite interpreter by writing the following class:

In [4]: class GetItemInspect(object):
   ...:     def __getitem__(self, slices):
   ...:         print slices
   ...:

In [5]: GetItemInspect()[:,:,10:]
(slice(None, None, None), slice(None, None, None), slice(10, None, None))

This allows you to play with the translation semantics. I'm sure they are
also documented somewhere, but I usually find trying many examples
helpful.

--Tom

On 6/28/07, Geoffrey Zhu wrote:
> Hi All,
>
> I am curious how numpy is implemented. Syntax such as x[10::-2] is
> completely foreign to Python. How does numpy get Python to support it?
>
> Thanks,
> Geoffrey
>
> PS. Ignore the disclaimer. The mail server automatically inserts that.
> > _______________________________________________________=0A= > =0A= > The information in this email or in any file attached=0A= > hereto is intended only for the personal and confiden-=0A= > tial use of the individual or entity to which it is=0A= > addressed and may contain information that is propri-=0A= > etary and confidential. If you are not the intended=0A= > recipient of this message you are hereby notified that=0A= > any review, dissemination, distribution or copying of=0A= > this message is strictly prohibited. This communica-=0A= > tion is for information purposes only and should not=0A= > be regarded as an offer to sell or as a solicitation=0A= > of an offer to buy any financial product. Email trans-=0A= > mission cannot be guaranteed to be secure or error-=0A= > free. P6070214 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > From strawman at astraw.com Thu Jun 28 16:56:40 2007 From: strawman at astraw.com (Andrew Straw) Date: Thu, 28 Jun 2007 13:56:40 -0700 Subject: [Numpy-discussion] Unhandled floating point exception running test in numpy-1.0.3 and svn 3875 In-Reply-To: References: <20070623205450.GB2578@mentat.za.net> <20070623230650.GU4853@x2.nosyntax.com> <20070624155527.GC4585@x2.nosyntax.com> <6.2.1.2.2.20070624150441.22022f20@pop.gmail.com> <20070624210512.GA4694@x2.nosyntax.com> Message-ID: <46842088.1040002@astraw.com> john, there was a bug that made it into debian sarge whereby a SIGFPE wasn't trapped in the appropriate place and ended up causing problems similar to what you describe. the difficulty in debugging is that you're after whatever triggers the FPE in the first place (or the bug that lets it go untrapped), but you only get notified when the kernel kills your program for not trapping it. I wrote a little page about the Debian sarge issue: http://code.astraw.com/debian_sarge_libc.html John Ollinger wrote: > rex nosyntax.com> writes: > > >> There doesn't appear to be a problem with recent versions of the >> software. In particular, ATLAS 3.7.33 does not cause an error. >> >> Is there some reason for you to use such old software? (gcc 3.3.1 & >> kernel 2.4.21)? What platform are you building for? >> >> -rex >> > > > I think have resolved this problem. First off, I building > on such an old system because the applications I write are > used at a number of labs across the campus and most of these > labs are in nontechnical departments with minimal linux support. > As a result, the usually run the version that was current > when the machine was purchased. If I want people to use my > code, I have to supply them with a tar file that contains > everything they need, including the dependencies for numpy, > scipy, wxPython, vtk and itk. I try to make a general > build on the oldest version I have access to in case I miss > something. The motherboard on my desktop died last week, > so I was forced to use the older system for a couple of weeks, > which is what prompted me to update numpy from numeric. > > The floating exception is definitely not caused by the numpy > or scipy builds. The same builds run correctly on one of our > newer systems (2.6.9). I rebuilt everything on my desktop > (including gcc. The new box on my desk is now running > 2.6.28 with gcc 4.1, so I had to build gcc 3.6 anyway). > The new build has the Floating point exception, but in a > different, later test (three tests after the matvec test). 
> Then I rebuilt a new version of gcc (3.6 rather than 3.3 and > built numpy again. The floating point exception still > occurred but this time at the cdouble test, the third from last. > The fact that the build runs find with nonoptimized lapack > libraries made me wonder about the threading support. I > found an article at > http://www.ibm.com/developerworks/eserver/library/es-033104.html > which said that SUSE 2.4.21 used a backported version of the > new threading package in version 2.6. The exceptions always > occur on complex operations, so it isn't a stretch > to assume that threads are in play when they occur. > > John > > p.s. My desktop is now running selinux, which is denying access > to the numpy shareable libries. The error message is "cannot > restore segment prot after reloc". The numpy setup script > should probably set the context for the libraries. I am going to > post this on a separate thread since other people will > probably be encountering it. For those googling this, the command > is "chcon -t texrel_shlib_t " > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion From gzhu at peak6.com Thu Jun 28 17:00:02 2007 From: gzhu at peak6.com (Geoffrey Zhu) Date: Thu, 28 Jun 2007 16:00:02 -0500 Subject: [Numpy-discussion] How is NumPy implemented? References: <20070623205450.GB2578@mentat.za.net><20070623230650.GU4853@x2.nosyntax.com><20070624155527.GC4585@x2.nosyntax.com><6.2.1.2.2.20070624150441.22022f20@pop.gmail.com><20070624210512.GA4694@x2.nosyntax.com><99F81FFD0EA54E4DA8D4F1BFE272F34105099A56@ppi-mail1.chicago.peak6.net> Message-ID: <99F81FFD0EA54E4DA8D4F1BFE272F34105099A6B@ppi-mail1.chicago.peak6.net> I see. Thanks a lot. -----Original Message----- From: numpy-discussion-bounces at scipy.org [mailto:numpy-discussion-bounces at scipy.org] On Behalf Of Tom Denniston Sent: Thursday, June 28, 2007 3:33 PM To: Discussion of Numerical Python Subject: Re: [Numpy-discussion] How is NumPy implemented? That is normal python syntax. It works with lists. What is slightly unusual is the multi-dimensional slicing as in arr[:,10:20]. However, this is governed by the way python translates bracket[] index calls to the __getitem__ and __getslice__ methods. You can try it out yourself in ipython or your favorite interpretter by writing the following class: In [4]: class GetItemInspect(object): ...: def __getitem__(self, slices): ...: print slices ...: ...: In [5]: GetItemInspect()[:,:,10:] (slice(None, None, None), slice(None, None, None), slice(10, None, None)) This allows you to play with the translation semantics. I'm sure they are also documented somewhere but I usually find trying many examples helpful. --Tom On 6/28/07, Geoffrey Zhu wrote: > Hi All, > > I am curious how numpy is implemented. Syntax such as x[10::-2] is > completely foreign to Python. How does numpy get Python to support it? > > Thanks, > Geoffrey > > PS. Ignore the disclaimer. The mail server automatically insert that. > > _______________________________________________________=0A= > =0A= > The information in this email or in any file attached=0A= hereto is > intended only for the personal and confiden-=0A= tial use of the > individual or entity to which it is=0A= addressed and may contain > information that is propri-=0A= etary and confidential. 
From mathewww at charter.net  Thu Jun 28 21:18:26 2007
From: mathewww at charter.net (Mathew Yeates)
Date: Thu, 28 Jun 2007 18:18:26 -0700
Subject: [Numpy-discussion] baffled by gfortran
Message-ID: <46845DE2.7070001@charter.net>

I have gfortran installed in my path. But when I run
python setup.py build
I get
Found executable /usr/bin/g77
gnu: no Fortran 90 compiler found
gnu: no Fortran 90 compiler found
customize GnuFCompiler
gnu: no Fortran 90 compiler found
gnu: no Fortran 90 compiler found

The output of
python setup.py config_fc --help-fcompiler
shows that gfortran is visible
Gnu95FCompiler instance properties:
  archiver        = ['/u/vento0/myeates/bin/gfortran', '-cr']
  compile_switch  = '-c'
  compiler_f77    = ['/u/vento0/myeates/bin/gfortran', '-Wall', '-ffixed-
                    form', '-fno-second-underscore', '-fPIC', '-O3', '-funroll
                    -loops', '-march=opteron', '-mmmx', '-m3dnow', '-msse2', '
                    -msse']
  compiler_f90    = ['/u/vento0/myeates/bin/gfortran', '-Wall', '-fno-second
                    -underscore', '-fPIC', '-O3', '-funroll-loops', '-

I have tried
python setup.py config_fc --fcompiler=gnu95 build
etc but I can't figure it out. Can some examples be added to the readme
file?

From david at ar.media.kyoto-u.ac.jp  Thu Jun 28 23:15:24 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Fri, 29 Jun 2007 12:15:24 +0900
Subject: [Numpy-discussion] [ANN]New numpy, scipy and atlas rpms for FC
	5, 6 and 7 and openSUSE (with 64 bits arch support)
In-Reply-To: <200706282227.36237.eike.welk@gmx.net>
References: <467E500A.20800@ar.media.kyoto-u.ac.jp>
	<200706282227.36237.eike.welk@gmx.net>
Message-ID: <4684794C.9060003@ar.media.kyoto-u.ac.jp>

Eike Welk wrote:
> On Sunday 24 June 2007 13:05, David Cournapeau wrote:
>> Hi there,
>>
>> After quite some pain, I finally managed to build a LAPACK +
>> ATLAS rpm useful for numpy and scipy. Read the following if you
>
> ------- snip --------------------------------------------------
>
>> and lapack. I would like to hear people's complaints. If people want
>> other distributions supported by the opensuse build system (such as
>> mandriva), I would like to hear it too.
>
> I tried your repository
> (http://software.opensuse.org/download/home:/ashigabou/openSUSE_10.2/)
> with two machines running openSUSE 10.2:
> 1. AMD Athlon desktop
> 2. Pentium-M laptop.
>
> The repository works with Yast (installation program).
> The prebuilt packages work on both machines. They especially work with
> matplotlib from the http://repos.opensuse.org/science/ repository. (I
> didn't try timers and testers.)
>
> Building Atlas succeeds on the Pentium-M (2). On the Athlon (1) the
> sanity checks fail.
> > The resulting Atlas RPM is missing two links:
> > /usr/lib/atlas/sse2/libblas.so.3 -> /usr/lib/atlas/sse2/libblas.so.3.0
> > /usr/lib/atlas/sse2/liblapack.so.3 -> /usr/lib/atlas/sse2/liblapack.so.3.0
The atlas rpm is still really rough around the edges. Ideally, you
should not need LD_LIBRARY_PATH, and I should update the loader (the
software which looks for shared libraries when launching a program)
cache, but if I make a mistake at this point, this has the potential to
screw up the whole machine....

Could you give me more information on the AMD failure ? Are you using
64 bits mode ?
>
> When I try your one line examples, I see a nine times speedup with
> Atlas.
>
> Thank you for your efforts! You provide an easy way to install Atlas
> on Suse for the first time.
>
> Regards,
> Eike.
>
> PS.:
> I still think you should contribute to the
> http://repos.opensuse.org/science/ repository. Then this quite
> comprehensive repository would get decent Blas and Atlas too.
This is planned, and I already took contact with them a few weeks ago,
but I didn't have time to do it properly yet.

David

From cameron.walsh.lists at gmail.com  Fri Jun 29 02:47:27 2007
From: cameron.walsh.lists at gmail.com (Cameron Walsh)
Date: Fri, 29 Jun 2007 14:47:27 +0800
Subject: [Numpy-discussion] Python equivalent of bwboundaries, bwlabel
Message-ID: <52c649fc0706282347g43871d1cw47f7c62ebc41e299@mail.gmail.com>

Hi all,

I'm trying to do object segmentation from image slices, and have found
the matlab functions bwlabel and bwboundaries. I haven't been able to
find a python equivalent in the pylab, scipy, numpy, or Image modules,
nor has google been fruitful. Could somebody point me in the right
direction to find equivalent functions if they exist?

Cameron.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nadavh at visionsense.com  Fri Jun 29 05:15:10 2007
From: nadavh at visionsense.com (Nadav Horesh)
Date: Fri, 29 Jun 2007 12:15:10 +0300
Subject: [Numpy-discussion] Python equivalent of bwboundaries, bwlabel
Message-ID: <07C6A61102C94148B8104D42DE95F7E8C8F2B1@exchange2k.envision.co.il>

There are several image processing functions (including labeling) in
the numpy.numarray.nd_image package. You can find nd_image
documentation in numarray-1.5.pdf
(http://downloads.sourceforge.net/numpy/numarray-1.5.pdf?modtime=1133880381&big_mirror=0)

  Nadav.

-----Original Message-----
From: numpy-discussion-bounces at scipy.org on behalf of Cameron Walsh
Sent: Fri 29-Jun-07 09:47
To: numpy-discussion at scipy.org; image-sig at python.org
Cc:
Subject: [Numpy-discussion] Python equivalent of bwboundaries, bwlabel

Hi all,

I'm trying to do object segmentation from image slices, and have found
the matlab functions bwlabel and bwboundaries. I haven't been able to
find a python equivalent in the pylab, scipy, numpy, or Image modules,
nor has google been fruitful. Could somebody point me in the right
direction to find equivalent functions if they exist?

Cameron.
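For readers on a newer stack: the nd_image routines Nadav mentions
survive in SciPy as scipy.ndimage, where label() plays the role of
MATLAB's bwlabel and find_objects() gives per-component bounding boxes.
A minimal sketch with an invented test array:

    # Connected-component labeling, the bwlabel-style workflow.
    import numpy as np
    from scipy import ndimage

    img = np.array([[0, 0, 1, 1],
                    [0, 0, 0, 1],
                    [1, 0, 0, 0],
                    [1, 1, 0, 0]])

    labels, nlabels = ndimage.label(img)
    print(nlabels)                        # 2 connected components
    print(labels)                         # component id per pixel
    print(ndimage.find_objects(labels))   # slices bounding each component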
From rhoogerwerf at interactivesupercomputing.com  Fri Jun 29 13:39:26 2007
From: rhoogerwerf at interactivesupercomputing.com (Ronnie Hoogerwerf)
Date: Fri, 29 Jun 2007 13:39:26 -0400
Subject: [Numpy-discussion] Accelerate your Python code with parallel
	processing
Message-ID:

I am an Application Engineer at Interactive Supercomputing and we are
rolling out a beta version of our Star-P product for Python. We are
actively looking for computationally intensive Python applications to
port to Star-P. Star-P is a parallel application development platform
that allows users to tap into the power and memory of supercomputers
from the comfort of their favorite desktop applications, in this case
Python.

Star-P is capable of both fine-grained parallel computation and
embarrassingly parallel computation. The fine-grained mode of our
Star-P Python implementation has been modeled on the Python NumPy
package - for example:

x = starp.random.rand(20000,20000)
y = starp.linalg.inv(x)

instead of

x = numpy.random.rand(20000,20000)
y = numpy.linalg.inv(x)

Where the first couple of lines are executed on the Star-P parallel
server in full C/MPI mode and the last couple of lines are executed on
the desktop using Python.

The embarrassingly parallel mode is capable of executing any Python
module, although input and output parameters are currently limited to
NumPy arrays, scalars, and strings - for example:

y = starp.ppeval(mymodule.dosomething,x)

instead of

for i in range(0,n):
    y[:,:,i] = mymodule.dosomething(x[:,i])

Where again in the former example the iterations are spread out over
the available CPUs (note the abstraction - user need not worry
regarding the number of CPUs) on the Star-P server using Python and in
the latter the looping is done serially on the client using Python.

We are looking for real Python applications that you would be willing
to share with us that we can port to Star-P. We want to use this
experience as a basis for further improvements and development of our
Python client.

Thanks,
Ronnie
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From matthieu.brucher at gmail.com  Fri Jun 29 13:46:19 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Fri, 29 Jun 2007 19:46:19 +0200
Subject: [Numpy-discussion] Accelerate your Python code with parallel
	processing
In-Reply-To:
References:
Message-ID:

Hi,

Is there a comparison with parallel libraries that can be plugged into
numpy, like MKL ? (and IPP for random numbers ?)

Matthieu

2007/6/29, Ronnie Hoogerwerf <rhoogerwerf at interactivesupercomputing.com>:
> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
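For readers without Star-P: the "embarrassingly parallel" loop Ronnie
describes can be roughly approximated with the standard library's
multiprocessing module (Python 2.6 and later). This is only a sketch of
the pattern, not Star-P's API; dosomething is an invented placeholder
for per-column work:

    # Fan independent per-column tasks out to a pool of processes.
    import numpy as np
    from multiprocessing import Pool

    def dosomething(col):
        return np.outer(col, col)   # invented example workload

    if __name__ == '__main__':
        x = np.random.rand(8, 4)
        pool = Pool()
        try:
            cols = [x[:, i] for i in range(x.shape[1])]
            results = pool.map(dosomething, cols)
        finally:
            pool.close()
            pool.join()
        y = np.dstack(results)
        print(y.shape)              # (8, 8, 4)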
From rhoogerwerf at interactivesupercomputing.com  Fri Jun 29 14:15:20 2007
From: rhoogerwerf at interactivesupercomputing.com (Ronnie Hoogerwerf)
Date: Fri, 29 Jun 2007 14:15:20 -0400
Subject: [Numpy-discussion] Accelerate your Python code with parallel
	processing
In-Reply-To:
References:
Message-ID:

Matthieu, linking a multi-threading enabled library like MKL into numpy
will provide a speedup for only a limited set of operations, and will
scale only when you limit yourself to the cores in a single SMP-like
system. Star-P on the other hand, since it is MPI based, scales on
clusters and covers a much larger set of operations that can be
performed in parallel, plus it provides a simple mechanism to do
embarrassingly parallel operations.

Ronnie

On Jun 29, 2007, at 1:46 PM, Matthieu Brucher wrote:

> Hi,
>
> Is there a comparison with parallel libraries that can be plugged
> into numpy, like MKL ? (and IPP for random numbers ?)
>
> Matthieu
>
> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From eike.welk at gmx.net  Fri Jun 29 17:39:41 2007
From: eike.welk at gmx.net (Eike Welk)
Date: Fri, 29 Jun 2007 23:39:41 +0200
Subject: [Numpy-discussion] [ANN]New numpy, scipy and atlas rpms for FC
	5, 6 and 7 and openSUSE (with 64 bits arch support)
In-Reply-To: <4684794C.9060003@ar.media.kyoto-u.ac.jp>
References: <467E500A.20800@ar.media.kyoto-u.ac.jp>
	<200706282227.36237.eike.welk@gmx.net>
	<4684794C.9060003@ar.media.kyoto-u.ac.jp>
Message-ID: <200706292339.41790.eike.welk@gmx.net>

On Friday 29 June 2007 05:15, David Cournapeau wrote:
> Could you give me more information on the AMD failure ? Are you
> using 64 bits mode ?
No, it's an old Athlon XP. I'll send you the log directory and the
output of "make test" in private mail. It's 0.5 MB compressed. What
else do you want?

The Atlas RPM also exposes a weakness in Suse's installation program:
it is not displayed in Yast, I think because there is only a source
RPM.

Regards,
Eike.

From b3i4old02 at sneakemail.com  Sat Jun 30 06:45:14 2007
From: b3i4old02 at sneakemail.com (Michael Hoffman)
Date: Sat, 30 Jun 2007 11:45:14 +0100
Subject: [Numpy-discussion] Building numpy 1.0.3-2 on Linux 2.6.8 i686
	(Debian 3.1)
Message-ID:

Hi. I have been trying to build NumPy on a 32-bit Linux box using python
setup.py build.
I received the following errors: """ creating build/temp.linux-i686-2.5/numpy/linalg compile options: '-DATLAS_WITHOUT_LAPACK -DATLAS_INFO="\"3.6.0\"" -I/usr/include -Inumpy/core/include -Ibuild/src.linux-i686-2.5/numpy/core -I/nfs/acari/mh5/include/python2.5 -I/nfs/acari/mh5/include -I/software/python-2.5/include/python2.5 -Inumpy/core/src -Inumpy/core/include -c' gcc: numpy/linalg/lapack_litemodule.c g77 build/temp.linux-i686-2.5/numpy/linalg/lapack_litemodule.o -L/usr/lib/sse2 -L/usr/lib -lf77blas -lcblas -latlas -llapack -lg2c-pic -o build/lib.linux-i686-2.5/numpy/linalg/lapack_lite.so build/temp.linux-i686-2.5/numpy/linalg/lapack_litemodule.o(.text+0x44): In function `initlapack_lite': numpy/linalg/lapack_litemodule.c:827: undefined reference to `Py_InitModule4' build/temp.linux-i686-2.5/numpy/linalg/lapack_litemodule.o(.text+0x57):numpy/linalg/lapack_litemodule.c:831: undefined reference to `PyModule_GetDict' build/temp.linux-i686-2.5/numpy/linalg/lapack_litemodule.o(.text+0x73):numpy/linalg/lapack_litemodule.c:832: undefined reference to `PyErr_NewException' build/temp.linux-i686-2.5/numpy/linalg/lapack_litemodule.o(.text+0x8f):numpy/linalg/lapack_litemodule.c:833: undefined reference to `PyDict_SetItemString' build/temp.linux-i686-2.5/numpy/linalg/lapack_litemodule.o(.text+0xa1):numpy/linalg/lapack_litemodule.c:836: undefined reference to `PyErr_Print' build/temp.linux-i686-2.5/numpy/linalg/lapack_litemodule.o(.text+0xa7):numpy/linalg/lapack_litemodule.c:836: undefined reference to `PyExc_ImportError' build/temp.linux-i686-2.5/numpy/linalg/lapack_litemodule.o(.text+0xbb):numpy/linalg/lapack_litemodule.c:836: undefined reference to `PyErr_SetString' [...] /usr/lib/libfrtbegin.a(frtbegin.o)(.text+0x32): In function `main': : undefined reference to `MAIN__' collect2: ld returned 1 exit status error: Command "g77 build/temp.linux-i686-2.5/numpy/linalg/lapack_litemodule.o -L/usr/lib/sse2 -L/usr/lib -lf77blas -lcblas -latlas -llapack -lg2c-pic -o build/lib.linux-i686-2.5/numpy/linalg/lapack_lite.so" failed with exit status 1 """ After some Googling, I found a recommendation to unset CFLAGS and LDFLAGS, but they are not set in the transcript above. Any suggestions? -- Michael Hoffman From david at ar.media.kyoto-u.ac.jp Sat Jun 30 07:13:32 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 30 Jun 2007 20:13:32 +0900 Subject: [Numpy-discussion] Building numpy 1.0.3-2 on Linux 2.6.8 i686 (Debian 3.1) In-Reply-To: References: Message-ID: <46863ADC.6010409@ar.media.kyoto-u.ac.jp> Michael Hoffman wrote: > Hi. I have been trying to build NumPy on a 32-bit Linux box using python > setup.py build. 
> I received the following errors:
>
> """
> creating build/temp.linux-i686-2.5/numpy/linalg
> compile options: '-DATLAS_WITHOUT_LAPACK -DATLAS_INFO="\"3.6.0\""
> -I/usr/include -Inumpy/core/include
> -Ibuild/src.linux-i686-2.5/numpy/core -I/nfs/acari/mh5/include/python2.5
> -I/nfs/acari/mh5/include -I/software/python-2.5/include/python2.5
> -Inumpy/core/src -Inumpy/core/include -c'
> gcc: numpy/linalg/lapack_litemodule.c
> [...]
> /usr/lib/libfrtbegin.a(frtbegin.o)(.text+0x32): In function `main':
> : undefined reference to `MAIN__'
> collect2: ld returned 1 exit status
> error: Command "g77
> build/temp.linux-i686-2.5/numpy/linalg/lapack_litemodule.o
> -L/usr/lib/sse2 -L/usr/lib -lf77blas -lcblas -latlas -llapack -lg2c-pic
> -o build/lib.linux-i686-2.5/numpy/linalg/lapack_lite.so" failed with
> exit status 1
> """
>
> After some Googling, I found a recommendation to unset CFLAGS and
> LDFLAGS, but they are not set in the transcript above.
>
> Any suggestions?
Which distribution are you building on ?

David

From openopt at ukr.net  Sat Jun 30 08:19:16 2007
From: openopt at ukr.net (dmitrey)
Date: Sat, 30 Jun 2007 15:19:16 +0300
Subject: [Numpy-discussion] Accelerate your Python code with parallel
	processing
In-Reply-To:
References:
Message-ID: <46864A44.60304@ukr.net>

I didn't find your Python prices for Star-P. Or are there any chances
for GPL/other free license for Python Star-P?
Also, it would be interesting to see comparison numerical results of
your product vs stackless python (
http://www.google.com.ua/search?q=stackless+python&ie=utf-8&oe=utf-8&aq=t&rls=com.ubuntu:en-US:official&client=firefox-a
) and/or all that parallel stuff
http://www.scipy.org/Topical_Software#head-cf472934357fda4558aafdf558a977c4d59baecb
All software developers say "our product is one of the best ones!" but
I would prefer to see benchmark results before paying anything. The
mentioned stuff is free - and maybe the difference with your Star-P on
a cluster and/or, for example, my AMD Athlon X2 is insignificant?
Regards, D.

Ronnie Hoogerwerf wrote:
> I am an Application Engineer at Interactive Supercomputing and we are
> rolling out a beta version of our Star-P product for Python.
> We are
> actively looking for computationally intensive Python applications to
> port to Star-P.
> [...]
>
> Thanks,
> Ronnie
>
> ------------------------------------------------------------------------
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>

From matthieu.brucher at gmail.com  Sat Jun 30 08:53:14 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Sat, 30 Jun 2007 14:53:14 +0200
Subject: [Numpy-discussion] Accelerate your Python code with parallel
	processing
In-Reply-To: <46864A44.60304@ukr.net>
References: <46864A44.60304@ukr.net>
Message-ID:

2007/6/30, dmitrey <openopt at ukr.net>:
>
> I didn't find your Python prices for Star-P. Or are there any chances
> for GPL/other free license for Python Star-P?

I've found them, several k€. A link would have been great (
http://www.interactivesupercomputing.com/products/starpandpython.php )

> All software developers say "our product is one of the best ones!" but
> I would prefer to see benchmark results before paying anything.

Same for me, but save for the benchmarks provided on the web page,
nothing, and no comparison with other products.

Matthieu
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From charlesr.harris at gmail.com  Sat Jun 30 11:35:06 2007
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 30 Jun 2007 09:35:06 -0600
Subject: [Numpy-discussion] bug in lexsort with two different dtypes?
In-Reply-To:
References:
Message-ID:

On 6/26/07, Charles R Harris wrote:
>
> On 6/26/07, Charles R Harris wrote:
> >
> > On 6/26/07, Tom Denniston <tom.denniston at alum.dartmouth.org> wrote:
> > >
> > > In [1]: intArr1 = numpy.array([ 0, 1, 2,-2,-1, 5,-5,-5])
> > > In [2]: intArr2 = numpy.array([1,1,1,2,2,2,3,4])
> > > In [3]: charArr = numpy.array(['a','a','a','b','b','b','c','d'])
> > >
> > > Here I sort two int arrays. As expected intArr2 dominates intArr1 but
> > > the items with the same intArr2 values are sorted forwards according
> > > to intArr1
> > > In [6]: numpy.lexsort((intArr1, intArr2))
> > > Out[6]: array([0, 1, 2, 3, 4, 5, 6, 7])
> > >
> > > This, however, looks like a bug to me. Here I sort an int array and
> > > a str array. As expected charArray dominates intArr1 but the items
> > > with the same charArray values are sorted *backwards* according to
> > > intArr1
> > > In [5]: numpy.lexsort((intArr1, charArr))
> > > Out[5]: array([2, 1, 0, 5, 4, 3, 6, 7])
> > >
> > > Is this a bug or am I missing something?
>
> It was a bug. It is fixed in svn.
>
> Chuck
>
> > Looks like a bug.
> >
> > In [12]: numpy.argsort([charArr], kind='m')
> > Out[12]: array([[2, 1, 0, 5, 4, 3, 6, 7]])
> >
> > In [13]: numpy.argsort([intArr2], kind='m')
> > Out[13]: array([[0, 1, 2, 3, 4, 5, 6, 7]])
> >
> > Both of these are stable sorts, and since the elements are in order
> > should return [[0, 1, 2, 3, 4, 5, 6, 7]]. Actually, I think they should
> > return [0, 1, 2, 3, 4, 5, 6, 7], I'm not sure why the returned array is 2D
> > and I suspect that is a bug also. As to why the string array sorts
> > incorrectly, I am not sure. It could be that the sort isn't stable, there
> > could be a stride error, or the comparison is returning wrong values. My bet
> > is on the first being the case.
>
> Nevermind the 2D thingee, that was pilot error in changing lexsort to
> argsort, charArr should not be in a list:
>
> In [25]: numpy.argsort(charArr, kind='m', axis=0)
> Out[25]: array([2, 1, 0, 5, 4, 3, 6, 7])
>
> Works just fine.
>
> Chuck
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
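For anyone skimming this thread later, the convention under discussion
is that numpy.lexsort sorts by the LAST key in the tuple first; a tiny
illustration with invented values:

    # lexsort: the last key is the primary sort key; ties fall back to
    # the earlier keys, and fully equal rows keep their original order.
    import numpy as np

    a = np.array([3, 1, 2, 1])      # secondary key
    b = np.array([0, 1, 0, 1])      # primary key
    order = np.lexsort((a, b))
    print(order)                    # [2 0 1 3]
    print(b[order])                 # [0 0 1 1]
    print(a[order])                 # [2 3 1 1]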
From mathewww at charter.net  Sat Jun 30 13:38:13 2007
From: mathewww at charter.net (Mathew Yeates)
Date: Sat, 30 Jun 2007 10:38:13 -0700
Subject: [Numpy-discussion] how do I configure with gfortran
Message-ID: <46869505.3080209@charter.net>

Does anyone know how to run
python setup.py build
and have gfortran used? It is in my path.

Mathew

From robert.kern at gmail.com  Sat Jun 30 14:51:48 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 30 Jun 2007 13:51:48 -0500
Subject: [Numpy-discussion] how do I configure with gfortran
In-Reply-To: <46869505.3080209@charter.net>
References: <46869505.3080209@charter.net>
Message-ID: <4686A644.4040402@gmail.com>

Mathew Yeates wrote:
> Does anyone know how to run
> python setup.py build
> and have gfortran used? It is in my path.

python setup.py config_fc --fcompiler=gnu95 build

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From tom.denniston at alum.dartmouth.org  Sat Jun 30 15:28:47 2007
From: tom.denniston at alum.dartmouth.org (Tom Denniston)
Date: Sat, 30 Jun 2007 14:28:47 -0500
Subject: [Numpy-discussion] bug in lexsort with two different dtypes?
In-Reply-To:
References:
Message-ID:

thanks

On 6/30/07, Charles R Harris <charlesr.harris at gmail.com> wrote:
> [...]
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion

From mathewww at charter.net  Sat Jun 30 15:39:08 2007
From: mathewww at charter.net (Mathew Yeates)
Date: Sat, 30 Jun 2007 12:39:08 -0700
Subject: [Numpy-discussion] how do I configure with gfortran
In-Reply-To: <4686A644.4040402@gmail.com>
References: <46869505.3080209@charter.net> <4686A644.4040402@gmail.com>
Message-ID: <4686B15C.4070507@charter.net>

result
Found executable /usr/bin/g77
gnu: no Fortran 90 compiler found

Something is *broken*.

Robert Kern wrote:
> Mathew Yeates wrote:
>
>> Does anyone know how to run
>> python setup.py build
>> and have gfortran used? It is in my path.
>>
> python setup.py config_fc --fcompiler=gnu95 build
>

From robert.kern at gmail.com  Sat Jun 30 16:00:32 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 30 Jun 2007 15:00:32 -0500
Subject: [Numpy-discussion] how do I configure with gfortran
In-Reply-To: <4686B15C.4070507@charter.net>
References: <46869505.3080209@charter.net> <4686A644.4040402@gmail.com>
	<4686B15C.4070507@charter.net>
Message-ID: <4686B660.6000005@gmail.com>

Mathew Yeates wrote:
> result
> Found executable /usr/bin/g77
> gnu: no Fortran 90 compiler found
>
> Something is *broken*.

Then please provide us with enough information to help you. What
platform are you on? Exactly what command did you execute? Exactly what
output did you get (please copy-and-paste or redirect the output to a
file)?

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From robert.kern at gmail.com  Sat Jun 30 16:00:51 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 30 Jun 2007 15:00:51 -0500
Subject: [Numpy-discussion] how do I configure with gfortran
In-Reply-To: <4686B15C.4070507@charter.net>
References: <46869505.3080209@charter.net> <4686A644.4040402@gmail.com>
	<4686B15C.4070507@charter.net>
Message-ID: <4686B673.1030908@gmail.com>

Mathew Yeates wrote:
> result
> Found executable /usr/bin/g77
> gnu: no Fortran 90 compiler found
>
> Something is *broken*.

Also, what version of numpy are you using?

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From chanley at stsci.edu  Sat Jun 30 16:32:01 2007
From: chanley at stsci.edu (Christopher Hanley)
Date: Sat, 30 Jun 2007 16:32:01 -0400
Subject: [Numpy-discussion] how do I configure with gfortran
In-Reply-To: <4686B673.1030908@gmail.com>
References: <46869505.3080209@charter.net> <4686A644.4040402@gmail.com>
	<4686B15C.4070507@charter.net> <4686B673.1030908@gmail.com>
Message-ID: <4686BDC1.9070704@stsci.edu>

I have found that setting my F77 environment variable to gfortran is
also sufficient.

> setenv F77 gfortran
> python setup.py install

Chris

From mathewww at charter.net  Sat Jun 30 16:33:26 2007
From: mathewww at charter.net (Mathew Yeates)
Date: Sat, 30 Jun 2007 13:33:26 -0700
Subject: [Numpy-discussion] how do I configure with gfortran
In-Reply-To: <4686B660.6000005@gmail.com>
References: <46869505.3080209@charter.net> <4686A644.4040402@gmail.com>
	<4686B15C.4070507@charter.net> <4686B660.6000005@gmail.com>
Message-ID: <4686BE16.5050801@charter.net>

Thanks for anyone's help. I've been trying to figure this out for some
time now. Stepping through distutils code is a bummer.

-bash-3.1$ uname -a
Linux mu.jpl.nasa.gov 2.6.17-5mdv #1 SMP Wed Sep 13 14:28:02 EDT 2006
x86_64 Dual-Core AMD Opteron(tm) Processor 2220 SE GNU/Linux
-bash-3.1$ gfortran
gfortran: no input files
-bash-3.1$ which gfortran
/u/vento0/myeates/bin/gfortran
-bash-3.1$ gfortran -v
Using built-in specs.
Target: x86_64-unknown-linux-gnu
Configured with: ../gcc-4.2.0/configure --with-mpfr=/u/vento0/myeates/
--with-gmp=/u/vento0/myeates/ --enable-languages=c,fortran
--prefix=/u/vento0/myeates
Thread model: posix
gcc version 4.2.0

> -bash-3.1$ python setup.py config_fc --fcompiler=gnu95 build 2>&1 |tee out

Robert Kern wrote:
> Mathew Yeates wrote:
>
>> result
>> Found executable /usr/bin/g77
>> gnu: no Fortran 90 compiler found
>>
>> Something is *broken*.
>
> Then please provide us with enough information to help you. What platform are
> you on? Exactly what command did you execute? Exactly what output did you get
> (please copy-and-paste or redirect the output to a file)?

From john.c.cartwright at comcast.net  Sat Jun 30 16:38:10 2007
From: john.c.cartwright at comcast.net (John Cartwright)
Date: Sat, 30 Jun 2007 14:38:10 -0600
Subject: [Numpy-discussion] problem compiling v.1.0.3 on a Mac
Message-ID: <4D239651-778E-4CB6-A0CD-A0E48A53B6FE@comcast.net>

Hello All,

I'm having trouble compiling on a Mac 10.4.10. It seems as if it's
not finding /usr/include:

...
from /Library/Frameworks/Python.framework/Versions/
2.4/include/python2.4/Python.h:81,
                 from _configtest.c:2:
/usr/include/stdarg.h:4:25: error: stdarg.h: No such file or directory
...

I tried setting the "CFLAG=-I/usr/include", but w/o success. Can
anyone help me?

Thanks!

--john

From robert.kern at gmail.com  Sat Jun 30 16:38:19 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 30 Jun 2007 15:38:19 -0500
Subject: [Numpy-discussion] how do I configure with gfortran
In-Reply-To: <4686BE16.5050801@charter.net>
References: <46869505.3080209@charter.net> <4686A644.4040402@gmail.com>
	<4686B15C.4070507@charter.net> <4686B660.6000005@gmail.com>
	<4686BE16.5050801@charter.net>
Message-ID: <4686BF3B.7040506@gmail.com>

Mathew Yeates wrote:
>> -bash-3.1$ python setup.py config_fc --fcompiler=gnu95 build 2>&1 |tee out

Did you forget to attach a file?

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From robert.kern at gmail.com  Sat Jun 30 16:42:50 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 30 Jun 2007 15:42:50 -0500
Subject: [Numpy-discussion] problem compiling v.1.0.3 on a Mac
In-Reply-To: <4D239651-778E-4CB6-A0CD-A0E48A53B6FE@comcast.net>
References: <4D239651-778E-4CB6-A0CD-A0E48A53B6FE@comcast.net>
Message-ID: <4686C04A.4020001@gmail.com>

John Cartwright wrote:
> Hello All,
>
> I'm having trouble compiling on a Mac 10.4.10. It seems as if it's
> not finding /usr/include:
>
> ...
> from /Library/Frameworks/Python.framework/Versions/
> 2.4/include/python2.4/Python.h:81,
>                  from _configtest.c:2:
> /usr/include/stdarg.h:4:25: error: stdarg.h: No such file or directory
> ...
>
> I tried setting the "CFLAG=-I/usr/include", but w/o success. Can
> anyone help me?

It should build out of the box. Is this the standard Python
distribution from www.python.org?

Check your environment variables. You should not have CFLAGS or
LDFLAGS; these will overwrite the flags that are necessary for building
Python extension modules.

If that doesn't work, please give us the complete output of

  $ python setup.py -v build

Thanks.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From robert.kern at gmail.com  Sat Jun 30 16:43:10 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 30 Jun 2007 15:43:10 -0500
Subject: [Numpy-discussion] how do I configure with gfortran
In-Reply-To: <4686BDC1.9070704@stsci.edu>
References: <46869505.3080209@charter.net> <4686A644.4040402@gmail.com>
	<4686B15C.4070507@charter.net> <4686B673.1030908@gmail.com>
	<4686BDC1.9070704@stsci.edu>
Message-ID: <4686C05E.4090801@gmail.com>

Christopher Hanley wrote:
> I have found that setting my F77 environment variable to gfortran is
> also sufficient.
>
> > setenv F77 gfortran
> > python setup.py install

That might work okay for building scipy and other packages that only
actually have FORTRAN-77 code; however, I suspect that Matthew is
trying to build something with Fortran 90+.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From mathewww at charter.net  Sat Jun 30 16:52:07 2007
From: mathewww at charter.net (Mathew Yeates)
Date: Sat, 30 Jun 2007 13:52:07 -0700
Subject: [Numpy-discussion] how do I configure with gfortran
In-Reply-To: <4686BF3B.7040506@gmail.com>
References: <46869505.3080209@charter.net> <4686A644.4040402@gmail.com>
	<4686B15C.4070507@charter.net> <4686B660.6000005@gmail.com>
	<4686BE16.5050801@charter.net> <4686BF3B.7040506@gmail.com>
Message-ID: <4686C277.80005@charter.net>

No.
My PC crashed. I swear I have a virus on this machine. Been that kinda
weekend.

Not particularly illuminating but here it is:

Running from numpy source directory.
F2PY Version 2_3875
blas_opt_info:
blas_mkl_info:
  libraries mkl,vml,guide not found in /u/vento0/myeates/lib
  NOT AVAILABLE

atlas_blas_threads_info:
Setting PTATLAS=ATLAS
Setting PTATLAS=ATLAS
Setting PTATLAS=ATLAS
  FOUND:
    libraries = ['ptf77blas', 'ptcblas', 'atlas']
    library_dirs = ['/u/vento0/myeates/lib']
    language = c
    include_dirs = ['/u/vento0/myeates/include']

customize GnuFCompiler
Found executable /usr/bin/g77
gnu: no Fortran 90 compiler found
gnu: no Fortran 90 compiler found
customize GnuFCompiler
gnu: no Fortran 90 compiler found
gnu: no Fortran 90 compiler found
customize GnuFCompiler using config
compiling '_configtest.c':

Robert Kern wrote:
> Mathew Yeates wrote:
>
>>> -bash-3.1$ python setup.py config_fc --fcompiler=gnu95 build 2>&1 |tee out
>>>
> Did you forget to attach a file?

From mathewww at charter.net  Sat Jun 30 16:59:30 2007
From: mathewww at charter.net (Mathew Yeates)
Date: Sat, 30 Jun 2007 13:59:30 -0700
Subject: [Numpy-discussion] how do I configure with gfortran
In-Reply-To: <4686C277.80005@charter.net>
References: <46869505.3080209@charter.net> <4686A644.4040402@gmail.com>
	<4686B15C.4070507@charter.net> <4686B660.6000005@gmail.com>
	<4686BE16.5050801@charter.net> <4686BF3B.7040506@gmail.com>
	<4686C277.80005@charter.net>
Message-ID: <4686C432.2010803@charter.net>

More info:
I tried Chris' suggestion, i.e. export F77=gfortran

And now I get

Found executable /u/vento0/myeates/bin/gfortran
gnu: no Fortran 90 compiler found
Found executable /usr/bin/g77

Mathew Yeates wrote:
> No.
> My PC crashed. I swear I have a virus on this machine. Been that kinda
> weekend.
>
> Not particularly illuminating but here it is:
> Running from numpy source directory.
> [...]

From mathewww at charter.net  Sat Jun 30 17:06:33 2007
From: mathewww at charter.net (Mathew Yeates)
Date: Sat, 30 Jun 2007 14:06:33 -0700
Subject: [Numpy-discussion] how do I configure with gfortran
In-Reply-To: <4686C432.2010803@charter.net>
References: <46869505.3080209@charter.net> <4686A644.4040402@gmail.com>
	<4686B15C.4070507@charter.net> <4686B660.6000005@gmail.com>
	<4686BE16.5050801@charter.net> <4686BF3B.7040506@gmail.com>
	<4686C277.80005@charter.net> <4686C432.2010803@charter.net>
Message-ID: <4686C5D9.1080303@charter.net>

Even more info!
I am using numpy gotten from svn on Wed or Thurs.

Mathew Yeates wrote:
> More info:
> I tried Chris' suggestion, i.e. export F77=gfortran
>
> And now I get
>
> Found executable /u/vento0/myeates/bin/gfortran
> gnu: no Fortran 90 compiler found
> Found executable /usr/bin/g77
> [...]
From robert.kern at gmail.com  Sat Jun 30 17:09:15 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 30 Jun 2007 16:09:15 -0500
Subject: [Numpy-discussion] how do I configure with gfortran
In-Reply-To: <4686C432.2010803@charter.net>
References: <46869505.3080209@charter.net> <4686A644.4040402@gmail.com>
	<4686B15C.4070507@charter.net> <4686B660.6000005@gmail.com>
	<4686BE16.5050801@charter.net> <4686BF3B.7040506@gmail.com>
	<4686C277.80005@charter.net> <4686C432.2010803@charter.net>
Message-ID: <4686C67B.5030405@gmail.com>

Mathew Yeates wrote:
> More info:
> I tried Chris' suggestion, i.e. export F77=gfortran
>
> And now I get
>
> Found executable /u/vento0/myeates/bin/gfortran
> gnu: no Fortran 90 compiler found
> Found executable /usr/bin/g77

Are you just trying to build numpy? Do you actually need a Fortran
compiler at all?

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From robert.kern at gmail.com  Sat Jun 30 17:12:31 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 30 Jun 2007 16:12:31 -0500
Subject: [Numpy-discussion] how do I configure with gfortran
In-Reply-To: <4686C5D9.1080303@charter.net>
References: <46869505.3080209@charter.net> <4686A644.4040402@gmail.com>
	<4686B15C.4070507@charter.net> <4686B660.6000005@gmail.com>
	<4686BE16.5050801@charter.net> <4686BF3B.7040506@gmail.com>
	<4686C277.80005@charter.net> <4686C432.2010803@charter.net>
	<4686C5D9.1080303@charter.net>
Message-ID: <4686C73F.9030909@gmail.com>

Mathew Yeates wrote:
> Even more info!
> I am using numpy gotten from svn on Wed or Thurs.

Try to use numpy 1.0.3. There was a large set of changes to
numpy.distutils after that release that have proven to be somewhat
fragile. If 1.0.3 works, please enter a ticket into our Trac. Provide
the information you've given here; include the complete output of

  $ python setup.py -v config_fc --fcompiler=gnu95 build

and assign the ticket to "cookedm".

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From b3i4old02 at sneakemail.com  Sat Jun 30 19:49:58 2007
From: b3i4old02 at sneakemail.com (Michael Hoffman)
Date: Sun, 01 Jul 2007 00:49:58 +0100
Subject: [Numpy-discussion] Building numpy 1.0.3-2 on Linux 2.6.8 i686
	(Debian 3.1)
In-Reply-To: <46863ADC.6010409@ar.media.kyoto-u.ac.jp>
References: <46863ADC.6010409@ar.media.kyoto-u.ac.jp>
Message-ID:

David Cournapeau wrote:
> Michael Hoffman wrote:
>> Hi. I have been trying to build NumPy on a 32-bit Linux box using python
>> setup.py build. I received the following errors:
>> [...]
>
> Which distribution are you building on ?

Which Linux distribution? Debian 3.1.
-- Michael Hoffman From mpjohnson at gmail.com Mon Jun 25 17:08:05 2007 From: mpjohnson at gmail.com (mpjohnson at gmail.com) Date: Mon, 25 Jun 2007 21:08:05 -0000 Subject: [Numpy-discussion] scipy.test() warnings, errors and failures Message-ID: <1182805685.941953.31120@q69g2000hsb.googlegroups.com> Hi, I just installed numpy (1.0.3) and scipy (0.5.2) on a Windows machine running Python 2.5.1. They both complete installation, and numpy.test() reports no errors. scipy.test() produces a huge stream (see below) of warnings, errors (19), and failures (2), however. Also, there's a deprecation warning when I import scipy. Is this behavior normal? These appear to be the latest versions of everything. Thanks, Matt Python 2.5.1 (r251:54863, Apr 18 2007, 08:51:08) [MSC v.1310 32 bit (Intel)] on win32 Type "copyright", "credits" or "license()" for more information. **************************************************************** Personal firewall software may warn about the connection IDLE makes to its subprocess using this computer's internal loopback interface. This connection is not visible on any external interface and no data is sent to or received from the Internet. **************************************************************** IDLE 1.2.1 >>> import numpy >>> import scipy Warning (from warnings module): File "C:\Python25\lib\site-packages\scipy\misc\__init__.py", line 25 test = ScipyTest().test DeprecationWarning: ScipyTest is now called NumpyTest; please update your code >>> scipy.test() Warning (from warnings module): File "C:\Python25\lib\site-packages\scipy\ndimage\__init__.py", line 40 test = ScipyTest().test DeprecationWarning: ScipyTest is now called NumpyTest; please update your code Warning (from warnings module): File "C:\Python25\lib\site-packages\scipy\sparse\__init__.py", line 9 test = ScipyTest().test DeprecationWarning: ScipyTest is now called NumpyTest; please update your code Warning (from warnings module): File "C:\Python25\lib\site-packages\scipy\io\__init__.py", line 20 test = ScipyTest().test DeprecationWarning: ScipyTest is now called NumpyTest; please update your code Warning (from warnings module): File "C:\Python25\lib\site-packages\scipy\lib\__init__.py", line 5 test = ScipyTest().test DeprecationWarning: ScipyTest is now called NumpyTest; please update your code Warning (from warnings module): File "C:\Python25\lib\site-packages\scipy\linsolve\umfpack \__init__.py", line 7 test = ScipyTest().test DeprecationWarning: ScipyTest is now called NumpyTest; please update your code Warning (from warnings module): File "C:\Python25\lib\site-packages\scipy\linsolve\__init__.py", line 13 test = ScipyTest().test DeprecationWarning: ScipyTest is now called NumpyTest; please update your code Warning (from warnings module): File "C:\Python25\lib\site-packages\scipy\interpolate\__init__.py", line 15 test = ScipyTest().test DeprecationWarning: ScipyTest is now called NumpyTest; please update your code Warning (from warnings module): File "C:\Python25\lib\site-packages\scipy\optimize\__init__.py", line 17 test = ScipyTest().test DeprecationWarning: ScipyTest is now called NumpyTest; please update your code Warning (from warnings module): File "C:\Python25\lib\site-packages\scipy\linalg\__init__.py", line 32 test = ScipyTest().test DeprecationWarning: ScipyTest is now called NumpyTest; please update your code Warning (from warnings module): File "C:\Python25\lib\site-packages\scipy\special\__init__.py", line 22 test = ScipyTest().test DeprecationWarning: ScipyTest is now called 
NumpyTest; please update your code

Warning (from warnings module):
  File "C:\Python25\lib\site-packages\scipy\stats\__init__.py", line 15
    test = ScipyTest().test
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code

[The same DeprecationWarning was raised while importing scipy.fftpack,
scipy.integrate, scipy.lib.lapack, scipy.signal, scipy.lib.blas,
scipy.maxentropy, scipy itself (scipy\__init__.py line 77),
scipy.interpolate, scipy.io, scipy.linalg, scipy.optimize and
scipy.special, and the matching "ScipyTestCase is now called
NumpyTestCase" warning was raised from numpy\testing\numpytest.py line 408,
scipy\io\tests\test_mio.py line 28 and scipy\linalg\tests\test_decomp.py
line 117.]

Found 1 tests for scipy.cluster.vq
Found 18 tests for scipy.fftpack.basic
Found 4 tests for scipy.fftpack.helper
Found 20 tests for scipy.fftpack.pseudo_diffs
Found 1 tests for scipy.integrate
Found 10 tests for scipy.integrate.quadpack
Found 3 tests for scipy.integrate.quadrature
Found 6 tests for scipy.interpolate
Found 6 tests for scipy.interpolate.fitpack
Found 4 tests for scipy.io.array_import
Found 28 tests for scipy.io.mio
Found 12 tests for scipy.io.mmio
Found 4 tests for scipy.io.recaster
Found 16 tests for scipy.lib.blas
Found 128 tests for scipy.lib.blas.fblas
Found 42 tests for scipy.lib.lapack
Found 41 tests for scipy.linalg.basic
Found 14 tests for scipy.linalg.blas
Found 53 tests for scipy.linalg.decomp
Found 128 tests for scipy.linalg.fblas
Found 4 tests for scipy.linalg.lapack
Found 7 tests for scipy.linalg.matfuncs
Warning: FAILURE importing tests for C:\Python25\Lib\site-packages\scipy\linsolve\umfpack\tests\test_umfpack.py:17: AttributeError: 'module' object has no attribute 'umfpack' (in ) [reported twice]
Found 2 tests for scipy.maxentropy
Found 397 tests for scipy.ndimage
Found 6 tests for scipy.optimize
Found 1 tests for scipy.optimize.cobyla
Found 4 tests for scipy.optimize.zeros
Found 4 tests for scipy.signal.signaltools
Found 96 tests for scipy.sparse
Found 358 tests for scipy.special.basic
Found 98 tests for scipy.stats
Found 70 tests for scipy.stats.distributions
Found 10 tests for scipy.stats.morestats
Found 0 tests for __main__
...........................................Residual: 1.05006987327e-007
...........Took 13 points.
.........
Warning (from warnings module):
  File "C:\Python25\lib\site-packages\scipy\interpolate\fitpack2.py", line 457
    warnings.warn(message)
UserWarning: The coefficients of the spline returned have been computed as the minimal norm least-squares solution of a (numerically) rank deficient system (deficiency=7). If deficiency is large, the results may be inaccurate. Deficiency may strongly depend on the value of eps.
......
Don't worry about a warning regarding the number of bytes read.
....EEEE.E.EE.E.E.EE.E.E.E.EEEEE....................................F...
[a long run of passing tests follows, with two notes along the way:]
Result may be inaccurate, approximate err = 1.82697723188e-008
...Result may be inaccurate, approximate err = 1.50259560743e-010
[and, during the scipy.sparse tests, the following block was printed three times:]
Resizing... 16 17 24
Resizing... 20 7 35
Resizing... 23 7 47
Resizing... 24 25 58
Resizing... 28 7 68
Resizing... 28 27 73
.F.
Ties preclude use of exact statistic.
..Ties preclude use of exact statistic.
......
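[All of these deprecation warnings ask for the same one-line rename. A
minimal sketch of the requested update, assuming code that, like the
scipy __init__ files above, still builds its test hook from ScipyTest:

    # Old spelling -- triggers "ScipyTest is now called NumpyTest":
    from numpy.testing import ScipyTest
    test = ScipyTest().test

    # New spelling requested by the warning:
    from numpy.testing import NumpyTest
    test = NumpyTest().test
]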
======================================================================
ERROR: check loadmat case 3dmatrix
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\Python25\Lib\site-packages\scipy\io\tests\test_mio.py", line 85, in cc
    self._check_case(name, files, expected)
  File "C:\Python25\Lib\site-packages\scipy\io\tests\test_mio.py", line 75, in _check_case
    matdict = loadmat(file_name)
  File "C:\Python25\Lib\site-packages\scipy\io\mio.py", line 96, in loadmat
    matfile_dict = MR.get_variables()
  File "C:\Python25\Lib\site-packages\scipy\io\miobase.py", line 269, in get_variables
    mdict = self.file_header()
  File "C:\Python25\Lib\site-packages\scipy\io\mio5.py", line 510, in file_header
    hdict['__header__'] = hdr['description'].strip(' \t\n\000')
AttributeError: 'numpy.ndarray' object has no attribute 'strip'

[The identical traceback was raised for the remaining loadmat cases:
cell, cellnest, complex, double, emptycell, matrix, minus, multi,
object, onechar, sparse, sparsecomplex, string, stringarray, struct,
structarr, structnest and unicode -- 19 errors in all.]
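[As an aside, the AttributeError above is easy to reproduce in
isolation: indexing a structured array by field name returns an
ndarray, which has no .strip() method. A minimal sketch -- the field
name and dtype below are made up for illustration, not taken from mio5:

    import numpy as np

    # One-element structured array standing in for the mat-file header
    # record that mio5 reads (hypothetical dtype).
    hdr = np.zeros(1, dtype=[('description', 'S116')])

    field = hdr['description']       # an ndarray ...
    print hasattr(field, 'strip')    # ... so this prints False

    # Extracting the scalar first yields a string that can be stripped:
    print repr(hdr['description'][0].strip())
]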
======================================================================
FAIL: check_y_stride (scipy.lib.blas.tests.test_fblas.test_cgemv)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\Python25\Lib\site-packages\scipy\lib\blas\tests\test_fblas.py", line 367, in check_y_stride
    assert_array_almost_equal(desired_y, y)
  File "C:\Python25\Lib\site-packages\numpy\testing\utils.py", line 230, in assert_array_almost_equal
    header='Arrays are not almost equal')
  File "C:\Python25\Lib\site-packages\numpy\testing\utils.py", line 215, in assert_array_compare
    assert cond, msg
AssertionError:
Arrays are not almost equal
(mismatch 16.6666666667%)
 x: array([-5.39065742 +5.39065742j,  1.        +1.j        ,
        3.86268878 +0.13731122j,  3.        +3.j        ,
       -9.09618664+17.09618759j,  5.        +5.j        ], dtype=complex64)
 y: array([-5.39065742 +5.39065742j,  1.        +1.j        ,
        3.86268878 +0.13731124j,  3.        +3.j        ,
       -9.09618664+17.09618568j,  5.        +5.j        ], dtype=complex64)

======================================================================
FAIL: check_expon (scipy.stats.tests.test_morestats.test_anderson)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\Python25\Lib\site-packages\scipy\stats\tests\test_morestats.py", line 57, in check_expon
    assert_array_less(A, crit[-2:])
  File "C:\Python25\Lib\site-packages\numpy\testing\utils.py", line 235, in assert_array_less
    header='Arrays are not less-ordered')
  File "C:\Python25\Lib\site-packages\numpy\testing\utils.py", line 215, in assert_array_compare
    assert cond, msg
AssertionError:
Arrays are not less-ordered
(mismatch 100.0%)
 x: array(1.9482886531944246)
 y: array([ 1.587,  1.934])

----------------------------------------------------------------------
Ran 1596 tests in 20.720s

FAILED (failures=2, errors=19)

From kchristman54 at yahoo.com Wed Jun 27 00:33:52 2007
From: kchristman54 at yahoo.com (Kevin Christman)
Date: Tue, 26 Jun 2007 21:33:52 -0700 (PDT)
Subject: [Numpy-discussion] NumPy name? homepage?
Message-ID: <502700.11212.qm@web38701.mail.mud.yahoo.com>

I'm doing a quick cleanup of NumPy on wikipedia.

1. What is the proper name, NumPy or Numerical Python? (I understand
that numarray and Numeric are older, now-deprecated packages.) If it
is NumPy, I would suggest renaming the page to that and then discussing
the other (confusing) names only in the History of NumPy section.
2. What is the proper homepage for NumPy? Is it numpy.scipy.org or
www.scipy.org/numpy, or is there another? The page
http://www.scipy.org/SciPyDotOrg references both.
3. What is the copyright status of the screenshots of SciPy/NumPy on
the scipy homepage? The SciPy page on wikipedia probably needs a
screenshot, or somebody needs to make one and upload it.

Thanks,

Kevin

From Alexander.Dietz at astro.cf.ac.uk Thu Jun 28 05:31:08 2007
From: Alexander.Dietz at astro.cf.ac.uk (Alexander Dietz)
Date: Thu, 28 Jun 2007 10:31:08 +0100
Subject: [Numpy-discussion] Problems with RandomArray
Message-ID: <9cf809a00706280231i1bc27212mcb12cc817df3c809@mail.gmail.com>

Hi,

I am not sure how to subscribe to this mailing list (there was no link
for that, just this email address), but hopefully someone will get this
email and can subscribe me, answer my question, or pass it on to
someone else. Anyway, here is my question:

I am using python with matplotlib version 0.90.1 and with numpy (as
recommended) on a Linux box. So far matplotlib and numpy are working,
but I need to use RandomArray. RandomArray is documented in the
"Numerical Python" manual, and RandomArray.py can be found in the
"numeric" directory. After adding that directory to my PYTHONPATH I can
import RandomArray and use some of its functions, but the one I need,
multivariate_normal, makes python stop responding when called; I have
to kill python from outside. Is there a way to make this function work,
or perhaps a quick workaround using the functions in numpy.random or
elsewhere? That would be really great!

Cheers
Alex
From asbach at ient.rwth-aachen.de Fri Jun 29 06:05:05 2007
From: asbach at ient.rwth-aachen.de (Mark Asbach)
Date: Fri, 29 Jun 2007 12:05:05 +0200
Subject: [Numpy-discussion] [Image-SIG] Python equivalent of bwboundaries, bwlabel
In-Reply-To: <07C6A61102C94148B8104D42DE95F7E8C8F2B1@exchange2k.envision.co.il>
References: <07C6A61102C94148B8104D42DE95F7E8C8F2B1@exchange2k.envision.co.il>
Message-ID: <211BB9FF-F8FE-497B-828F-5C8488A8EA19@ient.rwth-aachen.de>

Hi Nadav,

> There are several image processing functions (including labeling) in
> the numpy.numarray.nd_image package. You can find the nd_image
> documentation in numarray-1.5.pdf
> (http://downloads.sourceforge.net/numpy/numarray-1.5.pdf?modtime=1133880381&big_mirror=0)

>> I'm trying to do object segmentation from image slices, and have
>> found the matlab functions bwlabel and bwboundaries. I haven't been
>> able to find a python equivalent in the pylab, scipy, numpy, or
>> Image modules, nor has google been fruitful. Could somebody point
>> me in the right direction to find equivalent functions if they exist?

Since I'm one of the maintainers, I'd like to point you to OpenCV, the
Open Computer Vision Library, which has a python wrapper (including PIL
and numpy adaptors). For object segmentation, you might be better off
using a library that provides a large set of building blocks for tasks
like these.

http://sf.net/projects/opencvlibrary

Yours, Mark

--
Mark Asbach
Institut für Nachrichtentechnik, RWTH Aachen University
http://www.ient.rwth-aachen.de/cms/team/m_asbach

From ee05b077 at smail.iitm.ac.in Fri Jun 29 07:05:54 2007
From: ee05b077 at smail.iitm.ac.in (ANIRUDH VIJ)
Date: Fri, 29 Jun 2007 17:35:54 +0630
Subject: [Numpy-discussion] Python equivalent of bwboundaries, bwlabel
In-Reply-To: <07C6A61102C94148B8104D42DE95F7E8C8F2B1@exchange2k.envision.co.il>
References: <07C6A61102C94148B8104D42DE95F7E8C8F2B1@exchange2k.envision.co.il>
Message-ID: <20070629110516.M43509@smail.iitm.ac.in>

OpenCV comes with python wrappers. That should help with pretty much
any image-processing task.

On Fri, 29 Jun 2007 12:15:10 +0300, Nadav Horesh wrote
> There are several image processing functions (including labeling) in
> the numpy.numarray.nd_image package. You can find the nd_image
> documentation in numarray-1.5.pdf
> (http://downloads.sourceforge.net/numpy/numarray-1.5.pdf?modtime=1133880381&big_mirror=0)
>
> Nadav.
>
> -----Original Message-----
> From: numpy-discussion-bounces at scipy.org on behalf of Cameron Walsh
> Sent: Fri 29-Jun-07 09:47
> To: numpy-discussion at scipy.org; image-sig at python.org
> Subject: [Numpy-discussion] Python equivalent of bwboundaries, bwlabel
>
> Hi all,
>
> I'm trying to do object segmentation from image slices, and have
> found the matlab functions bwlabel and bwboundaries. I haven't been
> able to find a python equivalent in the pylab, scipy, numpy, or
> Image modules, nor has google been fruitful. Could somebody point
> me in the right direction to find equivalent functions if they exist?
>
> Cameron.
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion

---
Whatever you do will be insignificant, but it is very important that you do it.
---
From john.c.cartwright at comcast.net Sat Jun 30 18:17:24 2007
From: john.c.cartwright at comcast.net (John Cartwright)
Date: Sat, 30 Jun 2007 16:17:24 -0600
Subject: [Numpy-discussion] problem compiling v.1.0.3 on a Mac
In-Reply-To: <4686C04A.4020001@gmail.com>
References: <4D239651-778E-4CB6-A0CD-A0E48A53B6FE@comcast.net> <4686C04A.4020001@gmail.com>
Message-ID: <6D4DE5A7-2C08-429E-B3E6-3273109F1425@comcast.net>

Hi Robert,

thanks for your prompt reply and for your offer of help. This is the
framework version of python 2.4.4 from python.org. I installed a
standalone version of 2.5.1 and numpy compiled OK against it. I've
listed below the results of my attempts with 2.4.4.

Thanks again!

--john

lynx:/usr/local/src/numpy-1.0.3 jcc$ python setup.py -v build
Running from numpy source directory.
non-existing path in 'numpy/distutils': 'site.cfg'
F2PY Version 2_3844
blas_opt_info:
( library_dirs = /Library/Frameworks/Python.framework/Versions/2.4/lib:/usr/local/lib:/usr/lib )
  FOUND:
    extra_link_args = ['-Wl,-framework', '-Wl,Accelerate']
    define_macros = [('NO_ATLAS_INFO', 3)]
    extra_compile_args = ['-faltivec', '-I/System/Library/Frameworks/vecLib.framework/Headers']

lapack_opt_info:
( library_dirs = /Library/Frameworks/Python.framework/Versions/2.4/lib:/usr/local/lib:/usr/lib )
  FOUND:
    extra_link_args = ['-Wl,-framework', '-Wl,Accelerate']
    define_macros = [('NO_ATLAS_INFO', 3)]
    extra_compile_args = ['-faltivec']

running build
running config_cc
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
running config_fc
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
running build_src
building py_modules sources
building extension "numpy.core.multiarray" sources
Generating build/src.macosx-10.3-fat-2.4/numpy/core/config.h
new_compiler returns distutils.unixccompiler.UnixCCompiler
new_fcompiler returns numpy.distutils.fcompiler.nag.NAGFCompiler

[customize NAGFCompiler, AbsoftFCompiler, IbmFCompiler, GnuFCompiler
and Gnu95FCompiler: the usual per-compiler flag dumps followed. g77,
f77 and f95 could not be located; /usr/local/bin/gfortran
(LooseVersion '4.3.0', library_dirs
/usr/local/lib/gcc/powerpc-apple-darwin8.9.0/4.3.0) was picked up.]

customize Gnu95FCompiler using config
C compiler: gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3

compile options: '-I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 -Inumpy/core/src -Inumpy/core/include -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 -c'
gcc: _configtest.c
In file included from _configtest.c:2:
/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/Python.h:18:20: error: limits.h: No such file or directory
/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/Python.h:21:2: error: #error "Something's broken.  UCHAR_MAX should be defined in limits.h."
/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/Python.h:25:2: error: #error "Python's source code assumes C's unsigned char is an 8-bit type."
/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/Python.h:32:19: error: stdio.h: No such file or directory
/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/Python.h:34:5: error: #error "Python.h requires that stdio.h define NULL."
/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/Python.h:37:20: error: string.h: No such file or directory
/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/Python.h:38:19: error: errno.h: No such file or directory
/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/Python.h:39:20: error: stdlib.h: No such file or directory
/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/Python.h:41:20: error: unistd.h: No such file or directory
/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/Python.h:53:20: error: assert.h: No such file or directory
/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/pyport.h:7:20: error: stdint.h: No such file or directory
/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/pyport.h:94:76: error: math.h: No such file or directory
/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/pyport.h:101:22: error: sys/time.h: No such file or directory
/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/pyport.h:102:18: error: time.h: No such file or directory
/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/pyport.h:120:24: error: sys/select.h: No such file or directory
/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/pyport.h:159:22: error: sys/stat.h: No such file or directory
/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/pyport.h:399:21: error: termios.h: No such file or directory
/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/unicodeobject.h:55:19: error: ctype.h: No such file or directory
/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/unicodeobject.h:118:21: error: wchar.h: No such file or directory
/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/stringobject.h:10:20: error: stdarg.h: No such file or directory
_configtest.c:9: error: 'FILE' undeclared (first use in this function)
_configtest.c:9: error: (Each undeclared identifier is reported only once
_configtest.c:9: error: for each function it appears in.)
_configtest.c:9: error: 'fp' undeclared (first use in this function)
_configtest.c:17: warning: incompatible implicit declaration of built-in function 'fprintf'

[Every standard C header is reported missing, followed by a long
cascade of parse errors and warnings in pyport.h, pymem.h, object.h,
objimpl.h, unicodeobject.h, longobject.h, stringobject.h, fileobject.h,
pyerrors.h, modsupport.h, pythonrun.h, sysmodule.h and import.h that
all stem from those missing headers. The error block was printed twice,
interleaved, once per "-arch" pass, then:]

lipo: can't figure out the architecture type of: /var/tmp//ccSbLVr4.out

[and the same pair of error blocks was printed again for the next
configuration test; the output is truncated here.]
/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pyport.h:399:21: error: termios.h: No such file or directory /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pyport.h:400: warning: 'struct winsize' declared inside parameter list /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pyport.h:400: warning: its scope is only this definition or declaration, which is probably not what you want /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pyport.h:400: warning: 'struct termios' declared inside parameter list /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pyport.h:401: warning: 'struct winsize' declared inside parameter list /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pyport.h:401: warning: 'struct termios' declared inside parameter list In file included from /Library/Frameworks/Python.framework/Versions/ 2.4/include/python2.4/Python.h:74, from _configtest.c:2: /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pymem.h:50: warning: parameter names (without types) in function declaration /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pymem.h:51: error: parse error before 'size_t' In file included from /Library/Frameworks/Python.framework/Versions/ 2.4/include/python2.4/Python.h:76, from _configtest.c:2: /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ object.h:227: error: parse error before 'FILE' /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ object.h:371: error: parse error before 'FILE' In file included from /Library/Frameworks/Python.framework/Versions/ 2.4/include/python2.4/Python.h:77, from _configtest.c:2: /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ objimpl.h:97: warning: parameter names (without types) in function declaration /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ objimpl.h:98: error: parse error before 'size_t' /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ objimpl.h:292: warning: parameter names (without types) in function declaration In file included from /Library/Frameworks/Python.framework/Versions/ 2.4/include/python2.4/Python.h:81, from _configtest.c:2: /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ unicodeobject.h:55:19: error: ctype.h: No such file or directory /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ unicodeobject.h:118:21: error: wchar.h: No such file or directory In file included from /Library/Frameworks/Python.framework/Versions/ 2.4/include/python2.4/Python.h:81, from _configtest.c:2: /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ unicodeobject.h:511: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ unicodeobject.h:529: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ object.h:371: error: parse error before 'FILE' In file included from /Library/Frameworks/Python.framework/Versions/ 2.4/include/python2.4/Python.h:77, from _configtest.c:2: /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ objimpl.h:97: warning: parameter names (without types) in function declaration /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ objimpl.h:98: error: parse error before 'size_t' /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ objimpl.h:292: warning: parameter names (without 
types) in function declaration In file included from /Library/Frameworks/Python.framework/Versions/ 2.4/include/python2.4/Python.h:81, from _configtest.c:2: /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ unicodeobject.h:55:19: error: ctype.h: No such file or directory /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ unicodeobject.h:118:21: error: wchar.h: No such file or directory In file included from /Library/Frameworks/Python.framework/Versions/ 2.4/include/python2.4/Python.h:81, from _configtest.c:2: /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ unicodeobject.h:511: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ unicodeobject.h:529: error: parse error before '*' token In file included from /Library/Frameworks/Python.framework/Versions/ 2.4/include/python2.4/Python.h:84, from _configtest.c:2: /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ longobject.h:63: error: parse error before '_PyLong_NumBits' /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ longobject.h:63: warning: data definition has no type or storage class /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ longobject.h:79: error: parse error before 'size_t' /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ longobject.h:102: error: parse error before 'size_t' In file included from /Library/Frameworks/Python.framework/Versions/ 2.4/include/python2.4/Python.h:90, from _configtest.c:2: /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ stringobject.h:10:20: error: stdarg.h: No such file or directory In file included from /Library/Frameworks/Python.framework/Versions/ 2.4/include/python2.4/Python.h:90, from _configtest.c:2: /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ stringobject.h:63: error: parse error before 'va_list' In file included from /Library/Frameworks/Python.framework/Versions/ 2.4/include/python2.4/Python.h:84, from _configtest.c:2: /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ longobject.h:63: error: parse error before '_PyLong_NumBits' /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ longobject.h:63: warning: data definition has no type or storage class /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ longobject.h:79: error: parse error before 'size_t' /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ longobject.h:102: error: parse error before 'size_t' In file included from /Library/Frameworks/Python.framework/Versions/ 2.4/include/python2.4/Python.h:90, from _configtest.c:2: /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ stringobject.h:10:20: error: stdarg.h: No such file or directory In file included from /Library/Frameworks/Python.framework/Versions/ 2.4/include/python2.4/Python.h:90, from _configtest.c:2: /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ stringobject.h:63: error: parse error before 'va_list' In file included from /Library/Frameworks/Python.framework/Versions/ 2.4/include/python2.4/Python.h:101, from _configtest.c:2: /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ fileobject.h:12: error: parse error before 'FILE' /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ fileobject.h:12: warning: no semicolon at end of struct or union /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ 
fileobject.h:15: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ fileobject.h:28: error: parse error before '}' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ fileobject.h:28: warning: data definition has no type or storage class /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ fileobject.h:38: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ fileobject.h:39: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ fileobject.h:40: warning: data definition has no type or storage class /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ fileobject.h:57: error: parse error before 'FILE' /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ fileobject.h:58: error: parse error before 'Py_UniversalNewlineFread' /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ fileobject.h:58: error: parse error before 'size_t' /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ fileobject.h:58: warning: data definition has no type or storage class In file included from /Library/Frameworks/Python.framework/Versions/ 2.4/include/python2.4/Python.h:112, from _configtest.c:2: /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pyerrors.h:226: error: parse error before 'size_t' /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pyerrors.h:228: error: parse error before 'size_t' In file included from /Library/Frameworks/Python.framework/Versions/ 2.4/include/python2.4/Python.h:116, from _configtest.c:2: /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ modsupport.h:20: error: parse error before 'va_list' /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ modsupport.h:22: error: parse error before 'va_list' /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ modsupport.h:23: error: parse error before 'va_list' In file included from /Library/Frameworks/Python.framework/Versions/ 2.4/include/python2.4/Python.h:117, from _configtest.c:2: /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:32: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:33: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:35: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:36: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:40: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:41: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:42: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:43: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:44: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:45: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:46: error: parse error before '*' token 
/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:49: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:55: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:59: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:60: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:64: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:66: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:82: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:123: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:125: error: parse error before '*' token In file included from /Library/Frameworks/Python.framework/Versions/ 2.4/include/python2.4/Python.h:119, from _configtest.c:2: /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ sysmodule.h:12: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ sysmodule.h:12: error: parse error before 'FILE' /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ sysmodule.h:12: warning: data definition has no type or storage class In file included from /Library/Frameworks/Python.framework/Versions/ 2.4/include/python2.4/Python.h:121, from _configtest.c:2: /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ import.h:25: error: parse error before 'size_t' In file included from _configtest.c:2: /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ Python.h:131: error: parse error before 'size_t' _configtest.c: In function 'main': _configtest.c:9: error: 'FILE' undeclared (first use in this function) _configtest.c:9: error: (Each undeclared identifier is reported only once _configtest.c:9: error: for each function it appears in.) 
_configtest.c:9: error: 'fp' undeclared (first use in this function) _configtest.c:17: warning: incompatible implicit declaration of built- in function 'fprintf' In file included from /Library/Frameworks/Python.framework/Versions/ 2.4/include/python2.4/Python.h:101, from _configtest.c:2: /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ fileobject.h:12: error: parse error before 'FILE' /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ fileobject.h:12: warning: no semicolon at end of struct or union /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ fileobject.h:15: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ fileobject.h:28: error: parse error before '}' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ fileobject.h:28: warning: data definition has no type or storage class /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ fileobject.h:38: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ fileobject.h:39: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ fileobject.h:40: warning: data definition has no type or storage class /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ fileobject.h:57: error: parse error before 'FILE' /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ fileobject.h:58: error: parse error before 'Py_UniversalNewlineFread' /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ fileobject.h:58: error: parse error before 'size_t' /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ fileobject.h:58: warning: data definition has no type or storage class In file included from /Library/Frameworks/Python.framework/Versions/ 2.4/include/python2.4/Python.h:112, from _configtest.c:2: /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pyerrors.h:226: error: parse error before 'size_t' /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pyerrors.h:228: error: parse error before 'size_t' In file included from /Library/Frameworks/Python.framework/Versions/ 2.4/include/python2.4/Python.h:116, from _configtest.c:2: /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ modsupport.h:20: error: parse error before 'va_list' /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ modsupport.h:22: error: parse error before 'va_list' /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ modsupport.h:23: error: parse error before 'va_list' In file included from /Library/Frameworks/Python.framework/Versions/ 2.4/include/python2.4/Python.h:117, from _configtest.c:2: /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:32: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:33: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:35: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:36: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:40: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:41: error: parse error before '*' token 
/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:42: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:43: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:44: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:45: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:46: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:49: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:55: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:59: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:60: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:64: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:66: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:82: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:123: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ pythonrun.h:125: error: parse error before '*' token In file included from /Library/Frameworks/Python.framework/Versions/ 2.4/include/python2.4/Python.h:119, from _configtest.c:2: /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ sysmodule.h:12: error: parse error before '*' token /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ sysmodule.h:12: error: parse error before 'FILE' /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ sysmodule.h:12: warning: data definition has no type or storage class In file included from /Library/Frameworks/Python.framework/Versions/ 2.4/include/python2.4/Python.h:121, from _configtest.c:2: /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ import.h:25: error: parse error before 'size_t' In file included from _configtest.c:2: /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4/ Python.h:131: error: parse error before 'size_t' _configtest.c: In function 'main': _configtest.c:9: error: 'FILE' undeclared (first use in this function) _configtest.c:9: error: (Each undeclared identifier is reported only once _configtest.c:9: error: for each function it appears in.) _configtest.c:9: error: 'fp' undeclared (first use in this function) _configtest.c:17: warning: incompatible implicit declaration of built- in function 'fprintf' lipo: can't figure out the architecture type of: /var/tmp//ccSbLVr4.out failure. removing: _configtest.c _configtest.o Traceback (most recent call last): File "setup.py", line 91, in ? 
setup_package() File "setup.py", line 84, in setup_package configuration=configuration ) File "/usr/local/src/numpy-1.0.3/numpy/distutils/core.py", line 173, in setup return old_setup(**new_attr) File "/Library/Frameworks/Python.framework/Versions/2.4//lib/ python2.4/distutils/core.py", line 149, in setup dist.run_commands() File "/Library/Frameworks/Python.framework/Versions/2.4//lib/ python2.4/distutils/dist.py", line 946, in run_commands self.run_command(cmd) File "/Library/Frameworks/Python.framework/Versions/2.4//lib/ python2.4/distutils/dist.py", line 966, in run_command cmd_obj.run() File "/Library/Frameworks/Python.framework/Versions/2.4//lib/ python2.4/distutils/command/build.py", line 112, in run self.run_command(cmd_name) File "/Library/Frameworks/Python.framework/Versions/2.4//lib/ python2.4/distutils/cmd.py", line 333, in run_command self.distribution.run_command(command) File "/Library/Frameworks/Python.framework/Versions/2.4//lib/ python2.4/distutils/dist.py", line 966, in run_command cmd_obj.run() File "/usr/local/src/numpy-1.0.3/numpy/distutils/command/ build_src.py", line 123, in run self.build_sources() File "/usr/local/src/numpy-1.0.3/numpy/distutils/command/ build_src.py", line 142, in build_sources self.build_extension_sources(ext) File "/usr/local/src/numpy-1.0.3/numpy/distutils/command/ build_src.py", line 248, in build_extension_sources sources = self.generate_sources(sources, ext) File "/usr/local/src/numpy-1.0.3/numpy/distutils/command/ build_src.py", line 306, in generate_sources source = func(extension, build_dir) File "numpy/core/setup.py", line 53, in generate_config_h raise SystemError,"Failed to test configuration. "\ SystemError: Failed to test configuration. See previous error messages for more information. On Jun 30, 2007, at 2:42 PM, Robert Kern wrote: > John Cartwright wrote: >> Hello All, >> >> I'm having trouble compile on a Mac 10.4.10. It seems as if it's >> not finding /usr/include: >> >> ... >> from /Library/Frameworks/Python.framework/Versions/ >> 2.4/include/python2.4/Python.h:81, >> from _configtest.c:2: >> /usr/include/stdarg.h:4:25: error: stdarg.h: No such file or >> directory >> ... >> >> I tried setting the "CFLAG=-I/usr/include", but w/o success. Can >> anyone help me? > > It should build out-of-box. Is this the standard Python > distribution from > www.python.org? Check your environment variables. You should not > have CFLAGS or > LDFLAGS; these will overwrite the flags that are necessary for > building Python > extension modules. > > If that doesn't work, please give us the complete output of > > $ python setup.py -v build > > Thanks. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a > harmless enigma > that is made terrible by our own mad attempt to interpret it as > though it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion